Closed Lzshanshan closed 2 years ago
Hi, can you tell me what version of PyTorch you are using?
Thank you for reply.
The PyTorch version is 1.9.1.
I also tried running it on Ubuntu 18, where the PyTorch version is 1.6.0. The error is the same:
loading checkpoint... ./checkpoints/scannet.pt
Using cache found in /home/lzs/.cache/torch/hub/rwightman_gen-efficientnet-pytorch_master
Loading base model ()...Done.
Removing last two layers (global_pool & classifier).
Traceback (most recent call last):
File "test.py", line 97, in
Now I try to load the checkpoint directly in test.py like this:
""" checkpoint = './checkpoints/%s.pt' % args.pretrained print('loading checkpoint... {}'.format(checkpoint)) """ checkpoint = 'tf_efficientnet_b5_ap-9e82fae8.pth' model = NNET(args).to(device) model = utils.load_checkpoint(checkpoint, model) model.eval() print('loading checkpoint... / done')
And I modified load_checkpoint(fpath, model) like this:
```python
def load_checkpoint(fpath, model):
    ckpt = torch.load(fpath)
    model.load_state_dict(ckpt)
    """
    ckpt = torch.load(fpath, map_location=torch.device('cpu'))['model']
    load_dict = {}
    for k, v in ckpt.items():
        if k.startswith('module.'):
            k = k.replace('module.', '')
            load_dict[k] = v
        else:
            load_dict[k] = v
    model.load_state_dict(load_dict)
    """
    return model
```
(Note: the original paste mixed `loaddict` and `load_dict`; it should be a single name, as above.)
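For reference, the commented-out branch above is the standard way to load a checkpoint that was saved from a model wrapped in `torch.nn.DataParallel`, which prefixes every parameter name with `module.`. A minimal, self-contained sketch of that key renaming (the function name is mine, not from the repo):

```python
def strip_module_prefix(state_dict):
    """Remove the 'module.' prefix that torch.nn.DataParallel
    prepends to every parameter name when a wrapped model is saved."""
    return {
        (k[len('module.'):] if k.startswith('module.') else k): v
        for k, v in state_dict.items()
    }
```

You would then call `model.load_state_dict(strip_module_prefix(ckpt))`. This is pure dictionary manipulation, so it works on any state dict regardless of the PyTorch version.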
I get the output:
Loading base model ()...Using cache found in C:\Users\LZS/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
Done.
Removing last two layers (global_pool & classifier).
Traceback (most recent call last):
File "test.py", line 98, in
Therefore, I modified the keys of ckpt directly to match NNET. The encoder part then loads correctly, but the decoder still fails:
Loading base model ()...Using cache found in C:\Users\LZS/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
Done.
Removing last two layers (global_pool & classifier).
Traceback (most recent call last):
File "test.py", line 98, in
I've encountered the same issue. It has to do with the download script; maybe Google updated something that broke it. Basically, you are not downloading the model but an HTML document warning that the file is too large for Google's anti-virus scan. You can bypass this by downloading the model directly from https://drive.google.com/open?id=X, replacing X with the id from the download.py file. Hope this helps.
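If you suspect this has happened, you can check the first bytes of the downloaded file before handing it to torch.load. A real checkpoint starts with a pickle protocol byte (0x80) or the zip magic "PK", while Google's warning page starts with an HTML tag, which is exactly why torch reports `invalid load key, '<'`. A small heuristic sketch (my own helper, not from the repo):

```python
def looks_like_html(path, sniff=512):
    """Return True if the file starts like an HTML page rather than
    a binary PyTorch checkpoint (pickle 0x80 or zip magic 'PK')."""
    with open(path, 'rb') as f:
        head = f.read(sniff).lstrip()
    return head.startswith(b'<')
```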
@a1ex90 Thank you for the comment. It seems the download.py file is no longer working. @Earmus Please try downloading the models and images from the following link and let me know if this fixes the issue.
https://drive.google.com/drive/folders/1Ku25Am69h_HrbtcCptXn4aetjo7sB33F?usp=sharing
@a1ex90 Thank you for your suggestion!
@baegwangbin Thank you for the download link! It works!
Hello,
When I run python test.py --pretrained scannet --architecture BN on Win10, it shows this:
loading checkpoint... ./checkpoints/scannet.pt
Loading base model ()...Using cache found in C:\Users\LZS/.cache\torch\hub\rwightman_gen-efficientnet-pytorch_master
Done.
Removing last two layers (global_pool & classifier).
Traceback (most recent call last):
File "test.py", line 97, in
model = utils.load_checkpoint(checkpoint, model)
File "F:\surface_normal_uncertainty\utils\utils.py", line 57, in load_checkpoint
ckpt = torch.load(fpath, map_location=lambda storage, loc: storage)['model']
File "F:\Anaconda3\lib\site-packages\torch\serialization.py", line 608, in load
return _legacy_load(opened_file, map_location, pickle_module, pickle_load_args)
File "F:\Anaconda3\lib\site-packages\torch\serialization.py", line 777, in _legacy_load
magic_number = pickle_module.load(f, pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
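The `'<'` in `invalid load key, '<'` is literally the first byte of the file, so the "checkpoint" is almost certainly an HTML page (for example a Google Drive warning) rather than a serialized model. One quick way to classify the file by its magic bytes is sketched below (a hypothetical helper of mine, not part of the repo; PyTorch >= 1.6 saves a zip archive by default, older versions a raw pickle):

```python
def sniff_checkpoint(path):
    """Classify a file by its leading magic bytes: 'PK' for the
    zip-based format (torch >= 1.6), 0x80 for a legacy pickle,
    '<' for an HTML page that was downloaded by mistake."""
    with open(path, 'rb') as f:
        head = f.read(2)
    if head == b'PK':
        return 'zip checkpoint'
    if head[:1] == b'\x80':
        return 'legacy pickle checkpoint'
    if head[:1] == b'<':
        return 'HTML page (bad download)'
    return 'unknown'
```

If this reports an HTML page, re-download the checkpoint rather than debugging the loading code.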
I have tried downloading the model file, tf_efficientnet_b5_ap-9e82fae8.pth, directly from https://zzun.app/repo/rwightman-pytorch-image-models-python-deep-learning#releases and using it to replace the original one.
I checked the file sizes and they are all over 100 MB.
Could you please give me some ideas?