sanghyun-son / EDSR-PyTorch

PyTorch version of the paper 'Enhanced Deep Residual Networks for Single Image Super-Resolution' (CVPRW 2017)
MIT License
2.41k stars · 666 forks

checkpoint torch.Size #321

Open Wangxin25 opened 2 years ago

Wangxin25 commented 2 years ago

RuntimeError: While copying the parameter named head.0.weight, whose dimensions in the model are torch.Size([64, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([256, 3, 3, 3]). How can I solve this problem?
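The shape mismatch above can be read directly off the error: the first convolution (`head.0.weight`) maps 3 input channels to `n_feats` output channels, so its leading dimension is `n_feats`. A minimal sketch of that reasoning (`infer_n_feats` is a hypothetical helper; the shapes are copied from the error message):

```python
def infer_n_feats(state_shapes):
    # The first conv maps 3 RGB channels to n_feats output channels,
    # so n_feats is the leading dimension of head.0.weight.
    return state_shapes['head.0.weight'][0]

# Shapes as reported in the RuntimeError above.
ckpt_shapes = {'head.0.weight': (256, 3, 3, 3)}   # pre-trained checkpoint
model_shapes = {'head.0.weight': (64, 3, 3, 3)}   # model as constructed

print(infer_n_feats(ckpt_shapes))   # 256 -> checkpoint was trained with n_feats=256
print(infer_n_feats(model_shapes))  # 64  -> model was built with n_feats=64
```

The checkpoint was saved from a model with 256 feature maps, while the model being loaded into was built with 64, hence the copy fails.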

qsibmini-khu commented 2 years ago

I have the same error. Please let me know if you solve it. Thank you.

haikunzhang95 commented 2 years ago

@Wangxin25 Hi, did you solve the problem? I have the same error. Thank you!

qsibmini-khu commented 2 years ago

> @Wangxin25 Hi, did you solve the problem? I have the same error. Thank you!

Currently, the number of feature maps is set to 64, which only matches the baseline model. In src/option.py it is defined as below.

parser.add_argument('--n_feats', type=int, default=64, help='number of feature maps')

You can change the default to 256 in option.py, or pass --n_feats 256 on the command line in demo.sh.
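Either route ends up at the same place: the parsed `n_feats` has to match the checkpoint. A minimal sketch of how the argparse default interacts with a command-line override (the option definition mirrors the line from src/option.py above; the rest is illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
# Same option definition as in src/option.py.
parser.add_argument('--n_feats', type=int, default=64,
                    help='number of feature maps')

# Without the flag, the baseline default (64) is used ...
baseline = parser.parse_args([])
# ... while passing --n_feats 256 (as in demo.sh) matches the
# pre-trained EDSR checkpoint.
edsr = parser.parse_args(['--n_feats', '256'])

print(baseline.n_feats, edsr.n_feats)  # 64 256
```

Editing the `default=` in option.py changes `baseline.n_feats` for every run, whereas the demo.sh flag overrides it per invocation.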

haikunzhang95 commented 2 years ago

@qsibmini-khu Wow, that solved the error. Thank you very much.