xinntao / EDVR

Winning Solution in NTIRE19 Challenges on Video Restoration and Enhancement (CVPR19 Workshops) - Video Restoration with Enhanced Deformable Convolutional Networks. EDVR has been merged into BasicSR and this repo is a mirror of BasicSR.
https://github.com/xinntao/BasicSR

RuntimeError: Error(s) in loading state_dict for EDVR #123

Open NasrinR791 opened 4 years ago

NasrinR791 commented 4 years ago

Hi everyone, I have trained EDVR for the deblurring task with this log:

19-11-04 13:38:42.765 - INFO:
  name: 001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S
  use_tb_logger: True
  model: video_base
  distortion: sr
  scale: 4
  gpu_ids: [0]
  datasets:[
    train:[
      name: REDS
      mode: REDS
      interval_list: [1]
      random_reverse: False
      border_mode: False
      dataroot_GT: /media/ml/datadrive2/Nasrin/EDVR-master/dataset/REDS/train_sharp_wval.lmdb
      dataroot_LQ: /media/ml/datadrive2/Nasrin/EDVR-master/dataset/REDS/train_blur_wval.lmdb
      cache_keys: None
      N_frames: 5
      use_shuffle: True
      n_workers: 3
      batch_size: 16
      GT_size: 256
      LQ_size: 256
      use_flip: True
      use_rot: True
      color: RGB
      phase: train
      scale: 4
      data_type: lmdb
    ]
  ]
  network_G:[
    which_model_G: EDVR
    nf: 64
    nframes: 5
    groups: 8
    front_RBs: 5
    back_RBs: 10
    predeblur: True
    HR_in: True
    w_TSA: True
    scale: 4
  ]
  path:[
    pretrain_model_G: None
    strict_load: True
    resume_state: None
    root: /media/ml/datadrive2/Nasrin/EDVR-master
    experiments_root: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S
    models: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S/models
    training_state: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S/training_state
    log: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S
    val_images: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S/val_images
  ]
  train:[
    lr_G: 0.0004
    lr_scheme: CosineAnnealingLR_Restart
    beta1: 0.9
    beta2: 0.99
    niter: 600000
    warmup_iter: -1
    T_period: [150000, 150000, 150000, 150000]
    restarts: [150000, 300000, 450000]
    restart_weights: [1, 1, 1]
    eta_min: 1e-07
    pixel_criterion: cb
    pixel_weight: 1.0
    val_freq: 2000.0
    manual_seed: 0
  ]
  logger:[
    print_freq: 10
    save_checkpoint_freq: 2000.0
  ]
  is_train: True
  dist: False

When I try to test the trained model on REDS4, this error occurs:

(nasrin) ml@ml-HP-Z820-Workstation:/media/ml/datadrive2/Nasrin/EDVR-master/codes$ python test_Vid4_REDS4_with_GT.py
19-11-25 13:13:16.573 - INFO: Data: blur - /media/ml/datadrive2/Nasrin/EDVR-master/dataset/REDS4/blur
19-11-25 13:13:16.573 - INFO: Padding mode: replicate
19-11-25 13:13:16.573 - INFO: Model path: /media/ml/datadrive2/Nasrin/EDVR-master/experiments/001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S/models/latest_G.pth
19-11-25 13:13:16.573 - INFO: Save images: True
19-11-25 13:13:16.573 - INFO: Flip test: False
Traceback (most recent call last):
  File "test_Vid4_REDS4_with_GT.py", line 208, in <module>
    main()
  File "test_Vid4_REDS4_with_GT.py", line 111, in main
    model.load_state_dict(torch.load(model_path), strict=True)
  File "/home/ml/miniconda3/envs/nasrin/lib/python3.7/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EDVR:
  Missing key(s) in state_dict:
    "recon_trunk.10.conv1.weight", "recon_trunk.10.conv1.bias", "recon_trunk.10.conv2.weight", "recon_trunk.10.conv2.bias",
    "recon_trunk.11.conv1.weight", "recon_trunk.11.conv1.bias", "recon_trunk.11.conv2.weight", "recon_trunk.11.conv2.bias",
    "recon_trunk.12.conv1.weight", "recon_trunk.12.conv1.bias", "recon_trunk.12.conv2.weight", "recon_trunk.12.conv2.bias",
    "recon_trunk.13.conv1.weight", "recon_trunk.13.conv1.bias", "recon_trunk.13.conv2.weight", "recon_trunk.13.conv2.bias",
    "recon_trunk.14.conv1.weight", "recon_trunk.14.conv1.bias", "recon_trunk.14.conv2.weight", "recon_trunk.14.conv2.bias",
    "recon_trunk.15.conv1.weight", "recon_trunk.15.conv1.bias", "recon_trunk.15.conv2.weight", "recon_trunk.15.conv2.bias",
    "recon_trunk.16.conv1.weight", "recon_trunk.16.conv1.bias", "recon_trunk.16.conv2.weight", "recon_trunk.16.conv2.bias",
    "recon_trunk.17.conv1.weight", "recon_trunk.17.conv1.bias", "recon_trunk.17.conv2.weight", "recon_trunk.17.conv2.bias",
    "recon_trunk.18.conv1.weight", "recon_trunk.18.conv1.bias", "recon_trunk.18.conv2.weight", "recon_trunk.18.conv2.bias",
    "recon_trunk.19.conv1.weight", "recon_trunk.19.conv1.bias", "recon_trunk.19.conv2.weight", "recon_trunk.19.conv2.bias",
    "recon_trunk.20.conv1.weight", "recon_trunk.20.conv1.bias", "recon_trunk.20.conv2.weight", "recon_trunk.20.conv2.bias",
    "recon_trunk.21.conv1.weight", "recon_trunk.21.conv1.bias", "recon_trunk.21.conv2.weight", "recon_trunk.21.conv2.bias",
    "recon_trunk.22.conv1.weight", "recon_trunk.22.conv1.bias", "recon_trunk.22.conv2.weight", "recon_trunk.22.conv2.bias",
    "recon_trunk.23.conv1.weight", "recon_trunk.23.conv1.bias", "recon_trunk.23.conv2.weight", "recon_trunk.23.conv2.bias",
    "recon_trunk.24.conv1.weight", "recon_trunk.24.conv1.bias", "recon_trunk.24.conv2.weight", "recon_trunk.24.conv2.bias",
    "recon_trunk.25.conv1.weight", "recon_trunk.25.conv1.bias", "recon_trunk.25.conv2.weight", "recon_trunk.25.conv2.bias",
    "recon_trunk.26.conv1.weight", "recon_trunk.26.conv1.bias", "recon_trunk.26.conv2.weight", "recon_trunk.26.conv2.bias",
    "recon_trunk.27.conv1.weight", "recon_trunk.27.conv1.bias", "recon_trunk.27.conv2.weight", "recon_trunk.27.conv2.bias",
    "recon_trunk.28.conv1.weight", "recon_trunk.28.conv1.bias", "recon_trunk.28.conv2.weight", "recon_trunk.28.conv2.bias",
    "recon_trunk.29.conv1.weight", "recon_trunk.29.conv1.bias", "recon_trunk.29.conv2.weight", "recon_trunk.29.conv2.bias",
    "recon_trunk.30.conv1.weight", "recon_trunk.30.conv1.bias", "recon_trunk.30.conv2.weight", "recon_trunk.30.conv2.bias",
    "recon_trunk.31.conv1.weight", "recon_trunk.31.conv1.bias", "recon_trunk.31.conv2.weight", "recon_trunk.31.conv2.bias",
    "recon_trunk.32.conv1.weight", "recon_trunk.32.conv1.bias", "recon_trunk.32.conv2.weight", "recon_trunk.32.conv2.bias",
    "recon_trunk.33.conv1.weight", "recon_trunk.33.conv1.bias", "recon_trunk.33.conv2.weight", "recon_trunk.33.conv2.bias",
    "recon_trunk.34.conv1.weight", "recon_trunk.34.conv1.bias", "recon_trunk.34.conv2.weight", "recon_trunk.34.conv2.bias",
    "recon_trunk.35.conv1.weight", "recon_trunk.35.conv1.bias", "recon_trunk.35.conv2.weight", "recon_trunk.35.conv2.bias",
    "recon_trunk.36.conv1.weight", "recon_trunk.36.conv1.bias", "recon_trunk.36.conv2.weight", "recon_trunk.36.conv2.bias",
    "recon_trunk.37.conv1.weight", "recon_trunk.37.conv1.bias", "recon_trunk.37.conv2.weight", "recon_trunk.37.conv2.bias",
    "recon_trunk.38.conv1.weight", "recon_trunk.38.conv1.bias", "recon_trunk.38.conv2.weight", "recon_trunk.38.conv2.bias",
    "recon_trunk.39.conv1.weight", "recon_trunk.39.conv1.bias", "recon_trunk.39.conv2.weight", "recon_trunk.39.conv2.bias".

I could not figure out what is wrong. Could you please help me solve this problem? @xinntao

xinntao commented 4 years ago

From the log, you trained the model with back_RBs: 10; however, in the test configuration you probably set back_RBs to 40.
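The mismatch can be reproduced with a minimal stand-in (plain conv stacks via `nn.Sequential`, not the actual EDVR classes; `make_trunk` is a hypothetical helper used only for illustration): a checkpoint saved with 10 blocks has no weights for blocks 10..39 of a 40-block model, so a strict load reports them as missing keys.

```python
import torch
import torch.nn as nn

def make_trunk(n_blocks):
    # Stand-in for EDVR's recon_trunk: a stack of small conv layers.
    return nn.Sequential(*[nn.Conv2d(4, 4, 3, padding=1) for _ in range(n_blocks)])

trained_state = make_trunk(10).state_dict()   # checkpoint trained with back_RBs: 10
test_model = make_trunk(40)                   # test script builds back_RBs: 40

try:
    test_model.load_state_dict(trained_state, strict=True)
    error_text = ""
except RuntimeError as e:
    error_text = str(e)                       # lists every absent block-10..39 parameter

print("Missing key(s)" in error_text)
```

Building the test model with the same `back_RBs` as in the training yml makes the key sets identical and the strict load succeed.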

NasrinR791 commented 4 years ago

Dear @xinntao

You are right, thank you very much; your suggestion solved the problem. I have chosen the EDVR project as my course project, and I want to understand the method and reproduce the results in your paper (or get as close as possible). Now I want to train a full EDVR for SR step by step, following your suggestion in issue #91 (to avoid an unstable offset mean): C64B10woTSA -> C128B10woTSA -> C128B40woTSA -> C128B40wTSA. However, when I use the trained model of step one as the pretrained model of step two, I face this error again because of the different nf:

RuntimeError: Error(s) in loading state_dict for EDVR: size mismatch for conv_first.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3])....

In the logs of both steps I used strict_load: True, and I cannot figure out which parts I should change to make the dimensions match. I apologize if I am asking stupid questions; I am new to Python and understanding your code is a little hard for me. Thank you very much for your time and help.

Best regards, Nasrin


TouqeerAhmad commented 4 years ago

I think in the yml file you should change 'strict_load' to false; that should resolve this issue.

NasrinR791 commented 4 years ago

Dear @TouqeerAhmad @xinntao, thank you very much for your suggestion. I changed strict_load to false in both models (nf=64 and nf=128) to prevent the mismatch, but it did not solve the problem. Do you have any other ideas?

Traceback (most recent call last):
  File "train.py", line 310, in <module>
    main()
  File "train.py", line 130, in main
    model = create_model(opt)
  File "/media/ml/datadrive2/Nasrin/EDVR-master/codes/models/__init__.py", line 17, in create_model
    m = M(opt)
  File "/media/ml/datadrive2/Nasrin/EDVR-master/codes/models/Video_base_model.py", line 33, in __init__
    self.load()
  File "/media/ml/datadrive2/Nasrin/EDVR-master/codes/models/Video_base_model.py", line 163, in load
    self.load_network(load_path_G, self.netG, self.opt['path']['strict_load'])
  File "/media/ml/datadrive2/Nasrin/EDVR-master/codes/models/base_model.py", line 94, in load_network
    network.load_state_dict(load_net_clean, strict=False)
  File "/home/ml/miniconda3/envs/nasrin/lib/python3.7/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for EDVR:
  size mismatch for conv_first.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 3, 3, 3]).
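Note that `strict=False` only silences missing and unexpected keys; PyTorch still raises when a parameter with a matching name has a different shape, which is why the error persists. One workaround is to copy only the parameters whose shapes agree before loading. A minimal sketch (`load_matching` is a hypothetical helper, not part of this codebase):

```python
import torch
import torch.nn as nn

def load_matching(model, ckpt_state):
    """Copy only parameters whose name AND shape match the current model;
    return the names that were skipped (e.g. conv_first.weight when nf differs)."""
    own = model.state_dict()
    matched = {k: v for k, v in ckpt_state.items()
               if k in own and own[k].shape == v.shape}
    own.update(matched)
    model.load_state_dict(own)   # all shapes now agree, so this cannot fail
    return sorted(set(ckpt_state) - set(matched))

# Tiny demonstration with mismatched layer widths:
ckpt = nn.Linear(8, 2).state_dict()   # weight is (2, 8), bias is (2,)
model = nn.Linear(4, 2)               # weight is (2, 4), bias is (2,)
skipped = load_matching(model, ckpt)  # only the bias shapes agree
```

The skipped parameters (here `weight`) keep their fresh initialization, which matches the spirit of the C64B10 -> C128B10 warm start: reuse what fits, retrain the rest.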

freefeynman123 commented 4 years ago

It is probably due to the fact that the test scripts define the EDVR model with different parameters than the ones in your yaml file. For example, in test_Vid4_REDS4_with_GT.py it is defined as: model = EDVR_arch.EDVR(128, N_in, 8, 5, back_RBs, predeblur=predeblur, HR_in=HR_in). You should change these parameters to match your yaml file.
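When in doubt about which architecture a checkpoint was trained with, the settings can be read off the checkpoint itself. A small sketch (`infer_arch` is a hypothetical helper; the key names follow the EDVR state dict shown in the tracebacks above):

```python
import torch

def infer_arch(state_dict):
    # nf is the output-channel count of the first convolution.
    nf = state_dict["conv_first.weight"].shape[0]
    # back_RBs is the number of distinct residual blocks in recon_trunk.
    blocks = {key.split(".")[1] for key in state_dict
              if key.startswith("recon_trunk.")}
    return nf, len(blocks)

# Demonstration with a fake state dict shaped like the one from the log
# (nf=64, back_RBs=10); on a real run you would pass torch.load(model_path).
fake = {"conv_first.weight": torch.zeros(64, 3, 3, 3)}
for i in range(10):
    for name in ("conv1.weight", "conv1.bias", "conv2.weight", "conv2.bias"):
        fake[f"recon_trunk.{i}.{name}"] = torch.zeros(1)
nf, back_rbs = infer_arch(fake)
```

With those two numbers in hand, the hard-coded arguments in the test script can be set to match the trained model instead of guessing.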