KupynOrest / DeblurGAN

Image Deblurring using Generative Adversarial Networks

RuntimeError: Error(s) in loading state_dict for ResnetGenerator: Unexpected key(s) #168

Open · Han-Huaqiao opened this issue 4 years ago

Han-Huaqiao commented 4 years ago

Hello! I have a problem when I run 'python test.py --dataroot /.path_to_your_data --model test --dataset_mode single --learn_residual'. It fails with:

    Traceback (most recent call last):
      File "test.py", line 21, in <module>
        model = create_model(opt)
      File "/home/hanhuaqiao/image/rt/project/Deblurring/DeblurGAN/models/models.py", line 8, in create_model
        model = TestModel( opt )
      File "/home/hanhuaqiao/image/rt/project/Deblurring/DeblurGAN/models/test_model.py", line 21, in __init__
        self.load_network(self.netG, 'G', which_epoch)
      File "/home/hanhuaqiao/image/rt/project/Deblurring/DeblurGAN/models/base_model.py", line 58, in load_network
        network.load_state_dict(torch.load(save_path))
      File "/home/hanhuaqiao/miniconda3/envs/DeblurGAN/lib/python3.5/site-packages/torch/nn/modules/module.py", line 719, in load_state_dict
        self.__class__.__name__, "\n\t".join(error_msgs)))
    RuntimeError: Error(s) in loading state_dict for ResnetGenerator:
        Unexpected key(s) in state_dict: "model.10.conv_block.6.weight", "model.10.conv_block.6.bias", "model.10.conv_block.7.running_mean", "model.10.conv_block.7.running_var",
        "model.11.conv_block.6.weight", "model.11.conv_block.6.bias", "model.11.conv_block.7.running_mean", "model.11.conv_block.7.running_var",
        "model.12.conv_block.6.weight", "model.12.conv_block.6.bias", "model.12.conv_block.7.running_mean", "model.12.conv_block.7.running_var",
        "model.13.conv_block.6.weight", "model.13.conv_block.6.bias", "model.13.conv_block.7.running_mean", "model.13.conv_block.7.running_var",
        "model.14.conv_block.6.weight", "model.14.conv_block.6.bias", "model.14.conv_block.7.running_mean", "model.14.conv_block.7.running_var",
        "model.15.conv_block.6.weight", "model.15.conv_block.6.bias", "model.15.conv_block.7.running_mean", "model.15.conv_block.7.running_var",
        "model.16.conv_block.6.weight", "model.16.conv_block.6.bias", "model.16.conv_block.7.running_mean", "model.16.conv_block.7.running_var",
        "model.17.conv_block.6.weight", "model.17.conv_block.6.bias", "model.17.conv_block.7.running_mean", "model.17.conv_block.7.running_var",
        "model.18.conv_block.6.weight", "model.18.conv_block.6.bias", "model.18.conv_block.7.running_mean", "model.18.conv_block.7.running_var".

Help me, thanks!
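One way to see exactly what does not line up is a rough diagnostic sketch like the one below (not part of the repo, standard PyTorch calls only; netG and the checkpoint path are placeholders). The extra model.N.conv_block.6 / conv_block.7 entries suggest the checkpoint was saved from a generator built with different options (for example dropout inside the ResnetBlocks, or an older InstanceNorm that stored running statistics), so the layer indices and buffers no longer match the model that test.py builds.

    import torch

    # Rough diagnostic (not from the repo). "netG" is whatever generator module
    # you built; the default path below is a placeholder for the weights file
    # that load_network() tries to read.
    def compare_keys(netG, checkpoint_path='checkpoints/experiment_name/G.pth'):
        state = torch.load(checkpoint_path, map_location='cpu')
        model_keys = set(netG.state_dict().keys())
        ckpt_keys = set(state.keys())
        print('unexpected (checkpoint only):', sorted(ckpt_keys - model_keys))
        print('missing (model only):', sorted(model_keys - ckpt_keys))
        # strict=False (PyTorch >= 0.4) skips the mismatched tensors so the rest
        # loads; the skipped layers keep their random initialisation, so this is
        # only a diagnostic, not a fix.
        netG.load_state_dict(state, strict=False)

If the only differences are these shifted conv_block entries, the weights simply come from a differently configured generator; the real fix is the one in the replies below (use the G.pth shipped in checkpoints/experiment_name/, or build the generator with the options the checkpoint was trained with).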

LoveHeartFly commented 4 years ago

I can run it successfully! My environment is Ubuntu 18.04 + CUDA 10.0 + cuDNN 7 + PyTorch 1.2.0 + Python 3.7 + visdom + pyssim. You need to change the files base_dataset.py, custom_dataset_data_loader.py and test_model.py; here is a link that shows how to modify them: https://github.com/KupynOrest/DeblurGAN/pull/138/files . If you get "Failed to establish a new connection: [Errno 111] Connection refused", start the visdom server with "python -m visdom.server" and open the localhost page; that resolves it. Note: you must use the author's default weights G.pth and D.pth (the G and D in /checkpoints/experiment_name/ when you download DeblurGAN-master), not the D and G weights downloaded from Google Drive (those were made for running with PyTorch 0.2.0 + CUDA 8.0 + Python 3.6). Then it will be OK. The directory ./path_to_your_data is your ./testimage, so the directory structure is:

  • ..........
  • train.py
  • test.py
  • /testimage
        /blurred
            blurredimage1.png
            ..........
        /sharp
            sharpimage1.png
            ..........

To test a model, put your blurry and sharp images into the testimage folder and run:

    python test.py --dataroot ./testimage --model test --dataset_mode single --learn_residual --resize_or_crop scale_width

You will see a results folder generated.
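As a side note, here is a small sanity-check sketch (mine, not from the repo; the extensions and the recursive-walk assumption should be checked against data/image_folder.py). With --dataset_mode single the loader appears to collect every image it finds anywhere under --dataroot and deblur each one independently, so images in a blurred and a sharp subfolder would each be processed on their own.

    import os

    # Hypothetical helper (not part of DeblurGAN): list the image files a
    # recursive walk of --dataroot would pick up. Every file printed here
    # would become one test input in --dataset_mode single (assumption).
    IMG_EXTENSIONS = ('.jpg', '.jpeg', '.png', '.bmp')  # assumed extensions

    def list_test_images(dataroot='./testimage'):
        found = []
        for root, _, files in os.walk(dataroot):
            for name in files:
                if name.lower().endswith(IMG_EXTENSIONS):
                    found.append(os.path.join(root, name))
        return sorted(found)

    if __name__ == '__main__':
        for path in list_test_images():
            print(path)

If you only want the blurred images deblurred, pointing --dataroot at the blurred folder alone should avoid running the sharp images through the generator as well.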

YXKN commented 4 years ago

@LoveHeartFly I created the folders as you said, but it looks like the program processes the pictures in the two folders separately, and the images don't look deblurred. Here is my folder:

/testPic
    /blurred
        blurredYolo.jpg
    /sharp
        sharpYolo.jpg

And my result: [result screenshot]

Could you please help me.

backsmar commented 4 years ago

“Note: you must use the author's default weights G.pth and D.pth (the G and D in /checkpoints/experiment_name/ when you download DeblurGAN-master), not the D and G weights downloaded from Google Drive.” That is the right answer!!

LIMENG0307 commented 4 years ago

I met the same problem; maybe my environment is causing it.

fyb99 commented 3 years ago

@LoveHeartFly The first time I ran the command you gave, the corresponding results appeared, but when I ran it again, no results folder was produced. Also, after running, the sharp and blurred images each generate a real_A and a fake_B image. Which of these is the result of deblurring the blurred image?