hezhangsprinter / DCPDN

Densely Connected Pyramid Dehazing Network (CVPR'2018)

demo error #6

Open CDElite opened 6 years ago

CDElite commented 6 years ago

Hi, I tried the demo, i.e. python demo.py --dataroot ./facades/nat_new4 --valDataroot ./facades/nat_new4 --netG ./demo_model/netG_epoch_8.pth
but it errors out:

Random Seed: 3661
/usr/local/lib/python2.7/dist-packages/torchvision-0.2.1-py2.7.egg/torchvision/transforms/transforms.py:191: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
Traceback (most recent call last):
  File "demo.py", line 128, in <module>
    netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 537, in __init__
    self.tran_est=G(input_nc=3,output_nc=3, nf=64)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 88, in __init__
    layer2 = blockUNet(nf, nf*2, name, transposed=False, bn=True, relu=False, dropout=False)
  File "/home/cdelite/DCPDN/DCPDN/dehaze22.py", line 56, in blockUNet
    block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))
  File "/usr/local/lib/python2.7/dist-packages/torch-0.4.0-py2.7-linux-x86_64.egg/torch/nn/modules/module.py", line 169, in add_module
    raise KeyError("module name can't contain \".\"")
KeyError: 'module name can\'t contain "."'

Could you tell me what causes this?

hezhangsprinter commented 6 years ago

Hi, please install PyTorch 0.3.1: https://pytorch.org/previous-versions/

mod1998 commented 6 years ago

Same here, I get this error too.

mod1998 commented 6 years ago

Have you solved the problem?

hezhangsprinter commented 6 years ago

Some people suggest the following code. It may address the issue.

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)

model_dict = netG.state_dict()
tmpname = {}
i = 0
for k, v in model_dict.items():
tmpname[i] = k
i = i + 1
i = 0
if opt.netG != '':
state_dict = torch.load(opt.netG)
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = tmpname[i] # update key
i = i + 1
new_state_dict[name] = v
netG.load_state_dict(new_state_dict)
print(netG)

Aleberello commented 6 years ago

Hi, if someone else needs this piece of code, here it is with the correct Python indentation. The issue is related to the new PyTorch version (0.4.0), which changed the rules for module names in nn.Module; the models in torchvision.models were migrated for the same reason, which is why the demo doesn't work. Source: https://pytorch.org/2018/04/22/0_4_0-migration-guide.html

model_dict = netG.state_dict()
# record the parameter names of the freshly built model, in order
tmpname = {}
i = 0
for k, v in model_dict.items():
    tmpname[i] = k
    i = i + 1

i = 0
if opt.netG != '':
    state_dict = torch.load(opt.netG)
    from collections import OrderedDict
    new_state_dict = OrderedDict()
    # remap the i-th checkpoint tensor onto the i-th parameter name of the current model
    for k, v in state_dict.items():
        name = tmpname[i]  # update key
        i = i + 1
        new_state_dict[name] = v

    netG.load_state_dict(new_state_dict)
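
For anyone curious, this is the restriction the migration guide refers to: since PyTorch 0.4, nn.Module.add_module() rejects names that contain a dot, because dots are reserved as the separator in state_dict keys, and that is exactly the naming pattern blockUNet in dehaze22.py uses. A minimal reproduction (assuming PyTorch 0.4 or newer):

import torch.nn as nn

block = nn.Sequential()
name = 'layer2'
# the same kind of call blockUNet makes; on PyTorch >= 0.4 this raises
# KeyError: module name can't contain "."
block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))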

Thanks @hezhangsprinter

hezhangsprinter commented 6 years ago

Thanks!! @Aleberello

monkiq commented 6 years ago

Thank you @Aleberello and @hezhangsprinter

SherlockSunset commented 6 years ago

Hi, did you manage to solve this problem? I still get the same error after following the method the author gave.

Gavin666Github commented 6 years ago

The error starts at netG = net.dehaze(inputChannelSize, outputChannelSize, ngf), so putting the code above after that line doesn't help.

Tangyuny commented 5 years ago

Tested it myself: adding the code solves it.

noobgrow commented 5 years ago

Where is that code supposed to go?? As Gavin666Github said, adding it after that line doesn't help. For now I can only replace every '.' with '_', which also makes it run, but what did the author mean? Can anyone explain?

QingyuGuo commented 5 years ago

I also changed the '.' to '_' in dehaze22.py and it runs, but loading the keys fails: some keys are not loaded and others are loaded incorrectly. Please help. Here is a short excerpt of the error:

self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for dehaze:
Missing key(s) in state_dict:

There are also size mismatches, for example:

size mismatch for tran_dense.dense_block1.denselayer1.conv1.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.weight: copying a param with shape torch.Size([160]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.norm2.bias: copying a param with shape torch.Size([128, 160, 1, 1]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for tran_dense.dense_block1.denselayer1.conv2.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([32, 128, 3, 3]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.weight: copying a param with shape torch.Size([32, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([96]).
size mismatch for tran_dense.dense_block1.denselayer2.norm1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([96]).

QingyuGuo commented 5 years ago

Tested it myself: adding the code solves it.

Do you mean changing the code after the line netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)? But that netG line itself still throws the error.

ZhuanShan commented 5 years ago

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)), and change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that will help you. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7
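
Spelled out, the edit looks like this (the first line is the one quoted above from dehaze22.py; the same substitution has to be repeated for every add_module call whose name contains a dot):

# before: dotted module names are rejected by PyTorch >= 0.4
block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))

# after: underscore instead of dot
block.add_module('%s_leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True))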

QingyuGuo commented 5 years ago

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)), and change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that will help you. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way, but can you load the pretrained model correctly? I cannot load it.

ZhuanShan commented 5 years ago

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)), and change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that will help you. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way, but can you load the pretrained model correctly? I cannot load it.

What are your versions of torch and torchvision? I loaded it successfully with torch 0.3.1 and torchvision 0.1.8.

yuchenlichuck commented 5 years ago

Hi! I also have this problem. I added the code as above, but the error stays the same. How can I solve it?

just-blank commented 5 years ago

I have just resolved this problem. It happens because the module naming rules have changed. What I did: go to dehaze22.py, where you can see block.add_module('%s.leakyrelu' % name, nn.LeakyReLU(0.2, inplace=True)), and change every %s.leakyrelu or %s.relu to %s_leakyrelu or %s_relu. I changed more than 10 places. Hope that will help you. Reference: https://github.com/taey16/pix2pixBEGAN.pytorch/issues/7

I did it the same way, but can you load the pretrained model correctly? I cannot load it.

What are your versions of torch and torchvision? I loaded it successfully with torch 0.3.1 and torchvision 0.1.8.

After following what you suggested, I came across a new problem (screenshot attached). Have you encountered this problem? Could you give me some advice? Thanks in advance.

xf-zh commented 5 years ago

I also changed the '.' to '_' in dehaze22.py and it runs, but loading the keys fails with RuntimeError: Error(s) in loading state_dict for dehaze: Missing key(s) in state_dict: and a series of size mismatches (see the full output quoted above).

Hi, have you solved this problem?

Alisaxing commented 4 years ago

Tested it myself: adding the code solves it.

Hello, where exactly should this code be added?

ghost commented 4 years ago

I also changed the '.' to '_' in dehaze22.py and it runs, but loading the keys fails with RuntimeError: Error(s) in loading state_dict for dehaze: Missing key(s) in state_dict: and a series of size mismatches (see the full output quoted above).

You have to use pytorch 0.3.0, and the torchvision version must not be higher than 0.4; to install them you can only download from the links or build from source.

blackAndrechen commented 4 years ago

I met the problem and solved it; sharing my method:

1. Use @Aleberello's code, adding it right after the line netG = net.dehaze(inputChannelSize, outputChannelSize, ngf).
2. Modify dehaze22.py: change every add_module name from "%s." to "%s_". For example,

block.add_module('%s.relu' % name, nn.ReLU(inplace=True))

becomes

block.add_module('%s_relu' % name, nn.ReLU(inplace=True))

ghost commented 4 years ago

2020.2.17 Solution to load the pretrained model.

Step 1:

In the dehaze22.py file, change all the %s. to %s_ as @blackAndrechen 's comment.

Step 2:

Change the keys in the netG_epoch_8.pth model. I have a modified one, please download it here

Please contact me if you have any question about the pretrained model.

Step 3:

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
netG.load_state_dict(torch.load('netG.pth'))

Please contact me if you have any question about the pre-trained model.
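
For anyone who prefers to generate such a renamed checkpoint themselves instead of downloading one, here is a rough sketch in the spirit of the positional remapping posted earlier (it assumes demo.py's import of dehaze22 as net and the demo's default sizes; the file names are only examples, and the mapping relies on the old and new state_dicts enumerating parameters in the same order):

import torch
from collections import OrderedDict
import dehaze22 as net  # assumption: the same module demo.py imports as net

inputChannelSize = 3   # assumed demo defaults; adjust if yours differ
outputChannelSize = 3
ngf = 64

# build the model after Step 1, so its state_dict keys already use underscores
netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)

# original checkpoint, whose keys still contain the old dotted names
old_state = torch.load('./demo_model/netG_epoch_8.pth')

# map the i-th checkpoint tensor onto the i-th parameter name of the new model
new_state = OrderedDict()
for new_key, (old_key, value) in zip(netG.state_dict().keys(), old_state.items()):
    new_state[new_key] = value

netG.load_state_dict(new_state)    # sanity check that everything fits
torch.save(new_state, 'netG.pth')  # renamed checkpoint used in Step 3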

yinxuping commented 4 years ago

2020.2.17 Solution to load the pretrained model.

Step 1:

In the dehaze22.py file, change all the %s. to %s_ as @blackAndrechen 's comment.

Step 2:

Change the keys in the netG_epoch_8.pth model. I have a modified one, please download it here

Please contact me if you have any question about the pretrained model.

Step 3:

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
netG.load_state_dict(torch.load('netG.pth'))

Please contact me if you have any question about the pre-trained model.

Thank you. I met this problem and tried the method you propose; it solves "RuntimeError: Error(s) in loading state_dict for dehaze: Missing key(s) in state_dict:", but a new problem appears: "RuntimeError: set_sizes_contiguous is not allowed on a Tensor created from .data or .detach()". How can I solve it? One suggested solution is to change data.resize_as_ to resize_, but it does not seem to work.

yinxuping commented 4 years ago

I have solved the new problem by changing data.resize_as_ to resize_as_ (i.e. dropping the .data). Then I found that my GPU is not big enough: it only has 6 GB but the demo needs 20 GB... Can anyone tell me how to switch from GPU to CPU? Thanks a lot.
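
In case it helps others hitting the same RuntimeError, the workaround reported here amounts to dropping the .data and resizing the tensor directly; a rough sketch with purely illustrative variable names (not the exact ones in demo.py):

import torch

# illustrative stand-ins for the pre-allocated buffer and the batch from the loader
target = torch.FloatTensor(1)
batch = torch.randn(3, 256, 256)

# old pattern (fails on recent PyTorch versions):
#   target.data.resize_as_(batch).copy_(batch)

# workaround: operate on the tensor itself, outside of autograd
with torch.no_grad():
    target.resize_as_(batch).copy_(batch)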

ghost commented 4 years ago

@yinxuping

You just need to add one more parameter to the torch.load() function.

If map_location is a callable, it will be called once for each serialized storage with two arguments: storage and location. The storage argument will be the initial deserialization of the storage, residing on the CPU. Each serialized storage has a location tag associated with it which identifies the device it was saved from, and this tag is the second argument passed to map_location. The builtin location tags are 'cpu' for CPU tensors and 'cuda:device_id' (e.g. 'cuda:2') for CUDA tensors. map_location should return either None or a storage. If map_location returns a storage, it will be used as the final deserialized object, already moved to the right device. Otherwise, torch.load() will fall back to the default behavior, as if map_location wasn’t specified.

For example, use torch.load('netG.pth', map_location=torch.device('cpu'))

Good luck.
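
Putting it together, running the demo fully on the CPU looks roughly like this (a sketch reusing the names from the snippets above; any .cuda() calls that demo.py makes on the model or the inputs would also have to be removed or guarded):

import torch

device = torch.device('cpu')

netG = net.dehaze(inputChannelSize, outputChannelSize, ngf)
# map the CUDA-saved weights onto CPU storage while loading
netG.load_state_dict(torch.load('netG.pth', map_location=device))
netG.to(device)
netG.eval()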

ghost commented 4 years ago

@yinxuping I do not understand why the model would need 20 GB. Is there anything wrong with your model?

hamddan4 commented 4 years ago

@acoder-fin your link to the updated model seems broken. Can you re-upload it, please?

ghost commented 4 years ago

@hamddan4 https://drive.google.com/file/d/111m-y0jO_8iU9F3hIE4nDDy-rCNvFDS2/view?usp=sharing Here is the new link.

shuowoshishui commented 4 years ago

@acoder-fin it cannot be opened

shuowoshishui commented 4 years ago

@acoder-fin your link to the updated model seems broken. Can you re-upload it, please?

ghost commented 4 years ago

@acoder-fin your link to the updated model seems broken. Can you re-upload it, please?

Hi, please check the link above. I have updated it. Good luck!

shuowoshishui commented 4 years ago

@acoder-fin, thank you for your help. I have already run it with torch 1.4.0.

tunai commented 3 years ago

Hi! I have used @acoder-fin's modified pre-trained model and also modified the dehaze22 script. The code runs, but the output is not eliminating haze as I expected. It is doing something, but the results are definitely different from those in the paper (when using the same images, obtained from the .h5 files provided).

For example: [input image] [output image]

Any ideas? Maybe I should keep going and train the model for this new dataset? Could someone use this image and report the output, just so I can check if I am implementing the method correctly?

Thank you!

sudoboi commented 3 years ago

@tunai which torch and torchvision versions have you used?

lujain197 commented 1 year ago

@acoder-fin your link to the updated model seems broken. Can you re-upload it, please?

Hi, please check the link above. I have updated it. Good luck!

Could you please upload it again? The problem is still there.