Hi,

When I try to run the following command:

python demo.py --test_path './test/artifacts_dataset/rain/IMG_1665.JPG' --output_path './output/demo/'

I get the following error:

Total Images : ['../artifacts_dataset/rain/IMG_1665.JPG']
Start testing...
  0%|          | 0/1 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "demo.py", line 120, in <module>
    restored = net(degrad_patch)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "demo.py", line 56, in forward
    return self.net(x)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/MSU/Research/PromptIR/net/model.py", line 326, in forward
    out_enc_level1 = self.encoder_level1(inp_enc_level1)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/MSU/Research/PromptIR/net/model.py", line 194, in forward
    x = x + self.ffn(self.norm2(x))
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/MSU/Research/PromptIR/net/model.py", line 96, in forward
    x1, x2 = self.dwconv(x).chunk(2, dim=1)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/user/.pyenv/versions/miniconda3-3.8-23.9.0-0/envs/promptir/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected canUse32BitIndexMath(input) && canUse32BitIndexMath(output) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)

Could this be because my tensor sizes are too big? Would resizing my input images help? Is there a specific size you used for images when training and testing? I did not make any modifications to the code.

Thanks a lot for your interest in our work! I think this is happening due to the large image size. You can resize your images to 256x256 or 128x128, or alternatively use the --tile option in demo.py.