stiv2@gaidar:~/liif$ python3 demo.py --input jap.png --model ./rdn-liif.pth --resolution 450,600 --output output.png --gpu 0
Traceback (most recent call last):
  File "demo.py", line 34, in <module>
    coord.unsqueeze(0), cell.unsqueeze(0), bsize=30000)[0]
  File "/home/stiv2/liif/test.py", line 18, in batched_predict
    model.gen_feat(inp)
  File "/home/stiv2/liif/models/liif.py", line 34, in gen_feat
    self.feat = self.encoder(inp)
  File "/home/stiv2/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/stiv2/liif/models/rdn.py", line 99, in forward
    f__1 = self.SFENet1(x)
  File "/home/stiv2/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/stiv2/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 419, in forward
    return self._conv_forward(input, self.weight)
  File "/home/stiv2/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 416, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 45, 60] to have 3 channels, but got 4 channels instead
How can I fix this? From the RuntimeError it looks like my input PNG has 4 channels (RGBA), while the model's first conv layer (weight of size [64, 3, 3, 3]) expects 3-channel RGB input.
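
For reference, a minimal workaround sketch, assuming the mismatch really is just the alpha channel: flatten the PNG to RGB before passing it to demo.py (the output file name jap_rgb.png is only a placeholder).

from PIL import Image

# Drop the alpha channel: convert the 4-channel RGBA PNG to 3-channel RGB
# so it matches the [64, 3, 3, 3] weight of the encoder's first conv layer.
Image.open('jap.png').convert('RGB').save('jap_rgb.png')

Then rerun the command with --input jap_rgb.png. Alternatively, the same .convert('RGB') call could be applied at the point where demo.py opens the input image, if that is where the tensor is built.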