It is very nice that you offer a pre-trained model. However, it outputs a cropped version of the cloudy scene. When I set --crop_size to 256, I get the following error:
RuntimeError: Error(s) in loading state_dict for RDN_residual_CR:
size mismatch for RDBs.0.convs.1.attn_mask: copying a param with shape torch.Size([64, 64, 64]) from checkpoint, the shape in current model is torch.Size([256, 64, 64]).
Would it be possible to publish or send me a pre-trained model that works on the original 256x256 images rather than on 128x128 crops?
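In case it helps while waiting for a 256x256 checkpoint: the size mismatch comes from a single buffer (RDBs.0.convs.1.attn_mask) whose shape depends on the crop size, so one common workaround is to drop the shape-mismatched entries from the checkpoint and load the rest with strict=False. The snippet below is only a sketch of that filtering idea using small stand-in modules, not the actual RDN_residual_CR model; whether the skipped attn_mask can simply be regenerated at the new size depends on how the repo constructs it.

```python
import torch.nn as nn

# Hypothetical stand-ins that just reproduce a shape mismatch:
# model_a plays the role of the 128x128 checkpoint,
# model_b the role of the 256x256 model being constructed.
model_a = nn.Linear(4, 4)
model_b = nn.Linear(8, 8)

checkpoint = model_a.state_dict()
current = model_b.state_dict()

# Keep only checkpoint entries whose shapes match the current model;
# everything else (e.g. a crop-size-dependent attn_mask) is skipped.
filtered = {k: v for k, v in checkpoint.items()
            if k in current and v.shape == current[k].shape}

# strict=False lets the skipped keys keep their freshly built values.
result = model_b.load_state_dict(filtered, strict=False)
print(result.missing_keys)  # keys that were not taken from the checkpoint
```

Note that any weights trained at 128x128 may still interact poorly with a mask built for 256x256, so this only avoids the crash; a checkpoint actually trained at 256x256 would still be the proper fix.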