Zj-BinXia / AMSA

This project is the official implementation of "Coarse-to-Fine Embedded PatchMatch and Multi-Scale Dynamic Aggregation for Reference-based Super-Resolution" (AAAI 2022).

The batch dimension in the final output has changed? #4

Closed Yi-Yang355 closed 2 years ago

Yi-Yang355 commented 2 years ago

Thank you for your contribution. I would like to ask: the input LR tensor has shape (9, 3, 40, 40), but the final output has shape (3, 3, 160, 160). Why has the batch dimension changed?

Yi-Yang355 commented 2 years ago

```
/home/amax/anaconda3/envs/yyi_dev/lib/python3.8/site-packages/torch/nn/functional.py:3060: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  warnings.warn("Default upsampling behavior when mode={} is changed "
/home/amax/yyi/projects/AMSA-master/AMSA/mmsr/models/losses.py:19: UserWarning: Using a target size (torch.Size([9, 3, 160, 160])) that is different to the input size (torch.Size([3, 3, 160, 160])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.l1_loss(pred, target, reduction='none')
Traceback (most recent call last):
  File "mmsr/train.py", line 189, in <module>
    main()
  File "mmsr/train.py", line 154, in main
    model.optimize_parameters(current_iter)
  File "/home/amax/yyi/projects/AMSA-master/AMSA/mmsr/models/ref_restoration_model.py", line 277, in optimize_parameters
    l_g_pix = self.cri_pix(self.output, self.gt)
  File "/home/amax/anaconda3/envs/yyi_dev/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/amax/yyi/projects/AMSA-master/AMSA/mmsr/models/losses.py", line 57, in forward
    return self.loss_weight * l1_loss(
  File "/home/amax/yyi/projects/AMSA-master/AMSA/mmsr/models/loss_utils.py", line 92, in wrapper
    loss = loss_func(pred, target, **kwargs)
  File "/home/amax/yyi/projects/AMSA-master/AMSA/mmsr/models/losses.py", line 19, in l1_loss
    return F.l1_loss(pred, target, reduction='none')
  File "/home/amax/anaconda3/envs/yyi_dev/lib/python3.8/site-packages/torch/nn/functional.py", line 2633, in l1_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/home/amax/anaconda3/envs/yyi_dev/lib/python3.8/site-packages/torch/functional.py", line 71, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)  # type: ignore
RuntimeError: The size of tensor a (3) must match the size of tensor b (9) at non-singleton dimension 0
```
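For reference, the failure in the traceback can be reproduced in isolation: `F.l1_loss` with `reduction='none'` tries to broadcast its two inputs, and PyTorch broadcasting cannot match a batch size of 3 against 9 (broadcasting only reconciles sizes that are equal or 1). This is a minimal sketch with random stand-in tensors, not the repository's own code:

```python
import torch
import torch.nn.functional as F

# Shapes taken from the warning in the traceback: the model output has a
# batch of 3 while the ground truth keeps the original batch of 9.
pred = torch.randn(3, 3, 160, 160)    # stand-in for self.output
target = torch.randn(9, 3, 160, 160)  # stand-in for self.gt

try:
    # Broadcasting can only reconcile sizes that are equal or 1, so
    # dim 0 (3 vs 9) fails and PyTorch raises a RuntimeError.
    F.l1_loss(pred, target, reduction='none')
except RuntimeError as err:
    print(type(err).__name__, "-", err)
```

So the exception is a symptom: the real bug is upstream, wherever the forward pass drops part of the batch before the loss is computed.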

Zj-BinXia commented 2 years ago

We have fixed it.