Hi, thanks for the great work!

I was trying to finetune an x2 model with my own dataset and ran into the tensor size mismatch errors below; scale x4 works fine. Data preparation included the multi-scale images step. The opt file was modified from finetune_realesrgan_x4plus.yml: scale set to 4, the data paths updated, and batch_size_per_gpu reduced to fit my GPU memory.
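For reference, the relevant fields of the modified opt file look roughly like this (the dataset paths and batch size below are illustrative placeholders, not my real values; only the scale field reflects what I actually set):

```yaml
# Sketch of the modified finetune_realesrgan_x4plus.yml (real paths elided)
scale: 4                 # unchanged from the x4 template
datasets:
  train:
    dataroot_gt: datasets/my_dataset    # placeholder path
    batch_size_per_gpu: 2               # placeholder; reduced to fit GPU memory
```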
Log
2021-09-16 19:54:44,375 INFO: Start training from epoch: 0, iter: 0
C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\nn\functional.py:3063: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\nn\functional.py:3103: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
warnings.warn("The default behavior for interpolate/upsample with float scale_factor changed "
C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\nn\functional.py:3063: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
C:\Users\xxxxx.conda\envs\torch\lib\site-packages\basicsr\losses\losses.py:16: UserWarning: Using a target size (torch.Size([2, 3, 256, 256])) that is different to the input size (torch.Size([2, 3, 512, 512])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.l1_loss(pred, target, reduction='none')
Traceback (most recent call last):
File "realesrgan/train.py", line 11, in <module>
train_pipeline(root_path)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\basicsr\train.py", line 167, in train_pipeline
model.optimize_parameters(current_iter)
File "c:\users\xxxxx\real-esrgan\realesrgan\models\realesrgan_model.py", line 200, in optimize_parameters
l_g_pix = self.cri_pix(self.output, l1_gt)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\basicsr\losses\losses.py", line 55, in forward
return self.loss_weight * l1_loss(pred, target, weight, reduction=self.reduction)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\basicsr\losses\loss_util.py", line 91, in wrapper
loss = loss_func(pred, target, **kwargs)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\basicsr\losses\losses.py", line 16, in l1_loss
return F.l1_loss(pred, target, reduction='none')
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\nn\functional.py", line 2633, in l1_loss
expanded_input, expanded_target = torch.broadcast_tensors(input, target)
File "C:\Users\xxxxx.conda\envs\torch\lib\site-packages\torch\functional.py", line 71, in broadcast_tensors
return _VF.broadcast_tensors(tensors) # type: ignore
RuntimeError: The size of tensor a (512) must match the size of tensor b (256) at non-singleton dimension 3
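The mismatch can be reproduced outside the training loop. This minimal sketch (my own illustration, not Real-ESRGAN code) feeds F.l1_loss an x4-sized prediction against an x2-sized ground truth, which is what the shapes in the warning above suggest is happening:

```python
# Minimal reproduction of the shape mismatch (illustrative, not Real-ESRGAN code).
import torch
import torch.nn.functional as F

pred = torch.randn(2, 3, 512, 512)  # network output: looks like an x4 upscale
gt = torch.randn(2, 3, 256, 256)    # ground truth sized for an x2 pair

try:
    F.l1_loss(pred, gt, reduction='none')
except RuntimeError as e:
    print(e)  # "The size of tensor a (512) must match the size of tensor b (256) ..."
```

If that is the cause here, the network is still upscaling by 4 while the dataset provides x2 pairs, so the loss shapes can never line up.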
Any suggestions on what might be missing?