Is there a limitation on the width and height of the training LR and HR images? Do they need to be larger than some minimum number of pixels?
For training, the image should be larger than the training patch size, i.e. 48x48.
Your error, "input and target shapes do not match: input [16 x 3 x 192 x 192], target [16 x 3 x 48 x 48]",
indicates that you are feeding the wrong HR image. In other words, your LR image is 48x48; with an upscale factor of 4, your HR target should be 192x192, while the given one is 48x48.
You should check the path and the shapes of your HR images.
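As a sanity check, you can verify the dataset before training: every LR image should be at least the patch size, and every HR image should be exactly the LR size times the scale factor. Below is a minimal sketch, assuming hypothetical folder paths (datasets/celeba/HR, datasets/celeba/LR_x4), PNG files, and matching filenames; adjust these to your own layout:

```python
from pathlib import Path
from PIL import Image

SCALE = 4        # upscale factor from the training config
PATCH_SIZE = 48  # LR training patch size

hr_dir = Path("datasets/celeba/HR")     # hypothetical HR folder
lr_dir = Path("datasets/celeba/LR_x4")  # hypothetical LR folder

for lr_path in sorted(lr_dir.glob("*.png")):
    lr_w, lr_h = Image.open(str(lr_path)).size
    hr_w, hr_h = Image.open(str(hr_dir / lr_path.name)).size
    # LR images must be at least as large as the training patch
    if lr_w < PATCH_SIZE or lr_h < PATCH_SIZE:
        print("{}: LR {}x{} is smaller than the {}x{} patch".format(
            lr_path.name, lr_w, lr_h, PATCH_SIZE, PATCH_SIZE))
    # HR images must be exactly SCALE times the LR size
    if (hr_w, hr_h) != (lr_w * SCALE, lr_h * SCALE):
        print("{}: HR {}x{} != LR {}x{} * {}".format(
            lr_path.name, hr_w, hr_h, lr_w, lr_h, SCALE))
```

Any file reported by the second check is exactly the kind of mismatch that produces the error above.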
I run CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/train_GMFN.json, training on the CelebA dataset, and get:
===> Training Epoch: [1/1000]... Learning Rate: 0.000200
Epoch: [1/1000]:   0%|          | 0/251718 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 131, in <module>
    main()
  File "train.py", line 69, in main
    iter_loss = solver.train_step()
  File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in train_step
    loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
  File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in <listcomp>
    loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
  File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 87, in forward
    return F.l1_loss(input, target, reduction=self.reduction)
  File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1702, in l1_loss
    input, target, reduction)
  File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1674, in _pointwise_loss
    return lambd_optimized(input, target, reduction)
RuntimeError: input and target shapes do not match: input [16 x 3 x 192 x 192], target [16 x 3 x 48 x 48] at /pytorch/aten/src/THCUNN/generic/AbsCriterion.cu:12
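For reference, this failure is easy to reproduce outside the training loop: F.l1_loss raises a shape-mismatch error whenever the network output and the target tensor disagree in size (the exact message varies by PyTorch version). A minimal sketch with dummy tensors of the reported shapes:

```python
import torch
import torch.nn.functional as F

# dummy tensors with the shapes from the traceback:
# network output is 16 x 3 x 192 x 192, but the HR target is only 16 x 3 x 48 x 48
sr = torch.randn(16, 3, 192, 192)
hr = torch.randn(16, 3, 48, 48)

F.l1_loss(sr, hr)  # raises: input and target shapes do not match
```

Once the HR folder points at true 192x192 images, the target becomes [16 x 3 x 192 x 192] and the loss computes normally.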