natanielruiz / deep-head-pose

:fire::fire: Deep Learning Head Pose Estimation using PyTorch.

RuntimeError: Mismatch in shape #124

Open Algabri opened 1 year ago

Algabri commented 1 year ago

I am trying to run train_hopenet.py

python3 train_hopenet.py --dataset AFLW2000 --data_dir datasets/AFLW2000 --filename_list datasets/AFLW2000/files.txt --output_string er

I got this error:

Loading data.

/home/redhwan/.local/lib/python3.8/site-packages/torch/optim/adam.py:90: UserWarning: optimizer contains a parameter group with duplicate parameters; in future, this will cause an error; see github.com/pytorch/pytorch/issues/40967 for more information
  super(Adam, self).__init__(params, defaults)
Ready to train network.
Traceback (most recent call last):
  File "train_hopenet.py", line 193, in <module>
    torch.autograd.backward(loss_seq, grad_seq)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 166, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_, is_grads_batched=False)
  File "/home/redhwan/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 50, in _make_grads
    raise RuntimeError("Mismatch in shape: grad_output["
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).

How can I solve it?

Note: torch.__version__ = 1.12.0+cu102

Algabri commented 1 year ago

I changed this line:

grad_seq = [torch.ones(1).cuda(gpu) for _ in range(len(loss_seq))]

To be:

grad_seq = [torch.tensor(1, dtype=torch.float).cuda(gpu) for _ in range(len(loss_seq))]

It is working fine now.
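A minimal, self-contained sketch (CPU-only, with hypothetical stand-in losses rather than Hopenet's actual yaw/pitch/roll losses) of why the change works: in recent PyTorch versions a reduced loss is a 0-dim tensor (shape `torch.Size([])`), so each entry of `grad_seq` passed to `torch.autograd.backward` must also be 0-dim. `torch.ones(1)` has shape `[1]` and triggers the mismatch; `torch.tensor(1.0)` (equivalently `torch.tensor(1, dtype=torch.float)`) has shape `[]` and matches.

```python
import torch

# Hypothetical stand-ins for the losses in train_hopenet.py:
# each reduced loss is a 0-dim (scalar) tensor.
w = torch.randn(3, requires_grad=True)
loss_seq = [(w[i] ** 2).sum() for i in range(3)]

# torch.ones(1) is 1-dim (shape [1]) -> shape mismatch against 0-dim losses.
# torch.tensor(1.0) is 0-dim (shape []) and matches the loss shapes.
grad_seq = [torch.tensor(1.0) for _ in loss_seq]

# Backpropagates each loss with weight 1, i.e. the gradient of their sum.
torch.autograd.backward(loss_seq, grad_seq)
print(w.grad)
```

On a GPU setup the grad tensors would be moved with `.cuda(gpu)` as in the fixed line; the shape argument is the same either way.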