natanielruiz / deep-head-pose

:fire::fire: Deep Learning Head Pose Estimation using PyTorch.

torch.autograd.backward(loss_seq, grad_seq), I got RuntimeError #85

Open cnnmena opened 4 years ago

cnnmena commented 4 years ago

Hi, I ran train_hopenet.py with my own dataset (PyTorch version 0.4.1) and got this error:

  File "XXX/train_hopenet.py", line 276, in
    torch.autograd.backward(loss_seq, grad_seq)
  File "XXX/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]

cnnmena commented 4 years ago

I worked around it with grad_seq = [torch.tensor(1.0).cuda(gpu) for _ in range(len(loss_seq))].
Another problem: loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont) now raises RuntimeError: reduce failed to synchronize: device-side assert triggered
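
For anyone hitting the same pair of errors, here is a minimal sketch of what is going on (the 66-bin logits, batch size, and variable names below are illustrative, not the repo's exact code). In PyTorch >= 0.4 a criterion loss is a 0-dim tensor, so each entry of grad_seq passed to torch.autograd.backward must be 0-dim as well; the repo's original torch.ones(1) has shape [1] and no longer matches. The second error ("device-side assert triggered") on the regression loss is typically a delayed report of an earlier CUDA assert, most often a classification label outside the valid bin range, which only surfaces at the next synchronization point.

```python
# Minimal sketch (not the repo's exact code; bin count and batch size are made up).
# In PyTorch >= 0.4 a criterion loss is a 0-dim tensor, so each entry of grad_seq
# must also be 0-dim, otherwise backward() rejects the shape-[1] gradient.
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 66, requires_grad=True)   # hypothetical batch of bin logits
labels = torch.randint(0, 66, (8,))               # bin indices must stay inside [0, 66)

loss_yaw = criterion(logits, labels)              # shape: torch.Size([]), i.e. 0-dim
loss_seq = [loss_yaw]

# torch.ones(1) has shape [1] and no longer matches the scalar loss; a 0-dim tensor does.
grad_seq = [torch.tensor(1.0) for _ in loss_seq]
torch.autograd.backward(loss_seq, grad_seq)
```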

xgbm commented 4 years ago

> RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]

I also encountered this problem. How did you solve it?

hchoHsu commented 2 years ago

Hey! I got the same problem here with: RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).

I solved it by changing line 186 to: loss_seq = [torch.unsqueeze(loss_yaw, 0), torch.unsqueeze(loss_pitch, 0), torch.unsqueeze(loss_roll, 0)]
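
A minimal self-contained sketch of this alternative fix, with stand-in losses rather than the repo's real yaw/pitch/roll heads: instead of making grad_seq 0-dim, promote each scalar loss to shape [1] so it matches gradient tensors built with torch.ones(1).

```python
# Minimal sketch of the unsqueeze fix (the losses here are stand-ins, not the repo's).
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(8, 66, requires_grad=True)
labels = torch.randint(0, 66, (8,))

loss_yaw = criterion(logits, labels)      # each loss is a 0-dim scalar
loss_pitch = criterion(logits, labels)
loss_roll = criterion(logits, labels)

# Promote each scalar loss to shape [1] so it matches grad tensors of shape [1].
loss_seq = [torch.unsqueeze(loss_yaw, 0),
            torch.unsqueeze(loss_pitch, 0),
            torch.unsqueeze(loss_roll, 0)]
grad_seq = [torch.ones(1) for _ in loss_seq]
torch.autograd.backward(loss_seq, grad_seq)
```

Either this or the 0-dim grad_seq shown earlier works; the only requirement is that each grad_seq[i] has the same shape as the corresponding loss_seq[i].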