Open cnnmena opened 4 years ago
Hi, I ran train_hopenet.py with my dataset (PyTorch version 0.4.1) and got this error:

File "XXX/train_hopenet.py", line 276, in
torch.autograd.backward(loss_seq, grad_seq)
File "XXX/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: invalid gradient at index 0 - expected shape [] but got [1]

I use grad_seq = [torch.tensor(1.0).cuda(gpu) for _ in range(len(loss_seq))]

Another problem:
loss_reg_yaw = reg_criterion(yaw_predicted, label_yaw_cont)
RuntimeError: reduce failed to synchronize: device-side assert triggered
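For context on the first error: from PyTorch 0.4 onward a criterion loss is a 0-dim tensor (shape []), so pairing it with a gradient of shape [1] in grad_seq makes torch.autograd.backward reject the mismatch. A minimal standalone sketch (illustration only, not the code from train_hopenet.py) that reproduces the message:

import torch

# Illustration only: a 0-dim loss paired with a shape-[1] gradient, which is
# the mismatch behind "expected shape [] but got [1]" (the exact wording of
# the RuntimeError varies across PyTorch versions).
pred = torch.randn(4, requires_grad=True)
target = torch.randn(4)

loss = torch.nn.MSELoss()(pred, target)  # 0-dim tensor in PyTorch >= 0.4
grad = torch.ones(1)                     # shape [1] -> does not match

try:
    torch.autograd.backward([loss], [grad])
except RuntimeError as e:
    print(e)  # shape-mismatch error

Either side can be changed so the shapes agree, which is what the fix further down this thread does. The second error ("device-side assert triggered") is typically reported asynchronously by CUDA, so the line shown in the traceback is not necessarily the one that actually failed.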
I also encountered this problem. How did you solve it?
Hey! I got the same problem here with:
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([1]) and output[0] has a shape of torch.Size([]).
I solved it with:
loss_seq = [torch.unsqueeze(loss_yaw, 0), torch.unsqueeze(loss_pitch, 0), torch.unsqueeze(loss_roll, 0)]
at line 186
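To see how the shapes line up after that change, here is a small self-contained sketch; the dummy losses only stand in for loss_yaw / loss_pitch / loss_roll from train_hopenet.py, so this is not the project's exact code:

import torch

# Dummy stand-ins for the yaw/pitch/roll losses (each is a 0-dim tensor).
x = torch.randn(3, requires_grad=True)
loss_yaw, loss_pitch, loss_roll = x[0] ** 2, x[1] ** 2, x[2] ** 2

# Option 1 (the fix above): unsqueeze the losses to shape [1] so they match
# gradients built with torch.ones(1).
loss_seq = [torch.unsqueeze(loss_yaw, 0),
            torch.unsqueeze(loss_pitch, 0),
            torch.unsqueeze(loss_roll, 0)]
grad_seq = [torch.ones(1) for _ in range(len(loss_seq))]
torch.autograd.backward(loss_seq, grad_seq, retain_graph=True)

# Option 2: keep the 0-dim losses and use 0-dim gradients instead.
loss_seq = [loss_yaw, loss_pitch, loss_roll]
grad_seq = [torch.tensor(1.0) for _ in range(len(loss_seq))]
torch.autograd.backward(loss_seq, grad_seq)

print(x.grad)  # gradients accumulated from both calls

Either option works; the only requirement is that each entry of grad_seq has the same shape as the corresponding entry of loss_seq.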