HaozheQi / P2B

P2B: Point-to-Box Network for 3D Object Tracking in Point Clouds

[RuntimeError] train_tracking.py #15

Closed huzhu33 closed 4 years ago

huzhu33 commented 4 years ago

Hello @HaozheQi, thanks for your great work! I'm running your code out of my own interest, and I successfully used the model you provided (netR_36.pth) to reproduce the results of the paper. Then I tried to train the model myself, but I ran into a problem when running your train_tracking.py. The error is as follows:

```
/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:122: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
======>>>>> Online epoch: #0, lr=0.001000 <<<<<======
  0%|          | 0/21 [00:02<?, ?it/s]
Traceback (most recent call last):
  File "train_tracking.py", line 181, in <module>
    loss.backward()
  File "/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/zhuhu/anaconda3/envs/P2B/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].type().scalarType()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```

I'm not sure how to solve this problem, hope to get your reply!

Minglin-Chen commented 4 years ago

I encountered the same problem using PyTorch==1.5. However, it works well in PyTorch==1.2.
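The version dependence is consistent with newer PyTorch releases enforcing that every tensor participating in `backward()` is floating-point. I haven't traced which tensor in P2B's loss is non-floating, but for this class of error the usual fix is to cast integer-typed tensors to float at the point where they enter the loss. A minimal sketch of the pattern (the tensor names here are hypothetical, not from the P2B code):

```python
import torch

# A parameter we want gradients for.
x = torch.randn(3, requires_grad=True)

# An integer-typed label tensor, as often comes out of a data loader.
target = torch.tensor([1, 0, 1])  # dtype is torch.int64

# Mixing the integer tensor directly into the loss can trip the
# "Expected isFloatingType(...)" check on newer PyTorch versions.
# Casting it to float before it participates in the loss avoids this.
loss = ((x - target.float()) ** 2).sum()
loss.backward()

print(x.grad)  # gradients flow normally, and are floating-point
```

If changing the PyTorch version is not an option, auditing the loss terms in train_tracking.py for non-floating tensors and adding `.float()` casts like this may be worth trying.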

huzhu33 commented 4 years ago

Thanks for your reply! Do you know of any other solution besides changing the PyTorch version?