CharlesShang / DCNv2

Deformable Convolutional Networks v2 with Pytorch
BSD 3-Clause "New" or "Revised" License

RuntimeError: Backward is not reentrant #27

Open zhengxinvip opened 5 years ago

zhengxinvip commented 5 years ago

```
(pytorch) wuwenfu@wuwenfu:~/DCNv2-master$ python test.py
torch.Size([2, 64, 128, 128])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
0.971507, 1.943014
0.971507, 1.943014
Zero offset passed
/home/wuwenfu/.conda/envs/pytorch/lib/python3.7/site-packages/torch/autograd/gradcheck.py:239: UserWarning: At least one of the inputs that requires gradient is not of double precision floating point. This check will likely fail if all the inputs are not of double precision floating point.
  'At least one of the inputs that requires gradient '
check_gradient_dpooling: True
Traceback (most recent call last):
  File "test.py", line 265, in <module>
    check_gradient_dconv()
  File "test.py", line 97, in check_gradient_dconv
    eps=1e-3, atol=1e-4, rtol=1e-2))
  File "/home/wuwenfu/.conda/envs/pytorch/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 289, in gradcheck
    return fail_test('Backward is not reentrant, i.e., running backward with same '
  File "/home/wuwenfu/.conda/envs/pytorch/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 224, in fail_test
    raise RuntimeError(msg)
RuntimeError: Backward is not reentrant, i.e., running backward with same input and grad_output multiple times gives different values, although analytical gradient matches numerical gradient
```

How can I fix this? Thanks.
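Not an official fix, but note the `UserWarning` in the log: `gradcheck` compares analytical gradients against finite-difference estimates, and those estimates are only trustworthy in float64. A common first step is to cast the module and inputs to double before checking. The sketch below uses a plain `Conv2d` as a stand-in, not the DCNv2 kernels themselves:

```python
# Hedged sketch: run torch.autograd.gradcheck on double-precision tensors,
# as the UserWarning in the log recommends. Conv2d here is only a stand-in
# for the deformable-conv op being tested in test.py.
import torch
from torch.autograd import gradcheck

conv = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1).double()
x = torch.randn(1, 2, 5, 5, dtype=torch.double, requires_grad=True)

# With float64 inputs the finite-difference estimate is accurate enough
# that the default tight tolerances can be used.
ok = gradcheck(conv, (x,), eps=1e-6, atol=1e-4)
print(ok)
```

With float32 inputs this same check frequently fails or warns, which is why `test.py`'s looser tolerances (`eps=1e-3, atol=1e-4, rtol=1e-2`) appear in the traceback.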

mumianyuxin commented 4 years ago

Same error here. Does anyone know how to fix it?
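For anyone debugging this: the "Backward is not reentrant" message means `gradcheck` ran backward twice with identical inputs and `grad_output` and got different gradients. You can reproduce that check by hand, as sketched below (again with a deterministic CPU `Conv2d` as a stand-in). A custom CUDA backward that accumulates with `atomicAdd` can fail this comparison bit-for-bit even when its gradients are numerically correct, since floating-point addition order varies between runs.

```python
# Hedged sketch of what the reentrancy check does: run backward twice with
# the same input and grad_output, then compare the resulting input gradients.
import torch

def backward_once(fn, x, grad_out):
    # Fresh leaf tensor each run so gradients do not accumulate across calls.
    x = x.clone().requires_grad_(True)
    y = fn(x)
    y.backward(grad_out)
    return x.grad

fn = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1).double()
x = torch.randn(1, 1, 4, 4, dtype=torch.double)
g = torch.randn(1, 1, 4, 4, dtype=torch.double)

g1 = backward_once(fn, x, g)
g2 = backward_once(fn, x, g)
print(torch.equal(g1, g2))  # deterministic CPU conv: exact match expected
```

Running the same two-backward comparison against the DCNv2 op on GPU would show whether nondeterministic accumulation is what trips the check here.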