Light-- opened this issue 3 years ago
```
$ python testcuda.py
torch.Size([2, 64, 128, 128])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
torch.Size([20, 32, 7, 7])
0.971507, 1.943014
0.971507, 1.943014
Zero offset passed
/tmp-data/user1/soft/conda/envs/monoflex/lib/python3.7/site-packages/torch/autograd/gradcheck.py:242: UserWarning: At least one of the inputs that requires gradient is not of double precision floating point. This check will likely fail if all the inputs are not of double precision floating point.
  'At least one of the inputs that requires gradient '
check_gradient_dpooling: True
Traceback (most recent call last):
  File "testcuda.py", line 265, in <module>
    check_gradient_dconv()
  File "testcuda.py", line 97, in check_gradient_dconv
    eps=1e-2, atol=1e-4, rtol=1e-2))
  File "/tmp-data/user1/soft/conda/envs/monoflex/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 289, in gradcheck
    'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
  File "/tmp-data/user1/soft/conda/envs/monoflex/lib/python3.7/site-packages/torch/autograd/gradcheck.py", line 227, in fail_test
    raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 1,
numerical:tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00, -1.1563e-03,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        ...,
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  2.6822e-05]])
analytical:tensor([[ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00, -1.1562e-03,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        ...,
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  0.0000e+00],
        [ 0.0000e+00,  0.0000e+00,  0.0000e+00,  ...,  0.0000e+00,
          0.0000e+00,  2.5265e-05]])
```
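Two observations for anyone hitting the same failure. First, the Jacobian entries actually visible in the error message are within the tolerances the test passes at testcuda.py line 97 (atol=1e-4, rtol=1e-2): |2.6822e-05 - 2.5265e-05| is about 1.6e-06, well under atol alone, so the offending elements are in the elided "..." region of the printout. Second, the UserWarning above points at the likely cause: gradcheck builds its numerical Jacobian by finite differences, and in float32 the rounding error can be the same order of magnitude as the differences being measured. Below is a minimal illustrative sketch of both points; note that `F.conv2d` is only a stand-in for the deformable conv the test actually exercises (whose CUDA kernel may well be float32-only), and none of the names below come from testcuda.py.

```python
import torch
import torch.nn.functional as F
from torch.autograd import gradcheck

# (1) Sanity check: the visible corner entries of the two Jacobians pass
#     the test's own tolerance, so they are not what tripped gradcheck.
num, ana = 2.6822e-05, 2.5265e-05
atol, rtol = 1e-4, 1e-2
print(abs(num - ana) <= atol + rtol * abs(num))  # True

# (2) Illustrative double-precision gradcheck: gradcheck is only reliable
#     with float64 inputs, which is exactly what the UserWarning is about.
#     F.conv2d stands in here for the deformable conv under test.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 2, 5, 5, dtype=torch.float64, device=device,
                requires_grad=True)
w = torch.randn(2, 2, 3, 3, dtype=torch.float64, device=device,
                requires_grad=True)
print(gradcheck(lambda x, w: F.conv2d(x, w, padding=1), (x, w),
                eps=1e-6, atol=1e-5, rtol=1e-3))  # True
```

If the DCN kernel only supports float32, the usual options are to loosen atol/rtol slightly or to compare the op's gradients against a double-precision reference on the CPU; a discrepancy this small is typically finite-difference noise rather than a broken backward pass.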