Open K-M-Ibrahim-Khalilullah opened 3 years ago
Thanks for the latest repo. Sometimes it produces this error:
UserWarning: The {}th input requires gradient and is not a double precision floating point or complex. This check will likely fail if all the inputs are not of double precision floating point or complex.
check_gradient_dpooling: True
Traceback (most recent call last):
  File "testcuda.py", line 265, in <module>
    check_gradient_dconv()
  File "testcuda.py", line 97, in check_gradient_dconv
    eps=1e-3, atol=1e-4, rtol=1e-2))
  File "\lib\site-packages\torch\autograd\gradcheck.py", line 390, in gradcheck
    checkIfNumericalAnalyticAreClose(a, n, j)
  File "\lib\site-packages\torch\autograd\gradcheck.py", line 372, in checkIfNumericalAnalyticAreClose
    'numerical:%s\nanalytical:%s\n' % (i, j, n, a))
  File "\lib\site-packages\torch\autograd\gradcheck.py", line 289, in fail_test
    raise RuntimeError(msg)
RuntimeError: Jacobian mismatch for output 0 with respect to input 1,
numerical:tensor([[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0001, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        ...,
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], device='cuda:0')
analytical:tensor([[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0001, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        ...,
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
        [0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000]], device='cuda:0')
Is there any solution?
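The UserWarning in the traceback points at the likely cause: torch.autograd.gradcheck compares numerical (finite-difference) and analytical Jacobians, and the numerical side is too inaccurate in float32, so single-precision inputs can fail intermittently. A minimal sketch of the usual workaround, casting everything to float64 before the check (this uses a plain torch.nn.Conv2d as a stand-in; the repo's actual deformable-conv module and testcuda.py arguments may differ):

```python
import torch

def run_gradcheck(module, x):
    # gradcheck expects double-precision inputs; float32 triggers the
    # "requires gradient and is not a double precision" warning and can
    # fail randomly on both CPU and GPU.
    module = module.double()
    x = x.double().requires_grad_(True)
    # Returns True if numerical and analytical Jacobians match within
    # tolerances, otherwise raises "Jacobian mismatch" as in the traceback.
    return torch.autograd.gradcheck(module, (x,), eps=1e-6, atol=1e-4, rtol=1e-2)

conv = torch.nn.Conv2d(1, 1, kernel_size=3)
x = torch.randn(1, 1, 5, 5)
print(run_gradcheck(conv, x))
```

In double precision a smaller eps (the default 1e-6 rather than the 1e-3 seen at testcuda.py line 97) is safe, because the finite-difference estimate no longer drowns in float32 rounding error.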
I am having the same issue. It fails randomly on both CPU and GPU.
@AI-ML-Enthusiast have you solved the problem?