pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License
4.7k stars 475 forks

Error while generating integrated_gradients for custom object detection model #926

Open akashlp27 opened 2 years ago

akashlp27 commented 2 years ago

Attached are the full logs:

C:\Users\user\PycharmProjects\yolor_my\yolor_env\Scripts\python.exe C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py
To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
Person 0.65 [tensor(0.21673, device='cuda:0', requires_grad=True), tensor(19.62591, device='cuda:0', requires_grad=True), tensor(41.16095, device='cuda:0', requires_grad=True), tensor(47.82255, device='cuda:0', requires_grad=True)]
Traceback (most recent call last):
  File "C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py", line 138, in <module>
    test_img(r"C:\Users\user\Downloads\t1.png")
  File "C:/Users/user/PycharmProjects/yolor_my/yolor/my_test.py", line 122, in test_img
    n_steps=200)  # integrated_gradients.attribute(torch_image, target=0, n_steps=200)#np.array(torch_image.detach().cpu())
  File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\log\__init__.py", line 35, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\attr\_core\integrated_gradients.py", line 291, in attribute
    method=method,
  File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\attr\_core\integrated_gradients.py", line 354, in _attribute
    additional_forward_args=input_additional_args,
  File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\captum\_utils\gradient.py", line 121, in compute_gradients
    grads = torch.autograd.grad(torch.unbind(outputs), inputs)
  File "C:\Users\user\PycharmProjects\yolor_my\yolor_env\lib\site-packages\torch\autograd\__init__.py", line 204, in grad
    inputs, allow_unused)
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 1]], which is output 0 of SliceBackward, is at version 2; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Process finished with exit code 1

I was trying to generate integrated gradients for a YOLOR model and encountered this error. Any leads would help a lot.
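The `RuntimeError` in the log above is a generic autograd failure, not specific to YOLOR: it fires whenever a tensor that autograd saved for the backward pass is later modified in place. Below is a minimal, self-contained sketch (none of this code is from the issue) that reproduces the same class of error, with `torch.autograd.set_detect_anomaly(True)` enabled as the error message suggests:

```python
import torch

# As the error hint suggests, anomaly mode makes autograd also print the
# forward-pass traceback of the op that failed to compute its gradient.
torch.autograd.set_detect_anomaly(True)

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)   # exp's backward re-uses its own output y
y.add_(1)          # in-place op bumps y's version counter

error = None
try:
    y.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation ..."
    error = e
print(error)
```

In the YOLOR traceback the offending tensor is a `[4, 1]` CUDA tensor at version 2 (expected 0), i.e. some slice of the output was written twice in the model's forward before Captum tried to backpropagate through it.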

NarineK commented 2 years ago

@akashlp27, it looks like there was an issue while computing the gradients. The error message suggests enabling torch.autograd.set_detect_anomaly(True) for further investigation. This might not be a Captum-specific issue; there might be a problem computing the gradients w.r.t. the inputs. Perhaps you can first try plain autograd or captum.attr.Saliency and see whether you are able to compute the gradients at all.
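The sanity check suggested here — computing input gradients with plain autograd before involving Captum — can be sketched roughly as follows. The model below is a trivial stand-in, since the actual YOLOR model is not shown in the thread:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the detector; replace with the real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Flatten(), nn.LazyLinear(2))
model.eval()

img = torch.randn(1, 3, 32, 32, requires_grad=True)
out = model(img)

# Differentiate one scalar output (e.g. one detection score) w.r.t. the input.
score = out[0, 0]
grad, = torch.autograd.grad(score, img)
print(grad.shape)  # same shape as the input image
```

If this already raises the same in-place-modification error, the problem is in the model's forward pass rather than in Captum.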

akashlp27 commented 2 years ago

Hi, yes, I was able to use autograd with PyTorch to generate the gradients, but I'm having issues generating the integrated gradients. I also got the same error with Saliency.
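When gradients work in one code path but the same error appears with Saliency/IG, the usual culprit is an in-place op in the model's forward (YOLO-style detectors often decode boxes with slice assignments like `pred[..., 0:2] = ...`). A common fix — sketched here on the toy example, not on the YOLOR code — is to replace the in-place write with an out-of-place equivalent:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = torch.exp(x)
y = y + 1          # out-of-place: leaves exp's saved output untouched
y.sum().backward() # now succeeds

grad = x.grad      # d/dx (exp(x) + 1) = exp(x)
print(grad)
```

For slice assignments, the analogous out-of-place rewrite is to build the pieces separately and join them with `torch.cat` instead of writing into the tensor.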

NarineK commented 2 years ago

@akashlp27, Saliency uses autograd to compute the gradients, so it should behave the same as using autograd directly. If you can provide a Colab notebook, we can debug it.