pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License

Set gradients to zero for integrated gradients? #1037

Open · depshad opened this issue 1 year ago

depshad commented 1 year ago

I am trying to calculate integrated gradients for a model over multiple inputs in a loop. Do I need to zero out the model parameter gradients and the input gradients manually between iterations? What is the default gradient accumulation behavior in the integrated gradients implementation?

Thank You
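
For concreteness, a minimal sketch of the scenario using Captum's `IntegratedGradients`; the toy model, input shapes, and `target` here are placeholders for illustration, not from the original post:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Hypothetical stand-in for the poster's model and data.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

ig = IntegratedGradients(model)

attributions = []
for _ in range(5):  # multiple inputs attributed in a loop, as in the question
    inputs = torch.randn(1, 4, requires_grad=True)
    # target=0 attributes the first output logit with respect to the inputs
    attr = ig.attribute(inputs, baselines=torch.zeros_like(inputs), target=0)
    attributions.append(attr.detach())
```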

NarineK commented 1 year ago

Hi @depshad, since we are using torch.autograd.grad, there is no accumulation of gradients into the inputs' .grad attributes. By default it runs with only_inputs=True, which does not accumulate grads. You can zero out the grads, but I believe it won't change the results. You can give it a try.

The autograd description of gradient accumulation: https://pytorch.org/docs/1.11/generated/torch.autograd.grad.html#torch.autograd.grad
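
This is easy to verify in plain PyTorch (a minimal sketch, not Captum-specific; shapes and values are arbitrary):

```python
import torch

x = torch.randn(3, requires_grad=True)

# torch.autograd.grad returns the gradient directly; it never writes to
# x.grad, so repeated calls in a loop cannot accumulate anything there.
for _ in range(3):
    y = (x ** 2).sum()
    (g,) = torch.autograd.grad(y, x)
print(x.grad)  # None: nothing was accumulated

# tensor.backward(), by contrast, *does* accumulate into x.grad,
# which is why training loops call zero_grad() between steps.
for _ in range(3):
    y = (x ** 2).sum()
    y.backward()
print(torch.allclose(x.grad, 3 * (2 * x)))  # True: 2x summed over 3 calls
```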