Open depshad opened 1 year ago

I am trying to compute integrated gradients for one model over multiple inputs in a loop. Do I need to zero the model parameters' gradients and the input gradients manually between iterations? What is the default gradient-accumulation behavior in the integrated gradients implementation?

Thank you

Hi @depshad, since we use torch.autograd.grad, no gradients accumulate in input.grad. By default it is called with only_inputs=True, which returns the gradients directly instead of accumulating them into .grad attributes. You can still zero out the gradients yourself, but I don't expect it to change the results; feel free to try.

The autograd documentation describing the accumulation behavior: https://pytorch.org/docs/1.11/generated/torch.autograd.grad.html#torch.autograd.grad
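A minimal sketch of the distinction being made above, in plain PyTorch (not tied to any particular integrated-gradients implementation): torch.autograd.grad returns the gradients as values and leaves .grad untouched, whereas Tensor.backward() accumulates into .grad across calls.

```python
import torch

x = torch.randn(3, requires_grad=True)

# torch.autograd.grad returns gradients directly; it does NOT write
# into x.grad, so repeated calls in a loop cannot accumulate there.
y = (x ** 2).sum()
g1 = torch.autograd.grad(y, x, retain_graph=True)[0]
g2 = torch.autograd.grad((x ** 2).sum(), x)[0]

assert x.grad is None                      # .grad was never populated
assert torch.allclose(g1, g2)              # each call starts fresh
assert torch.allclose(g1, 2 * x.detach())  # d/dx of sum(x^2) is 2x

# Tensor.backward(), by contrast, accumulates into .grad on each call,
# which is why training loops call zero_grad() between iterations.
(x ** 2).sum().backward()
(x ** 2).sum().backward()
assert torch.allclose(x.grad, 4 * x.detach())  # 2x + 2x, accumulated
```

So if the attribution code uses torch.autograd.grad (as stated in the reply), zeroing gradients between loop iterations should be unnecessary; it only matters when backward() is involved.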