Hello Captum team,
I have a question about retaining input sample gradients when using Captum's explainability methods. Specifically, I would like to know whether the gradients of the input tensors are still available after running one of Captum's attribution methods.
Here are the details of my use case:
I am using PyTorch's retain_grad() method or setting requires_grad=True on input tensors to retain their gradients.
My goal is to understand whether computing an explainability map with Captum leaves the gradients of the input tensors intact (or modifies them), or whether these gradients are cleared or otherwise not retained during the computation.
Could you provide some insight into how Captum handles the gradients of input samples, and whether it is possible to ensure they are retained through the attribution process? I would like to use these gradients in backpropagation to update the weights during network training. A minimal sketch of the workflow I have in mind is included below.
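For reference, this is roughly what I am trying to do. The model (`SimpleNet`), the random data, and the choice of `Saliency` are just placeholders for illustration; the question applies to Captum's gradient-based attribution methods in general.

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

# Placeholder model, for illustration only.
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()

# Leaf input tensor with requires_grad=True so its gradients can be retained.
inputs = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 2, (4,))

# Step 1: compute an explainability map with Captum.
saliency = Saliency(model)
attributions = saliency.attribute(inputs, target=labels)

# Step 2: a regular training step on the same inputs.
optimizer.zero_grad()
loss = criterion(model(inputs), labels)
loss.backward()
optimizer.step()

# Question: is inputs.grad populated and left untouched here, or does the
# attribute() call reset / detach the input gradients along the way?
print(inputs.grad)
```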
Thank you for your support! Best regards