Thanks for this great repo!
There's something I can't figure out, though: when using a gradient-based method, I always get nonnegative gradients.
Specifically, I set a breakpoint at line 95 of https://github.com/jacobgil/pytorch-grad-cam/blob/master/pytorch_grad_cam/base_cam.py, ran the GradCAM method, and examined the variable self.activations_and_gradients.gradients.
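For reference, here is a minimal sketch (plain PyTorch, not the repo's code; the toy model and target layer are illustrative) of the same check without a breakpoint: capture the raw gradients flowing into a target layer with a backward hook and inspect their sign.

```python
# Minimal sketch: capture raw gradients at a target layer with a
# backward hook (plain PyTorch; toy model chosen for illustration).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),   # target layer (index 0)
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

grads = []

def hook(module, grad_input, grad_output):
    # grad_output[0] is dL/d(layer output) -- the quantity Grad-CAM
    # averages per channel to form its weights.
    grads.append(grad_output[0].detach().clone())

handle = model[0].register_full_backward_hook(hook)

x = torch.randn(1, 3, 32, 32)
score = model(x)[0, 5]               # scalar score for one class
score.backward()
handle.remove()

g = grads[0]
print("gradient range:", g.min().item(), "to", g.max().item())
# For a randomly initialized model one would normally expect signed
# gradients (both negative and positive entries) here.
```

If even this kind of direct hook reports only nonnegative values for a given model, that would point to something in the model or the hook placement rather than the CAM computation itself.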
No matter which input image or layer I choose, the gradients are always nonnegative. Is there any reason it should be like that?
I know that some methods (e.g. Grad-CAM++) truncate negative gradients, but for the original Grad-CAM this is not the case (and it's not related to the ReLU applied to the final heatmap). Could it be somehow related to this issue?
Many thanks, Boaz