utkuozbulak / pytorch-cnn-visualizations

Pytorch implementation of convolutional neural network visualization techniques
MIT License

Why take np.maximum(cam, 0) in GradCam? #115

Closed mesllo closed 1 year ago

mesllo commented 1 year ago

I am trying to figure out why in line 86 of gradcam.py, negative gradients are being clipped.

[screenshot of gradcam.py line 86: `cam = np.maximum(cam, 0)`]

Why is this line necessary to visualize the gradients? Don't we also want to show negative gradients, since they essentially imply that the greater the average influence of that channel, the greater the decrease in the loss? I may be overthinking things here, so I hope you can shed some light on this, as I cannot wrap my head around it. Thanks!

utkuozbulak commented 1 year ago

Hello, if my memory serves me right, it is because the authors apply a ReLU after taking the weighted average, which is essentially what line 86 does. You can have a look at the paper for the operations.
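For reference, the final Grad-CAM step in the paper is a weighted sum of the target layer's feature maps followed by a ReLU, and `np.maximum(cam, 0)` is exactly that ReLU in numpy. A minimal sketch with made-up shapes and random values (variable names are illustrative, not taken from gradcam.py):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target-layer activations: (channels, H, W).
activations = rng.random((512, 7, 7))
# Per-channel weights, i.e. the gradients global-average-pooled
# over the spatial dimensions (alpha_k in the paper).
weights = rng.standard_normal(512)

# Weighted combination of the feature maps -> one (H, W) map.
cam = np.sum(weights[:, None, None] * activations, axis=0)

# The ReLU from the Grad-CAM paper: keep only locations whose
# combined influence on the target class is positive. Negative
# values belong to other classes' evidence, so they are clipped.
cam = np.maximum(cam, 0)
```

Without the clipping, regions that argue *against* the target class would also show up in the heatmap; the ReLU restricts the visualization to features with a positive influence on the class of interest.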