utkuozbulak / pytorch-cnn-visualizations

Pytorch implementation of convolutional neural network visualization techniques

Minimizing negative of the original value and gradients #91

Closed: TonyWZ closed this issue 3 years ago

TonyWZ commented 3 years ago

Hi, thank you for this repo. I am still trying to fully understand guided backprop.

In the original paper, the context seems to be maximizing the signal to a neuron, and for guided backprop, the paper sets all negative gradients to zero.

In this implementation, since the maximization is done by minimizing the negative of the signal, wouldn't that lead to a sign change in the gradients? As a result, shouldn't the positive gradients be set to zero instead?

I might be missing something. I would appreciate it if you could shed some light on this.
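To make my question concrete, here is a minimal sketch of the guided backprop rule as I understand it from the paper, using a toy model and a standard PyTorch backward hook (illustrative only, not this repo's code):

```python
import torch
import torch.nn as nn

def guided_relu_hook(module, grad_in, grad_out):
    # grad_in[0] is the gradient flowing back through the ReLU; autograd has
    # already zeroed positions where the forward activation was not positive.
    # Clamping at zero additionally discards negative gradients, which is how
    # I read the guided backprop rule in the paper.
    return (torch.clamp(grad_in[0], min=0.0),)

# Toy CNN standing in for a real model (non-inplace ReLUs).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
for m in model.modules():
    if isinstance(m, nn.ReLU):
        m.register_full_backward_hook(guided_relu_hook)

x = torch.randn(1, 3, 32, 32, requires_grad=True)
score = model(x)[0, 5]   # raw class/neuron score, no negation involved
score.backward()
saliency = x.grad        # input gradients with the guided rule applied
```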

utkuozbulak commented 3 years ago

Hello,

Sorry for the late reply. No, the signature is correct. You can convince yourself by experimenting with maximization/minimization of an activation using adversarial techniques.

TonyWZ commented 3 years ago

Thanks for the reply. Just to clarify, by "signature" are you referring to the sign?

If it is not too much trouble, could you point out the flaw in my reasoning? I know it works empirically, but I am having trouble seeing where I went wrong. Thank you.

utkuozbulak commented 3 years ago

Maximizing an objective is just minimizing the negative of the same objective: negating the objective also negates its gradient, so a descent step on the negated objective is exactly the same update as an ascent step on the original one.
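For example, a quick sanity check (toy objective standing in for an activation; values are illustrative only):

```python
import torch

def f(x):
    # Toy objective standing in for the activation we want to maximize
    return (x ** 2).sum()

x1 = torch.tensor([1.0, -2.0], requires_grad=True)
x2 = x1.detach().clone().requires_grad_(True)

f(x1).backward()
ascent_step = 0.1 * x1.grad        # gradient ascent on f:   x <- x + lr * grad f(x)

(-f(x2)).backward()
descent_step = -0.1 * x2.grad      # gradient descent on -f: x <- x - lr * grad(-f(x))

print(torch.allclose(ascent_step, descent_step))  # True: the updates are identical
```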

I suggest you derive what it means to add the gradient of the output with respect to the input back onto the input, and see how a positive gradient and a negative gradient change the behavior. You will get more comfortable with the concept of maximization/minimization.
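A minimal sketch of that exercise (a toy linear layer standing in for a neuron; names are illustrative only):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
neuron = nn.Linear(4, 1)                   # stand-in for a single neuron
x = torch.randn(1, 4, requires_grad=True)

activation = neuron(x).sum()
activation.backward()
grad = x.grad                              # d(activation) / d(input)

with torch.no_grad():
    up = neuron(x + 0.1 * grad).sum()      # adding the gradient raises the activation
    down = neuron(x - 0.1 * grad).sum()    # subtracting it lowers the activation

print(up > activation, down < activation)  # tensor(True) tensor(True)
```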