artvandelay / Deep_Inside_Convolutional_Networks

This is a Caffe implementation for visualizing a learned model
MIT License

Absolute value of gradients #4

Closed saiprabhakar closed 7 years ago

saiprabhakar commented 7 years ago

As described in Section 3.1 of the paper, you need to take the absolute value of the gradients before normalizing them. If you are not doing that, then you are not visualizing the pixels that decrease the class score the most under a small change (these are also an important descriptor).
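The step described above can be sketched as follows. This is a minimal NumPy illustration, not the repo's actual code: `saliency_map` is a hypothetical helper, and it assumes the gradient of the class score w.r.t. the input image (e.g. from a Caffe backward pass) is already available as a `(C, H, W)` array.

```python
import numpy as np

def saliency_map(grads):
    """Hypothetical helper: `grads` is the (C, H, W) gradient of the
    class score w.r.t. the input image pixels."""
    # Section 3.1: take the absolute value, then the max over channels
    sal = np.abs(grads).max(axis=0)
    # min-max normalize to [0, 1] for display
    return (sal - sal.min()) / (sal.max() - sal.min())
```

Taking `np.abs` before the channel max and normalization is the point of this issue: it keeps pixels whose gradients are large and negative.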

artvandelay commented 7 years ago

It's been a while since I looked at this code, but I think that after normalization it is effectively the same operation, i.e. the saliency variable should look the same. Tell me if I'm wrong.

saiprabhakar commented 7 years ago

I don't think so. For example, with the gradient array [-0.8, 0, 0.8], your normalization turns it into [0, 0.5, 1.0], but taking the absolute value first and then normalizing gives [1.0, 0, 1.0].
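The counter-example above can be checked in a few lines of NumPy; `minmax_norm` here is a stand-in for whatever min-max normalization the repo applies, not its actual code:

```python
import numpy as np

def minmax_norm(x):
    # scale values linearly to the [0, 1] range
    return (x - x.min()) / (x.max() - x.min())

grad = np.array([-0.8, 0.0, 0.8])

# normalizing the raw gradients maps the large negative value to 0
print(minmax_norm(grad))          # [0.  0.5 1. ]

# taking |grad| first treats both signs of gradient as salient
print(minmax_norm(np.abs(grad)))  # [1. 0. 1.]
```

The two results differ, so normalization alone does not subsume the absolute-value step.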

saiprabhakar commented 7 years ago

I am also able to get a denser saliency map using this tweak.