utkuozbulak / pytorch-cnn-visualizations

Pytorch implementation of convolutional neural network visualization techniques
MIT License
7.81k stars 1.49k forks

Adding capability to choose device other than cpu and fixing/generalize #96

Open arnabdas2019ovgu opened 3 years ago

arnabdas2019ovgu commented 3 years ago

Reference Issues/PRs

None

What does this implementation fix?

These changes focus on making the code more usable and more general.

  1. Run on GPU: Modern architectures are so deep and computationally intensive that running them on the CPU alone can easily produce out-of-memory errors. In this implementation, I have changed grad-cam, guided-backprop, integrated-gradient, layer-activation-with-guided-backprop, score-cam, and vanilla-backprop so that they can be initialized with a desired device ID. The model's forward pass and backward gradient computation can then run on GPU devices. If no device is provided at initialization, the code simply follows the existing CPU behavior.

  2. Introduction of a forward hook: When building models in PyTorch, the model class and its forward function do not necessarily contain the same layers. For example, designers often do not define Flatten or Concat operations as modules in the class and instead perform them only inside forward(). So in the CAM extractors, the forward_pass_on_convolutions function, which loops over the model's layers until the desired layer is reached, may not reproduce the actual behavior of the model's forward function. Instead, we register a forward hook on the desired conv layer and let the model complete its forward pass without intervention. This way we obtain both the conv layer's output and the correct output of the entire model in a far more general way, without depending on the structure of the forward function.
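To illustrate point 1, here is a minimal sketch of how a visualizer in the vanilla-backprop style might accept an optional device while defaulting to the existing CPU behavior. The class and method names are illustrative, not the repository's exact API:

```python
import torch
import torch.nn as nn

class VanillaBackpropSketch:
    """Illustrative gradient visualizer with an optional device argument."""

    def __init__(self, model, device=None):
        # Default to CPU so existing behaviour is unchanged when no
        # device is supplied, as described in the PR.
        self.device = torch.device(device) if device is not None else torch.device("cpu")
        self.model = model.to(self.device)
        self.model.eval()
        self.gradients = None
        # Capture the gradient flowing into the first layer,
        # i.e. the gradient with respect to the input image.
        first_layer = next(self.model.children())
        first_layer.register_full_backward_hook(self._hook)

    def _hook(self, module, grad_input, grad_output):
        self.gradients = grad_input[0]

    def generate_gradients(self, image, target_class):
        # Move the input to the same device as the model before the
        # forward pass; the backward pass then also runs on that device.
        image = image.to(self.device).requires_grad_(True)
        output = self.model(image)
        self.model.zero_grad()
        one_hot = torch.zeros_like(output)
        one_hot[0, target_class] = 1.0
        output.backward(gradient=one_hot)
        return self.gradients
```

With a CUDA machine one would pass e.g. `device="cuda:0"`; omitting the argument keeps everything on CPU exactly as before.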
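The forward-hook idea in point 2 can be sketched as follows. The toy model below performs `torch.flatten` only inside `forward()`, so iterating over its submodules would miss that step; registering a forward hook on the target conv layer avoids the problem entirely. The model and helper names are illustrative, not the repository's exact code:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # forward() calls torch.flatten, which is not a submodule -- exactly
    # the case where looping over the model's layers goes wrong.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, 3)
        self.fc = nn.Linear(4 * 3 * 3, 10)

    def forward(self, x):
        x = self.conv(x)
        x = torch.flatten(x, 1)  # done only in forward(), no Flatten module
        return self.fc(x)

def forward_with_hook(model, target_layer, image):
    """Run the model's own forward pass, capturing target_layer's output."""
    captured = {}

    def hook(module, inputs, output):
        captured["activation"] = output

    handle = target_layer.register_forward_hook(hook)
    try:
        # The model's real forward() runs without any intervention.
        model_output = model(image)
    finally:
        handle.remove()  # always detach the hook afterwards
    return captured["activation"], model_output
```

A call such as `forward_with_hook(net, net.conv, image)` returns both the conv activation and the model's true output in one pass, which is the generality the PR is after.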

Other comments

No

utkuozbulak commented 3 years ago

Hello, sorry for the late reply. The reason I did not introduce a device option was to avoid any code beyond what is necessary for the techniques themselves, to minimize confusion. Hence, I'm unwilling to incorporate your PR. I will, however, keep it open so that people who want to run the code on GPU can refer to it.