Regarding the first question: in Module we have mod.get_input_grads(), see https://github.com/dmlc/mxnet/blob/master/python/mxnet/module/module.py#L595-L614.
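For anyone finding this later, here is a minimal sketch of how that API can be wired up to get the gradient of a single output neuron with respect to the input image. The network, shapes, and neuron index below are placeholders, not anything from this thread: bind with inputs_need_grad=True, run one forward pass, then backpropagate a one-hot output gradient to select the neuron.

```python
import mxnet as mx
import numpy as np

# Placeholder network; substitute the symbol ending at the layer you want to probe.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data=data, num_hidden=10, name='fc')

mod = mx.mod.Module(net, data_names=['data'], label_names=None)
# inputs_need_grad=True is what makes get_input_grads() legal later.
mod.bind(data_shapes=[('data', (1, 3, 224, 224))],
         for_training=True, inputs_need_grad=True)
mod.init_params()

img = mx.nd.ones((1, 3, 224, 224))  # stand-in for a real image
mod.forward(mx.io.DataBatch([img]), is_train=True)

# Select a single neuron by backpropagating a one-hot output gradient.
onehot = np.zeros(mod.get_outputs()[0].shape)
onehot[0, 5] = 1.0  # neuron index 5 is an arbitrary example
mod.backward(out_grads=[mx.nd.array(onehot)])

grad_wrt_image = mod.get_input_grads()[0]  # same shape as the input image
print(grad_wrt_image.shape)
```

Note that get_input_grads() is only valid when the module was bound with inputs_need_grad=True; otherwise the executor does not allocate gradient buffers for the data.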
Thanks @sxjscience, I just got familiar with the Module API, and it seems it can take care of the first problem. What about the Guided ReLU? @piiswrong, is there a way to change the ReLU activations of a pre-trained model to Guided ReLU and save the new model, to be used for network visualization?
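There is no built-in switch for this, but one plausible route is a sketch along the following lines. It assumes a parameter-free custom operator registered as 'GuidedRelu' (one possible implementation is sketched at the end of this thread), and get_symbol_guided is a hypothetical helper you would write for your own architecture: rebuild the symbol with the custom op in place of each relu Activation, then reload the original weights, which fit unchanged because the swap introduces no new parameters.

```python
import mxnet as mx

def get_symbol_guided():
    # Hypothetical helper: rebuild the pre-trained architecture layer by
    # layer, using the custom op wherever the original definition used
    # mx.sym.Activation(data=..., act_type='relu').
    data = mx.sym.Variable('data')
    net = mx.sym.Convolution(data=data, num_filter=64, kernel=(3, 3), name='conv1')
    net = mx.sym.Custom(data=net, op_type='GuidedRelu')  # was: Activation(relu)
    # ... remaining layers, mirroring the original network ...
    return net

# Prefix and epoch are placeholders for your saved checkpoint.
_, arg_params, aux_params = mx.model.load_checkpoint('model', 0)

mod = mx.mod.Module(get_symbol_guided(), label_names=None)
mod.bind(data_shapes=[('data', (1, 3, 224, 224))],
         for_training=True, inputs_need_grad=True)
# The swap adds no parameters, so the original weights load unchanged.
mod.set_params(arg_params, aux_params)
mod.save_checkpoint('model-guided', 0)  # saves the new symbol plus the old weights
```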
Closing this now because there has been no discussion for a long time. @bhokaal2k, feel free to share your implementation of Guided Backpropagation.
@bhokaal2k, can you share your implementation of Guided Backpropagation? I have run into the same problem and can't find a proper solution.
I am trying to develop a minimalist, easy-to-use, and effective program for visualizing what a deep network learns, along the lines of the work presented in https://arxiv.org/pdf/1412.6806.pdf, known as "Guided Backpropagation". I am aiming to make a notebook example and submit a PR for it. These are the rough steps I need -
The issue I have now is twofold. First, how do I compute the gradient of one neuron with respect to the image? I can easily do this in Theano/Keras/TensorFlow by picking that one neuron and computing its gradient w.r.t. the image. Is it also possible with the MXNet FeedForward/Module classes? I know I can do it in MXNet with hand-defined symbols and binding, but does the same apply to FeedForward/Module as well? Second, is it possible to have the normal ReLU during forward propagation and the Guided ReLU during backpropagation for the same image? Remember, I will be using a network that was trained with ReLU; I only need the Guided ReLU during visualization, and only during back-propagation, not during forward propagation.
Possible solution -
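The write-up breaks off here. One possible solution, sketched below under my own naming and not necessarily what the author had in mind, is a custom operator whose forward pass is the ordinary ReLU and whose backward pass applies the guided rule from the paper: zero the gradient wherever either the layer input or the incoming gradient is negative. The class names and the 'GuidedRelu' registration string are assumptions of this sketch.

```python
import mxnet as mx

class GuidedRelu(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # Forward pass is the ordinary ReLU: max(x, 0).
        self.assign(out_data[0], req[0], mx.nd.relu(in_data[0]))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # Guided rule: pass the gradient only where the input was positive
        # (the usual ReLU mask) AND the incoming gradient is positive.
        x, g = in_data[0], out_grad[0]
        self.assign(in_grad[0], req[0], g * (x > 0) * (g > 0))

@mx.operator.register('GuidedRelu')
class GuidedReluProp(mx.operator.CustomOpProp):
    def __init__(self):
        # need_top_grad=True: the backward pass consumes the incoming gradient.
        super(GuidedReluProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        # Output shape equals input shape; no auxiliary states.
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return GuidedRelu()
```

Using mx.sym.Custom(data=net, op_type='GuidedRelu') wherever the original symbol used mx.sym.Activation(data=net, act_type='relu') leaves the forward computation, and hence the pre-trained weights, untouched; only the backward rule changes, which is exactly the behavior the question asks for.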