sebastian-lapuschkin / lrp_toolbox

The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Python. The Toolbox realizes LRP functionality for the Caffe Deep Learning Framework as an extension of Caffe source code published in 10/2015.

Backward function definition #14

Closed vedhas closed 5 years ago

vedhas commented 6 years ago
sebastian-lapuschkin commented 6 years ago

There might be a bit of misunderstanding here.


The purpose of `backward()` is to backpropagate the error gradient *during training*. `Tanh.backward` implements dy/dx (where y = tanh(x)) times the top gradient `DY` given as input from the upper layers, in order to pass the prediction error on to all lower layers.
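As a minimal sketch of that chain-rule step (the class layout and caching convention here are illustrative assumptions, not necessarily the toolbox's exact code):

```python
import numpy as np

class Tanh:
    def forward(self, X):
        # cache the layer output; backward() needs it for dy/dx
        self.Y = np.tanh(X)
        return self.Y

    def backward(self, DY):
        # dy/dx of y = tanh(x) is 1 - y^2, multiplied elementwise
        # with the gradient DY arriving from the layer above
        return (1.0 - self.Y ** 2) * DY
```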


Any function not explicitly defined in one of the classes in the modules package (all of which inherit from modules.Module) can be found in modules.Module.
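A minimal illustration of that lookup rule (the method name used here is hypothetical):

```python
class Module:
    # base class: a subclass that defines no clean() of its own
    # falls back to this implementation via normal Python inheritance
    def clean(self):
        self.X = None

class Flatten(Module):
    pass  # no clean() defined here, so Module.clean is used
```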


forward is in a sense not irrelevant, since it determines the layer's output and populates self.X. You might have a look at the python-wip branch of this repository, which provides some optimized variants for computing LRP (up to 87% more efficient), such as a less computationally and memory expensive reordering of operations and an "lrp-aware" forward pass, which pays for itself once you compute LRP for the same batch more than once.
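To make the "pays for itself" point concrete, here is a rough sketch of the idea for a fully connected layer using the epsilon rule; the method names and layout are illustrative assumptions, not the actual python-wip code:

```python
import numpy as np

class Linear:
    def __init__(self, W, b):
        self.W, self.b = W, b   # W: (D_in, D_out), b: (D_out,)

    def forward(self, X, lrp_aware=False):
        self.X = X
        if lrp_aware:
            # cache the per-input contributions z_ij = x_i * w_ij;
            # repeated LRP calls on this batch can reuse self.Z
            self.Z = X[:, :, np.newaxis] * self.W[np.newaxis, :, :]
            return self.Z.sum(axis=1) + self.b
        return X @ self.W + self.b

    def lrp_epsilon(self, R, eps=1e-2):
        # epsilon rule: R_i = sum_j z_ij / (z_j + eps * sign(z_j)) * R_j
        Zs = self.Z.sum(axis=1) + self.b
        Zs += eps * np.where(Zs >= 0, 1.0, -1.0)
        return (self.Z * (R / Zs)[:, np.newaxis, :]).sum(axis=2)
```

The contribution tensor Z is built once during the forward pass; every subsequent LRP call on the same batch reuses it instead of re-deriving it from self.X and self.W, which is where the amortization comes from.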

Note that the python-wip branch requires Python 3 to run. The numpy-based code is functionally equivalent to the master branch, and there is a very WIP version of an mxnet-based implementation with GPU support (development of which has been stagnating due to the keras software). We might pick up on that later, though.

Maybe @maxkohlbrenner (main dev of the mxnet stuff) would like to comment on that.

sebastian-lapuschkin commented 6 years ago

FYI:

I am glad to inform you that the public alpha of our new analysis toolbox is now online. Based on keras, the new implementation is at least 10 times more efficient (on CPU) than our previous Caffe equivalent, with a measured speedup of up to 520-fold on GPU!

https://github.com/albermax/innvestigate