Closed: AvantiShri closed this 4 years ago
Thanks for taking care!
@enryH Note that this might collide with the other PR; I'm not sure which one to merge first. (It seems this one has the shorter fix?)
Other pull-request for reference: https://github.com/albermax/innvestigate/pull/181
Hello,
@jeremy-wendt alerted me to some slowdowns he was experiencing when running DeepLIFT through the iNNvestigate package. I did some innvestigation (please forgive the terrible pun) and I think I've tracked down the source of the issue: the DeepLIFT function was being recompiled every time `analyze()` was called. This is because `analyze()` was checking whether the analyzer had the attribute `_deep_lift_func` before doing the compilation, when in fact the compiled DeepLIFT function was being stored under the attribute `_func`. Replacing `_deep_lift_func` with `_func` fixes this.

While I was there I also made a couple of other small fixes. First, I noticed that when the neuron selection mode was "index", the batch size was set to the length of the user-provided `X`. However, `X` can sometimes be very large, which can cause out-of-memory errors; I added an argument to the constructor that lets the user specify a batch size (defaulting to 32). Second, I noticed that `progress_update` was set to the very large value 1000000, presumably because you didn't want DeepLIFT printing out progress updates; the way to achieve that is simply to set `progress_update` to `None`, which also avoids the annoying "Done 0" message printed at the beginning.
I've verified the functionality in this github gist: https://colab.research.google.com/gist/AvantiShri/9795532a1f212887c7a5c2de92e1fd4a/jeremy_innvestigate_deeplift.ipynb
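To illustrate the batch-size change, here is a rough sketch of processing `X` in fixed-size chunks rather than all at once (the helper name and structure are my own for illustration; in the actual change, iNNvestigate applies the batch size internally via the new constructor argument):

```python
import numpy as np

def analyze_in_batches(analyze_fn, X, batch_size=32):
    """Run analyze_fn over X in chunks of batch_size and stitch the
    results back together, instead of passing all of X in one batch
    (which can exhaust memory when X is large).
    Illustrative helper, not the actual iNNvestigate internals."""
    results = []
    for start in range(0, len(X), batch_size):
        # Each slice is at most batch_size rows; the last may be smaller.
        results.append(analyze_fn(X[start:start + batch_size]))
    return np.concatenate(results, axis=0)
```

The default of 32 mirrors the default added to the constructor; users with more memory can raise it, and users hitting out-of-memory errors can lower it.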
Let me know if there are any issues and thanks for including DeepLIFT in this package!
-Av (creator of DeepLIFT)