pytorch / captum

Model interpretability and understanding for PyTorch
https://captum.ai
BSD 3-Clause "New" or "Revised" License

Computing contributions w.r.t. logits rather than final activations #91

Closed · AvantiShri closed this 5 years ago

AvantiShri commented 5 years ago

Often, in practice, we wish to compute contributions w.r.t. the logits that feed the final sigmoid/softmax, rather than w.r.t. the final network output itself. This avoids artifacts caused by the saturating nature of the sigmoid/softmax, and it matters when comparing attributions between examples. It is particularly relevant if gradient*input is used as the attribution method, because for examples with very confident predictions the sigmoid/softmax outputs saturate and the gradients approach zero. I'm wondering if it may be worth mentioning this in the documentation - in the current "getting started", the toy model has a sigmoid output:

(Screenshot: the getting-started toy model, whose forward pass ends in a sigmoid.)
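To make the saturation artifact concrete, here is a minimal, hypothetical demonstration (the logit scale is illustrative, not taken from the toy model): for a confidently classified input, gradient*input w.r.t. the sigmoid output is nearly zero, while w.r.t. the logit it is not.

```python
import torch

# Illustrative example: gradient*input through a saturated sigmoid.
x = torch.tensor([10.0], requires_grad=True)

prob = torch.sigmoid(2.0 * x)   # confident prediction: sigmoid(20) ~ 1
prob.backward()
print(x.grad * x)               # ~4e-8: attribution vanishes

x.grad = None
logit = 2.0 * x                 # same input, pre-sigmoid logit
logit.backward()
print(x.grad * x)               # 20.0: attribution survives
```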

I'm concerned that a naive user may try to compare the magnitudes of attributions across different examples without realizing that, for sigmoid/softmax outputs, it may be worth removing the final nonlinearity before making such a comparison. We discuss this in Section 3.6 of the DeepLIFT paper. Ideally there would be an option in Captum to ignore the final nonlinearity, but I realize that may not be trivial to add. Sorry if this is already addressed and I missed it.
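One way to get attributions w.r.t. the logits today, without any changes to Captum, is to wrap the model so the forward pass stops before the final nonlinearity. A hypothetical sketch (`ToyModel` and `LogitWrapper` are illustrative names; only `IntegratedGradients` is actual Captum API):

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

class ToyModel(nn.Module):
    # Hypothetical stand-in for a model whose forward ends in a sigmoid.
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(3, 1)

    def forward(self, x):
        return torch.sigmoid(self.linear(x))  # saturates for confident inputs

class LogitWrapper(nn.Module):
    # Exposes the pre-sigmoid logits so attributions are computed
    # w.r.t. a non-saturating output.
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model.linear(x)  # skip the final sigmoid

model = ToyModel()
x = torch.randn(4, 3)

# Attributions w.r.t. the sigmoid output vs. w.r.t. the logits.
attr_prob = IntegratedGradients(model).attribute(x, target=0)
attr_logit = IntegratedGradients(LogitWrapper(model)).attribute(x, target=0)
```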

NarineK commented 5 years ago

Hi Avanti, thank you so much for the feedback, that's a great point! I remember reading about this in your paper a while ago. Since this is a toy model, I didn't pay attention to the last layer, but it is a good point. I will update either the documentation or the examples in getting started.

Currently, DeepLift's implementation doesn't support intermediate layers or neurons, but we can easily support that with a forward hook (a sketch follows below). Due to time constraints we didn't add it to the release, but we will make it available on the master branch soon.
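For reference, the forward-hook idea looks roughly like this (a sketch with made-up layer and helper names, not the eventual Captum implementation): a hook registered on a submodule captures its output during the forward pass, which is what layer/neuron attribution needs.

```python
import torch
import torch.nn as nn

# Capture an intermediate layer's output during the forward pass.
activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 1))
handle = model[1].register_forward_hook(save_output("relu"))

out = model(torch.randn(4, 3))
print(activations["relu"].shape)  # torch.Size([4, 5])

handle.remove()  # clean up when done
```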

Thank you so much! We really value your feedback 👍

NarineK commented 5 years ago

Updated the getting started.

orionr commented 5 years ago

@NarineK, is this finished so we can close it, or is there some additional feature request here? Thanks all!

gabrieltseng commented 5 years ago

I am happy to give this a shot!

NarineK commented 5 years ago

Thank you so much, @gabrieltseng! Of course, go ahead!

One thing to note: I'm expanding DeepLIFT to support Layer and Neuron attribution, and there may also be changes in the current implementation of DeepLift because it has some issues with models wrapped in DataParallel. I'll submit the PR soon, but you can start working on this feature.

NarineK commented 5 years ago

Merged with #142. Closing.