marcoancona / DeepExplain

A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
https://arxiv.org/abs/1711.06104
MIT License

Question about Integrated Gradients Implementation #23

Closed: jsu27 closed this issue 6 years ago

jsu27 commented 6 years ago

Hi, I have a question regarding the implementation of integrated gradients. In line https://github.com/marcoancona/DeepExplain/blob/8d7f748e1d8eae7d57444c6e42119dadc47287e9/deepexplain/tensorflow/methods.py#L215, `xs_mod` (which appears to be the linear interpolation from the model baseline to the input value) is set to `self.xs * alpha`. However, this seems to assume that the model baseline is all zeros. For baselines that are not all zero, I believe `xs_mod` should be `baseline + alpha * (xs - baseline)` instead (see the sketch below) - is that correct?

( @AvantiShri )
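For illustration, here is a minimal NumPy sketch of integrated gradients with an explicit (possibly non-zero) baseline. The names `integrated_gradients` and `grad_fn` are hypothetical and not part of DeepExplain's API; this is just to show the interpolation step in question.

```python
import numpy as np

def integrated_gradients(xs, baseline, grad_fn, steps=50):
    """Approximate integrated gradients of `xs` with respect to `baseline`.

    `grad_fn(x)` is assumed to return the gradient of the target output
    w.r.t. `x` (hypothetical callback, not DeepExplain's API).
    """
    # Interpolate from the baseline to the input:
    #   x(alpha) = baseline + alpha * (xs - baseline),  alpha in (0, 1]
    alphas = np.linspace(1.0 / steps, 1.0, steps)
    grads = np.stack(
        [grad_fn(baseline + a * (xs - baseline)) for a in alphas]
    )
    # Average the gradients along the path and scale by (xs - baseline).
    return (xs - baseline) * grads.mean(axis=0)
```

Note that with an all-zero baseline the interpolation reduces to `xs * alpha`, which matches the current code; the discrepancy only appears for non-zero baselines.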

marcoancona commented 6 years ago

Totally right, thanks for pointing this out! The tests failed to catch this issue. I will fix it as soon as possible.