A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
Hi, I have a question regarding the implementation of integrated gradients. In line https://github.com/marcoancona/DeepExplain/blob/8d7f748e1d8eae7d57444c6e42119dadc47287e9/deepexplain/tensorflow/methods.py#L215, `xs_mod` (which appears to be the linear interpolation from the baseline to the input) is set to `self.xs * alpha`. However, this seems to assume that the baseline is all zeros. For a baseline that is not all zeros, I believe `xs_mod` should instead be `baseline + alpha * (xs - baseline)`. Is that correct?
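For reference, the general-baseline form of integrated gradients can be sketched as below. This is a minimal NumPy illustration, not the DeepExplain implementation; the function names and the use of an analytic gradient are assumptions for the example. It interpolates along `baseline + alpha * (xs - baseline)` and averages gradients at those points, which reduces to `xs * alpha` only when `baseline` is zero:

```python
import numpy as np

def integrated_gradients(grad_f, xs, baseline, steps=50):
    # IG_i ≈ (xs_i - baseline_i) * mean over alpha of dF/dx_i
    # evaluated at baseline + alpha * (xs - baseline).
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint Riemann rule
    total = np.zeros_like(xs, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (xs - baseline))
    return (xs - baseline) * total / steps

# Toy model: F(x) = sum(x**2), so grad F(x) = 2x.
f = lambda x: float(np.sum(x ** 2))
grad = lambda x: 2.0 * x

xs = np.array([1.0, 2.0])
baseline = np.array([0.5, 0.0])
attr = integrated_gradients(grad, xs, baseline)

# Completeness axiom: attributions sum to F(xs) - F(baseline).
print(attr, attr.sum(), f(xs) - f(baseline))
```

A quick sanity check is the completeness axiom: the attributions should sum to `F(xs) - F(baseline)`, which the zero-baseline shortcut `xs * alpha` violates whenever the baseline is non-zero.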
(@AvantiShri)