A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling. (ICLR 2018)
Hello again,
While reading more about attribution methods and revisiting the code in methods.py, I stumbled on a line I didn't understand. In the epsilon-LRP paper, the gradient is computed as:

`grad * op_out / (op_in + eps)`

whereas methods.py uses:

`grad * output / (input + eps * tf.where(input >= 0, tf.ones_like(input), -1 * tf.ones_like(input)))`
Could you help me understand why epsilon is multiplied by that `tf.where` sign term?
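For context, here is a minimal numerical sketch (NumPy stand-in, not the actual TensorFlow code) contrasting the two denominators. The `tf.where` expression evaluates to the sign of `input`, so my reading is that the second form adds epsilon in the same direction as the denominator's own sign; the values and variable names below are illustrative only:

```python
import numpy as np

# Toy pre-activations, including a small negative value near zero.
z = np.array([2.0, -0.001, -3.0])
eps = 0.01

# Paper form: denominator z + eps. For a small negative z this can
# cross zero and flip sign, destabilizing the quotient.
naive = z + eps

# methods.py form: eps is scaled by sign(z) (the tf.where expression),
# so the stabilizer always pushes the denominator AWAY from zero.
signed = z + eps * np.where(z >= 0, 1.0, -1.0)

print(naive)   # second entry has flipped from negative to positive
print(signed)  # every entry keeps its sign, magnitude only grows
```

With the plain `z + eps` denominator the second entry becomes `0.009` (sign flipped), while the sign-matched version gives `-0.011`, keeping the denominator bounded away from zero.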