keisen / tf-keras-vis

Neural network visualization toolkit for tf.keras
https://keisen.github.io/tf-keras-vis-docs/
MIT License

Guided Backpropagation #49

Closed. estanley16 closed this issue 3 years ago

estanley16 commented 3 years ago

Hi @keisen, thanks for putting this together. Is there a way to generate a guided backpropagation saliency map with a custom function passed as the gradient_modifier?

keisen commented 3 years ago

@estanley16, although tf-keras-vis does NOT currently provide guided backpropagation, we will add it as a new feature if you can wait for it. If you need it immediately, you can implement it yourself with a function passed to Saliency's constructor as the model_modifier argument.
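
For reference, model_modifier is simply a callable that receives the model and returns a (possibly modified) model. Here is a minimal sketch of how such a function plugs into Saliency's constructor; this one only does the usual softmax-to-linear replacement rather than guided backprop itself, and the VGG16 model is just an example:

```python
import tensorflow as tf
from tf_keras_vis.saliency import Saliency

# Any tf.keras model works; VGG16 is only an example here.
model = tf.keras.applications.VGG16(weights="imagenet")

def model_modifier(m):
    # Replace the last layer's softmax with a linear activation so that
    # gradients are computed with respect to the raw class scores.
    m.layers[-1].activation = tf.keras.activations.linear
    return m

saliency = Saliency(model, model_modifier=model_modifier, clone=True)
```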

Thanks!

eyaler commented 3 years ago

maybe something like:

```python
import tensorflow as tf

@tf.custom_gradient
def guidedRelu(x):
    # ReLU whose gradient passes only positive gradients through positive activations.
    def grad(dy):
        return tf.cast(dy > 0, "float32") * tf.cast(x > 0, "float32") * dy
    return tf.nn.relu(x), grad

def modify_model(model):
    # Swap ReLU for guidedRelu on every layer that has an activation
    # (skipping the input layer).
    for layer in [layer for layer in model.layers[1:] if hasattr(layer, "activation")]:
        if layer.activation == tf.keras.activations.relu:
            layer.activation = guidedRelu
    return model
```
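
To get guided backprop saliency maps, the modifier could then be passed to Saliency, roughly like this (an untested sketch; it assumes the v0.6-style tf-keras-vis API where a plain callable can be passed as the score function, and the class index 281 is arbitrary):

```python
import numpy as np
import tensorflow as tf
from tf_keras_vis.saliency import Saliency

# Any ReLU-based tf.keras model; VGG16 is just an example.
model = tf.keras.applications.VGG16(weights="imagenet")

# A batch of preprocessed inputs, e.g. shape (N, 224, 224, 3).
images = np.random.rand(1, 224, 224, 3).astype("float32")

# Score function: select the output unit for the class of interest.
score = lambda output: output[:, 281]

# modify_model (defined above) swaps ReLU for guidedRelu in the cloned model.
saliency = Saliency(model, model_modifier=modify_model, clone=True)
guided_maps = saliency(score, images)  # one 2-D saliency map per input
```
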
ISipi commented 3 years ago

I'd personally be against using Guided Backpropagation since papers like Adebayo et al. 2018 (Sanity Checks for Saliency Maps) and Srinivas and Fleuret 2021 (Rethinking the Role of Gradient-Based Attribution Methods for Model Interpretability) can both be taken as arguments against using saliency methods based on input-gradients.

eyaler commented 3 years ago

I have found guided backprop quite useful in real-world applications. It is also part of Guided Grad-CAM, as suggested in the Grad-CAM paper. As long as we have vanilla gradients, guided backprop deserves inclusion, at the very least as a canonical baseline method.

ISipi commented 3 years ago

> I have found guided backprop quite useful in real-world applications. It is also part of Guided Grad-CAM, as suggested in the Grad-CAM paper. As long as we have vanilla gradients, guided backprop deserves inclusion, at the very least as a canonical baseline method.

May I suggest you check this page: https://glassboxmedicine.com/2019/10/12/guided-grad-cam-is-broken-sanity-checks-for-saliency-maps/

keisen commented 3 years ago

@eyaler, thank you for providing your code snippet! @ISipi, thank you for introducing this great article! I agree with you.

Originally, because of the known problems with guided backprop, I deliberately left out a guided backprop module when tf-keras-vis was derived from raghakot/keras-vis.

But recently I changed my mind.

Every method has its own pros and cons, and it's hard to evaluate them objectively and decide which will be useful for everyone in every situation. A method that is good for one user may NOT be useful for another; that judgement is best left to individual users. (That is, there may be users for whom guided backprop is useful, and of course users for whom it is not.) So I believe tf-keras-vis should provide as many methods as we can, regardless of whether they are considered good methods or not.

We will implement a guided backprop module in v0.7.0 or higher. Thanks!

ISipi commented 3 years ago

@keisen Thanks for taking my argument into consideration. I understand that giving users the option to choose is important. However, may I suggest that you at least include a sort of "use at your own peril" warning label and a link to either the above article or the original arXiv paper on sanity checks for saliency maps?

keisen commented 3 years ago

@ISipi, thank you for your great suggestion! I'm going to include a warning comment in the guided backprop code (a rough sketch of what that could look like is at the end of this comment).

Thanks!

(To be honest, I'd like to publish API and Getting Started documentation on a website. Ideally it would also include knowledge about visualization methods like the above. If I can find the time, I'll do that too.)
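
For illustration, the warning could be emitted when the visualizer is created, along these lines (purely a hypothetical sketch; the GuidedBackprop class name and its internals are assumptions, not the actual implementation):

```python
import warnings

class GuidedBackprop:
    """Hypothetical guided backprop visualizer (illustration only)."""

    def __init__(self, model, model_modifier=None, clone=True):
        # Use-at-your-own-peril warning, as suggested above.
        warnings.warn(
            "Guided Backpropagation can behave like an edge detector rather than "
            "explaining the model. See Adebayo et al. 2018, 'Sanity Checks for "
            "Saliency Maps' (https://arxiv.org/abs/1810.03292).",
            UserWarning,
        )
        self.model = model
        # ...guided backprop setup (e.g., replacing ReLU gradients) would go here...
```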