keisen / tf-keras-vis

Neural network visualization toolkit for tf.keras
https://keisen.github.io/tf-keras-vis-docs/
MIT License
311 stars, 45 forks

Visualizing Effects from Previous Layers #67

Closed xBorja042 closed 3 years ago

xBorja042 commented 3 years ago

Hello. I have already starred your package. It seems very useful and accurate. I have a question: is it possible to select which layer of the model we want to visualize these explainability effects for, instead of just using the last one? This would be helpful for understanding what each layer is learning, e.g. layers with a lower number of filters tend to learn bigger features, and vice versa. Does this make sense to you?

Thanks a lot and kind regards. Borja

keisen commented 3 years ago

Thank you for starring the project!

I have a question, is it possible to select which layer of the model we want to visualize these explainability effects? Instead of just using the last one.

I assume that you want to know how to visualize a layer other than the last convolutional layer with Gradcam, Gradcam++, or Scorecam. (If I've misunderstood, please point it out.) To do so, you can use the `penultimate_layer` option of `Gradcam#__call__()` below.

https://github.com/keisen/tf-keras-vis/blob/c493e4cdc6a3e9726c5d2eee68cf72c5d316108a/tf_keras_vis/gradcam.py#L29

If you specify the name or index of the layer you want to visualize, the CAM corresponding to that layer will be generated. Please see the API documentation below for details.

Thanks!

xBorja042 commented 3 years ago

Hello @keisen!

Yes, you understood me perfectly, and I have checked that your solution works well. Is it possible to do this with Saliency? If not, why is that?

Thanks a lot and kind regards,

Borja

keisen commented 3 years ago

Is it possible to do this in Saliency? If not, why is that so?

Although both methods can locate the region of an arbitrary object in the input image, they work in different ways.

To compute Gradcam, we need the output values of, and the gradient with respect to, an intermediate layer. On the other hand, to compute a saliency map we only need the gradient with respect to the model input. That is, Saliency does NOT need any information about intermediate layers, so there is no layer to select.
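To make the difference concrete, here is a minimal sketch of a vanilla saliency map computed with plain TensorFlow (not the tf-keras-vis `Saliency` class): the gradient is taken with respect to the input tensor itself, so no intermediate layer is ever referenced. The tiny model and random input are hypothetical stand-ins.

```python
import numpy as np
import tensorflow as tf

# Hypothetical tiny model; substitute your own trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

X = tf.constant(np.random.rand(1, 32, 32, 3).astype('float32'))
with tf.GradientTape() as tape:
    tape.watch(X)           # differentiate w.r.t. the input itself
    score = model(X)[:, 0]  # score for class 0
grads = tape.gradient(score, X)  # same shape as the input

# Collapse the channel axis to get one importance value per pixel.
saliency = tf.reduce_max(tf.abs(grads), axis=-1)
print(saliency.shape)  # (1, 32, 32)
```

Notice that nothing in the computation names a layer: the map comes out at input resolution directly, which is why `Saliency` has no `penultimate_layer`-style option.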

Thanks!

xBorja042 commented 3 years ago

Thanks a lot, @keisen! I'll close this issue.