Closed xBorja042 closed 3 years ago
Thank you for starring the project!
I have a question: is it possible to select which layer of the model to visualize these explainability effects for, instead of just using the last one?
I assume that you want to know how to visualize a layer other than the last convolutional layer with Gradcam, Gradcam++, or Scorecam. (If I have misunderstood, please point it out.)
To do so, you can use the penultimate_layer option of Gradcam#__call__().
If you specify the name or index of the layer you want to visualize, the CAM corresponding to that layer will be generated. Please see the API document below for details.
Thanks!
Hello @keisen!
Yes, you understood me perfectly, and I have checked that your solution works well. Is it possible to do this with Saliency? If not, why not?
Thanks a lot and kind regards,
Borja
Is it possible to do this with Saliency? If not, why not?
Although both methods can locate the regions of arbitrary objects in the input image, they work in different ways.
To visualize Gradcam, we need the output values of, and the gradient with respect to, an intermediate layer. On the other hand, to visualize a saliency map, we need only the gradient with respect to the model input. That is, Saliency does NOT need any information about intermediate layers.
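The difference can be made concrete with a tiny pure-NumPy sketch (my own toy example, not the library's code): a linear "network" where both gradients can be written down exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": input x -> hidden activations A -> scalar score.
# Linear layers keep the gradients exact and easy to write down.
x = rng.normal(size=(8, 8, 3))       # input "image", HxWxC
W1 = rng.normal(size=(3, 4)) * 0.1   # 1x1 conv: 3 -> 4 channels
w_out = rng.normal(size=4)           # global average pool, then dot -> score

A = x @ W1                           # hidden-layer activations, (8, 8, 4)
score = A.mean(axis=(0, 1)) @ w_out  # scalar model output

H, W = x.shape[:2]

# Grad-CAM needs BOTH the layer's activations A and d(score)/dA ...
dA = np.broadcast_to(w_out / (H * W), A.shape)
weights = dA.mean(axis=(0, 1))                # per-channel weights
cam = np.maximum((A * weights).sum(-1), 0)    # ReLU'd heatmap, (8, 8)

# ... while a saliency map needs ONLY d(score)/dx: the whole stack
# collapses into one matrix, and no intermediate activations appear.
dx = np.broadcast_to((W1 @ w_out) / (H * W), x.shape)
saliency = np.abs(dx).max(axis=-1)            # per-pixel map, (8, 8)
```

Note that `cam` depends on which layer's activations you plug in, which is why Gradcam exposes penultimate_layer, while `saliency` is defined purely at the input and has no layer to choose.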
Thanks!
Thanks a lot, @keisen! I am closing this issue!
Hello. I have already starred your package. It seems very useful and accurate. I have a question: is it possible to select which layer of the model to visualize these explainability effects for, instead of just using the last one? This would be helpful for understanding what each layer is learning; e.g., layers with a lower number of filters tend to learn bigger features, and vice versa. Does this make sense to you?
Thanks a lot and kind regards. Borja