Closed: teresaconc closed this issue 4 years ago
If you look at the code for visualize_cam, you can see that it calls visualize_cam_with_losses, which returns normalized gradients for a single class. Because each class's map is normalized independently, the CAM for another class may look "stronger" than the one for the correct class.
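To illustrate the effect (this is a minimal numpy sketch of independent min-max normalization, not the library's actual code), normalizing each class's map on its own erases any cross-class magnitude comparison:

```python
import numpy as np

def normalize(cam):
    # Min-max normalize a single heatmap to [0, 1], as many CAM
    # implementations do before rendering it as an image.
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)

# Hypothetical raw (un-normalized) maps: class 2 genuinely dominates.
raw_class_2 = np.random.rand(7, 7) * 5.0   # strong activations
raw_class_0 = np.random.rand(7, 7) * 0.1   # weak activations

# After per-class normalization both maps span [0, 1], so the weak
# class can look just as "strong" when plotted as a heatmap.
print(normalize(raw_class_2).max(), normalize(raw_class_0).max())  # ~1.0, ~1.0
```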
Thank you for your interesting question and for the intuitive answer above. (Strictly speaking, even if the gradients were NOT normalized, the CAM values would NOT correspond to the model's predicted score.)
For now, we'll close this issue, but please feel free to reopen it whenever you need. Thanks!
This is probably more of a theoretical question than a practical issue. When using Grad-CAM, is it expected that the class producing the strongest heatmap is the predicted class?
I am using an Inception-V3 network fine-tuned for a 5-class classification problem. The class activation maps produced by visualize_cam seem reasonable in most cases. However, I have noticed that sometimes my predicted class does not produce a particularly strong heatmap. For instance, I have an image with a predicted score of 0.78 for class 2, but the activation maps produced for classes 0 and 1 are much "stronger" than the one for class 2.
Is this expected behavior? Or might I have a bug in my implementation? I was under the impression that the strongest class activation map should be the one for the predicted class, but I am not sure whether this is inherently true in all cases.
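For concreteness, generating one map per class looks roughly like this. This is a minimal sketch assuming the keras-vis `visualize_cam` API; the model here is a placeholder ImageNet Inception-V3 rather than the fine-tuned 5-class model, and the input is random dummy data:

```python
import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input
from vis.utils import utils
from vis.visualization import visualize_cam

# Placeholder model standing in for the fine-tuned 5-class Inception-V3.
model = InceptionV3(weights='imagenet')
layer_idx = utils.find_layer_idx(model, 'predictions')

# Dummy input standing in for a real preprocessed image.
img = preprocess_input(np.random.rand(299, 299, 3) * 255.0)

# One Grad-CAM heatmap per class of interest; each returned map is
# normalized independently, so intensities are not comparable across classes.
cams = [visualize_cam(model, layer_idx, filter_indices=c, seed_input=img)
        for c in range(5)]
```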