mehak126 opened 5 years ago
Are attention maps the same as class activation maps in CNNs? I'm having a little trouble understanding how the attention weights are used to represent the maps. Could someone send some links that explain this, if possible?

Attention weights are 2D weights, so they form an image, which you can overlay on the original image to see which objects are the "center of attention" for the CNN. It is similar to class activation maps.

Got it, thanks! :)
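For anyone else landing here, the overlay idea — upsampling the 2D attention weights to the image resolution and blending them in as a heatmap — can be sketched in a few lines of NumPy. This is a minimal illustration, not code from any particular model: the function name, the 7×7 attention grid, and the red-channel heatmap are all assumptions for the example.

```python
import numpy as np

def overlay_attention(image, attn, alpha=0.5):
    """Overlay a small 2D attention map on an image (illustrative sketch).

    image: (H, W, 3) float array with values in [0, 1]
    attn:  (h, w) attention weights, e.g. a 7x7 grid from a conv feature map
    """
    H, W, _ = image.shape
    # Normalize the attention weights to [0, 1]
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    # Nearest-neighbour upsample the small grid to the image resolution
    attn_up = np.kron(attn, np.ones((H // attn.shape[0], W // attn.shape[1])))
    # Paint the attention as a red heatmap and alpha-blend it with the image
    heat = np.zeros_like(image)
    heat[..., 0] = attn_up
    return (1 - alpha) * image + alpha * heat

# Usage: a random 224x224 image and a hypothetical 7x7 attention map
image = np.random.rand(224, 224, 3)
attn = np.random.rand(7, 7)
out = overlay_attention(image, attn)
print(out.shape)  # (224, 224, 3)
```

Regions where the attention weight is high show up brighter in the overlay, which is exactly the "center of attention" visualization described above; class activation maps are rendered the same way, just computed from class-specific weights instead of attention scores.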