mahmoodlab / CLAM

Data-efficient and weakly supervised computational pathology on whole slide images - Nature Biomedical Engineering
http://clam.mahmoodlab.org
GNU General Public License v3.0

Does the heatmap visualization code always visualize the heatmap of the first class? #239

Closed minhmanho closed 5 months ago

minhmanho commented 5 months ago

Hi,

Thank you for this incredible work. Does the heatmap visualization code always visualize the heatmap of the first class? Concretely, after computing the attention scores A with the shape of (n_classes, N):

https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/models/model_clam.py#L207

this tensor is then flattened:

https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/wsi_core/WholeSlideImage.py#L528

and assigned to the tile coords:

https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/wsi_core/WholeSlideImage.py#L578

Therefore, it always uses the first len(coords) entries of the flattened scores for the heatmap. Should I index A[1, :] to visualize the attention scores of class 1 instead of flattening?
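To make the concern concrete, here is a small illustrative sketch (toy data, not CLAM's actual code): flattening a (n_classes, N) tensor and keeping the first N values is the same as taking row 0, i.e. the first class's scores.

```python
import numpy as np

n_classes, N = 2, 5
A = np.arange(n_classes * N, dtype=float).reshape(n_classes, N)  # toy attention scores

# Flattening yields n_classes * N values; the first N of them
# are exactly A[0, :], i.e. the scores of the first class.
flat_first_N = A.flatten()[:N]
assert np.array_equal(flat_first_N, A[0, :])

# To visualize class 1 instead, index the row explicitly:
scores_class1 = A[1, :]
```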

Man

fedshyvana commented 5 months ago

If it's [n_classes, N] like the case of CLAM_MB, the branch that corresponds to the predicted class is visualized by default: https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/vis_utils/heatmap_utils.py#L51
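A minimal sketch of that default behavior (illustrative names and shapes, not CLAM's exact API): when A has one branch per class, the row matching the model's predicted class is the one that gets visualized.

```python
import numpy as np

def select_attention(A, pred_class):
    """Pick the attention branch for the predicted class (multi-branch case)."""
    if A.shape[0] > 1:   # multi-branch: one row of scores per class
        return A[pred_class]
    return A[0]          # single-branch: only one row exists

A = np.array([[0.1, 0.7, 0.2],
              [0.5, 0.3, 0.2]])
pred = 1                            # suppose the model predicts class 1
scores = select_attention(A, pred)  # -> row 1 of A
```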

minhmanho commented 5 months ago

That's very helpful. Thanks.

thomascong121 commented 2 months ago

Hi @minhmanho @fedshyvana, thanks for the discussion here, which is really helpful. I wonder what the attention score stands for in CLAM_SB. Since the attention score has shape [1, N], does it always represent the score for the first class?

minhmanho commented 2 months ago

In this case, it is for the predicted class, not always for the first class. It can be interpreted as indicating which patches (among the N patches) are informative for classification in general.
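A toy sketch of why the single-branch scores are class-agnostic (assumed shapes, not CLAM's code): the same [1, N] attention pools the patch features into one slide-level feature, which every class head then consumes, so no class has its own attention map.

```python
import numpy as np

N, d, n_classes = 4, 3, 2
rng = np.random.default_rng(0)
h = rng.standard_normal((N, d))          # patch features
A = rng.standard_normal((1, N))          # single attention branch
A = np.exp(A) / np.exp(A).sum()          # softmax over the N patches

M = A @ h                                # [1, d] pooled slide-level feature
W = rng.standard_normal((n_classes, d))  # per-class classifier weights
logits = M @ W.T                         # both classes share the same A
```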