Closed: minhmanho closed this issue 5 months ago
If it's [n_classes, N] like the case of CLAM_MB, the branch that corresponds to the predicted class is visualized by default: https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/vis_utils/heatmap_utils.py#L51
That's very helpful. Thanks.
Hi @minhmanho @fedshyvana, thanks for the discussion here, which is really helpful. I wonder what the attention score stands for in CLAM_SB. Since the attention score has shape [1, N], does it always represent the score for the first class?
In this case, it is for the predicted class, not always for the first class. It can be interpreted as indicating which patches (among the N patches) are useful for classification in general.
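For context, here is a minimal sketch (with hypothetical shapes and variable names, not the actual CLAM code) of how the predicted-class branch would be selected from an [n_classes, N] attention tensor, versus the single-branch CLAM_SB case:

```python
import numpy as np

# Hypothetical sizes: 3 classes, 5 patches (illustrative only)
n_classes, N = 3, 5
A_mb = np.random.rand(n_classes, N)  # CLAM_MB: one attention row per class branch
Y_hat = 1                            # predicted class index from the model

# CLAM_MB-style: visualize the branch matching the predicted class
scores = A_mb[Y_hat]                 # shape (N,) -- one score per patch

# CLAM_SB: A already has shape (1, N); the single row is shared by all classes
A_sb = np.random.rand(1, N)
scores_sb = A_sb[0]                  # same (N,) vector regardless of prediction

assert scores.shape == scores_sb.shape == (N,)
```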
Hi,
Thank you for this incredible work. Does the heatmap visualization code always visualize the heatmap of the first class? Concretely, after computing the attention scores A with shape (n_classes, N):
https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/models/model_clam.py#L207
this tensor is then flattened:
https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/wsi_core/WholeSlideImage.py#L528
and assigned to the tile coords:
https://github.com/mahmoodlab/CLAM/blob/206bf2dfddd5a297513087358302c8d9b2233192/wsi_core/WholeSlideImage.py#L578
Therefore, it always uses the first len(coords) entries of the flattened scores for the heatmaps. Should I index A[1, :] to visualize the attention scores of class 1, instead of flattening?
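To illustrate the concern with made-up values (a sketch, not the actual CLAM code): flattening an (n_classes, N) array row-major and keeping the first len(coords) entries yields only row 0's scores, whereas explicit row indexing selects the class you want:

```python
import numpy as np

n_classes, N = 2, 4
A = np.arange(n_classes * N, dtype=float).reshape(n_classes, N)
# A = [[0., 1., 2., 3.],
#      [4., 5., 6., 7.]]

coords = list(range(N))           # stand-in for the tile coordinates

# Flattening then truncating to len(coords) keeps only row 0 (class 0)
flat = A.flatten()[:len(coords)]  # -> [0., 1., 2., 3.]

# Indexing the desired class row gives that class's scores directly
class1 = A[1, :]                  # -> [4., 5., 6., 7.]

assert np.array_equal(flat, A[0])
assert np.array_equal(class1, np.array([4., 5., 6., 7.]))
```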
Man