isyangshu / MambaMIL

[MICCAI 2024] Official Code for "MambaMIL: Enhancing Long Sequence Modeling with Sequence Reordering in Computational Pathology"

The perplexing experimental results #22

Closed dmhdmhdmh closed 3 weeks ago

dmhdmhdmh commented 3 weeks ago

In the cancer subtyping experiment, the training results using features extracted by PLIP seem even worse than those using features extracted by ResNet-50. Moreover, I observed that simply using Max-Pooling appears to outperform most of the comparative methods. I want to ask whether you truly conducted your experiments with the necessary rigor and seriousness.

[Screenshot 2024-08-28 233826]
isyangshu commented 3 weeks ago

In [1][2][3], several methods achieve similar performance with features obtained from ResNet-50 and features obtained from PLIP.

[1] Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology
[2] Towards A Generalizable Pathology Foundation Model via Unified Knowledge Distillation
[3] A Multimodal Knowledge-enhanced Whole-slide Pathology Foundation Model

For ResNet-50 features, Max-Pooling only outperforms TransMIL on BRACS-7* and BRACS-7, and achieves the lowest performance in the mean results. On NSCLC-2, Max-Pooling does outperform most of the comparative methods; Mamba2MIL (recent work: https://www.arxiv.org/pdf/2408.15032) reaches a similar conclusion, where Max-Pooling obtains the third-highest performance on the NSCLC dataset.
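For context, the Max-Pooling baseline discussed in this thread aggregates a slide's bag of patch features by an element-wise maximum before classification. The sketch below is a minimal, hypothetical illustration of that baseline (the feature dimension, bag size, and linear head are illustrative assumptions, not the repository's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def max_pool_slide(patch_features: np.ndarray) -> np.ndarray:
    """Aggregate an (n_patches, dim) bag into a (dim,) slide embedding
    via element-wise max over patches -- the Max-Pooling MIL baseline."""
    return patch_features.max(axis=0)

def linear_logits(slide_embedding: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Toy linear classification head producing per-class logits."""
    return slide_embedding @ W + b

# Toy bag: 200 patches with 512-d features (e.g. PLIP-like), 2 subtype classes.
bag = rng.normal(size=(200, 512))
W = rng.normal(size=(512, 2))
b = np.zeros(2)

embedding = max_pool_slide(bag)
logits = linear_logits(embedding, W, b)
print(embedding.shape, logits.shape)  # (512,) (2,)
```

Because the aggregation is parameter-free, any performance gap between this baseline and learned aggregators (TransMIL, MambaMIL, etc.) reflects the value of the learned sequence modeling rather than classifier capacity.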

I hope that you can directly repeat my experiments using the provided code before criticizing the rigor and accuracy of the experiments in the paper.