-
Hi,
Congrats on your accepted work! I have some questions to better understand the model architecture and performance.
- What patch encoder did you use in the CLAM baseline? Is it based on Re…
-
Great work on this study; it may serve as a valuable example for later research. My question is: how do you pretrain the slide encoder (the LongNet)? From the repository it seems there…
-
Dear authors,
First, thank you for your amazing work and contributions. This is one of my favorite papers to read, and I have studied it over and over again like a handbook. There are many citatio…
-
It is unclear whether the MPP of the extracted patches should be 0.5, matching the way the models were trained.
It seems reasonable that if the model was trained at MPP = 0.5, it should be applied at the same magni…
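To make the magnification-matching point concrete, here is a minimal sketch of the usual workaround when a slide's native MPP differs from the training MPP: read a proportionally larger (or smaller) region at the slide's native resolution and resize it to the model's input size, so the effective MPP matches training. The target MPP of 0.5 and the 256-px patch size are assumptions for illustration; `slide_mpp` would come from the slide's metadata (e.g. OpenSlide's `mpp-x` property), not from this snippet.

```python
def source_patch_size(slide_mpp: float, target_mpp: float = 0.5, patch_px: int = 256) -> int:
    """Return the region size (in pixels) to read at the slide's native MPP
    so that, after resizing to `patch_px`, the patch has an effective MPP
    of `target_mpp` (i.e. the same physical field of view as in training).
    """
    scale = target_mpp / slide_mpp
    return round(patch_px * scale)

# Example: a slide scanned at 0.25 MPP (~40x) needs 512-px regions
# downsampled to 256 px to emulate MPP 0.5 (~20x).
print(source_patch_size(0.25))  # 512
print(source_patch_size(0.5))   # 256
```

The same factor applies symmetrically: a slide scanned coarser than the target MPP would be read at a smaller region size and upsampled, though upsampling beyond the native resolution adds no real detail.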
-
Hi, thanks for sharing this amazing work, and congratulations on the superb paper.
Would it be possible to share the pretraining code? Specifically, the DINO model and fine-tuning the DINO pretraini…
-
Dear authors,
In the paper (and on HuggingFace), you mention that you used a dataset "composed of 75,832,905 [256×256] and 24,297,995 [512×512] histology images at 20× resolution". However, in the …
-
Is there any reason why you guys utilized DINO instead of DINOv2? Was performance worse when using DINOv2?