DearCaat / MHIM-MIL

[ICCV 2023 Oral] Multiple Instance Learning Framework with Masked Hard Instance Mining for Whole Slide Image Classification

How to evaluate on testing datasets? #7

Closed YAOSL98 closed 5 months ago

DearCaat commented 5 months ago

For both Camelyon-16 and TCGA-NSCLC, we used multi-fold cross-validation. Therefore, we did not use the official Camelyon-16 test set to evaluate the models.

YAOSL98 commented 5 months ago

> For Camelyon-16 and TCGA-NSCLC, we all used multi-fold cross-validation. Therefore, we didn't use the official Camelyon-16 test set to evaluate the models.

  • Cross-validation code: cv-fold=3 for Camelyon-16, cv-fold=4 for TCGA-NSCLC. Complete Codes.
  • If you want to evaluate on the test set yourself, you should train a model using only the training set and then evaluate it on the test set. This repo does not contain that code; you can use the model API.
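As a rough illustration of the multi-fold setup described above, the sketch below partitions slide IDs into k folds and yields (train, validation) splits, with k=3 as for Camelyon-16. This is a hypothetical minimal sketch, not the repo's actual cross-validation code; the function names and the round-robin fold assignment are assumptions for illustration only.

```python
# Hypothetical sketch of k-fold cross-validation over whole-slide IDs.
# NOT the repo's implementation: names and fold assignment are illustrative.

def make_folds(slide_ids, k):
    """Partition slide IDs into k roughly equal folds (round-robin, no shuffling)."""
    return [slide_ids[i::k] for i in range(k)]

def cross_validation_splits(slide_ids, k):
    """Yield (train_ids, val_ids) pairs, one per fold."""
    folds = make_folds(slide_ids, k)
    for i in range(k):
        val_ids = folds[i]
        # Training set is every slide not in the held-out fold.
        train_ids = [s for j, fold in enumerate(folds) if j != i for s in fold]
        yield train_ids, val_ids

# Example: 3-fold CV over dummy slide IDs (cv-fold=3, as for Camelyon-16).
slides = [f"slide_{n:03d}" for n in range(9)]
for train_ids, val_ids in cross_validation_splits(slides, 3):
    # Each split covers all slides exactly once, with no train/val overlap.
    assert set(train_ids) | set(val_ids) == set(slides)
    assert not set(train_ids) & set(val_ids)
```

For an official test-set evaluation instead, you would skip the fold loop entirely: train one model on the full training split and run inference once on the held-out test split.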

Thanks a lot :)