ralphc1212 / bayes-mil

Code of "Bayes-MIL: A New Probabilistic Perspective on Attention-based Multiple Instance Learning for Whole Slide Images" - ICLR 2023
GNU General Public License v3.0

Calculating patch-level metrics such as FROC #4

Closed: DeVriesMatt closed this issue 11 months ago

DeVriesMatt commented 1 year ago

Hi,

Great repo and paper - I really enjoyed reading it!

You have calculated some patch-level localisation metrics in the paper, and I would like to do the same on an algorithm I am working on. Can you provide insight into how you calculated those? Specifically, the patch-level FROC and precision.

Thank you!

ralphc1212 commented 11 months ago

Hi,

Thanks for your interest in our work.

Simply put, we treat the patch-level localization problem as a binary classification problem and compare the predicted labels against the patch-level ground truths. In our paper, the ground truths are given by the patch-level annotations in CAMELYON16 and CAMELYON17. The results are averaged over all patches in a slide and over all slides in a dataset.

Notably, the calculation of the patch-level FROC follows the localization results in Section 4.1 (Results on Camelyon16) of DSMIL. Specifically: "The reported FROC score is defined as the average sensitivity at 6 predefined false positive rates: 1/4, 1/2, 1, 2, 4, and 8 FPs per WSI."
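
For concreteness, here is a minimal sketch of that FROC computation, assuming you already have per-patch scores and binary patch-level ground truths for each slide; the function and argument names (`froc_score`, `scores_per_slide`, `labels_per_slide`) are illustrative, not taken from this repo:

```python
import numpy as np

def froc_score(scores_per_slide, labels_per_slide,
               fp_rates=(0.25, 0.5, 1, 2, 4, 8)):
    """Average sensitivity at predefined false positives per WSI.

    scores_per_slide: list of 1-D arrays, patch scores for each slide.
    labels_per_slide: list of 1-D arrays, binary patch ground truths.
    (Hypothetical inputs, for illustration only.)
    """
    all_scores = np.concatenate(scores_per_slide)
    all_labels = np.concatenate(labels_per_slide)
    n_slides = len(scores_per_slide)
    n_pos = all_labels.sum()

    # Sweep every observed score as a decision threshold (descending).
    thresholds = np.unique(all_scores)[::-1]
    sens, avg_fps = [], []
    for t in thresholds:
        preds = all_scores >= t
        tp = np.logical_and(preds, all_labels == 1).sum()
        fp = np.logical_and(preds, all_labels == 0).sum()
        sens.append(tp / n_pos)          # sensitivity at this threshold
        avg_fps.append(fp / n_slides)    # false positives per WSI

    # Interpolate sensitivity at each target FP-per-WSI rate and average.
    sens_at = np.interp(fp_rates, avg_fps, sens)
    return float(sens_at.mean())
```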

Precision is computed by definition: for each slide, `TP_patches / (TP_patches + FP_patches)`, then averaged over all slides.
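
A corresponding sketch of the slide-averaged precision, under the same assumed inputs (binary per-patch predictions and ground truths per slide; the names are again illustrative):

```python
import numpy as np

def slide_averaged_precision(preds_per_slide, labels_per_slide):
    """Per-slide precision TP / (TP + FP), averaged over all slides."""
    precisions = []
    for preds, labels in zip(preds_per_slide, labels_per_slide):
        tp = np.logical_and(preds == 1, labels == 1).sum()
        fp = np.logical_and(preds == 1, labels == 0).sum()
        if tp + fp > 0:  # skip slides with no predicted positives
            precisions.append(tp / (tp + fp))
    return float(np.mean(precisions))
```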

Hope this information helps. We will add the code for these metrics to our repo later. Thanks!

ralphc1212 commented 11 months ago

Hi, please refer to eval_froc.py for the patch-level evaluation metrics.