Closed jafarinia closed 1 year ago
We have not evaluated, nor claimed, any patch-level AUC in the paper. The evaluation code for FROC is available in the Camelyon16 challenge repository, and the way it is computed is thoroughly documented in the Camelyon16 paper. Please note that this metric is not comparable to an IoU or Dice score for segmentation with pixel-level granularity; it merely profiles the agreement in localization between the annotated regions and the detected regions.
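For reference, the Camelyon16 FROC summary is a sensitivity-versus-false-positives curve, averaged at fixed false-positive rates per slide. The sketch below is a simplified illustration of that idea, not the challenge's official code: it assumes we already have, per detection, a confidence score and either the id of the annotated lesion it hit or -1 for a false positive (this matching step, done with coordinate-based region overlap in the real evaluation, is omitted here).

```python
import numpy as np

def froc_curve(scores, hits, n_lesions, n_slides):
    """Simplified FROC sketch.

    scores: confidence per detection.
    hits:   annotated-lesion id the detection matched, or -1 for a false positive
            (the coordinate-matching step of the real evaluation is assumed done).
    """
    order = np.argsort(scores)[::-1]        # sweep thresholds from high to low
    hits = np.asarray(hits)[order]
    sens, avg_fps = [], []
    found, fps = set(), 0
    for h in hits:
        if h >= 0:
            found.add(h)                    # a lesion counts once, however many hits
        else:
            fps += 1
        sens.append(len(found) / n_lesions)
        avg_fps.append(fps / n_slides)
    return np.array(avg_fps), np.array(sens)

def camelyon16_score(avg_fps, sens):
    # Challenge summary: mean sensitivity at 0.25, 0.5, 1, 2, 4, 8 FPs per slide.
    return float(np.mean([
        sens[avg_fps <= t][-1] if np.any(avg_fps <= t) else 0.0
        for t in (0.25, 0.5, 1, 2, 4, 8)
    ]))
```

With two slides, two lesions, and detections `scores=[0.9, 0.8, 0.7, 0.6]`, `hits=[0, -1, 1, -1]`, the curve reaches full sensitivity at 0.5 FPs/slide and the summary score is 5.5/6.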
Hi, can you (or anyone) explain how the instance-level predictions that feed into FROC are obtained? It is not in the repo and is not clear from the paper. I extracted my own patches and gave them their true labels, and bag-level prediction works fine. But whether I use the attention weights or the instance predictor (which, per the paper, should be discarded at inference), I do not get good patch-level AUC: with attention I almost always get AUC < 0.5, and with the instance predictor AUC < 0.7, which clearly does not match what the paper claims.
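For concreteness, a minimal sketch of the patch-level AUC check described above, assuming we have per-patch ground-truth labels and per-patch scores (attention weights or instance-classifier outputs); the function name and the per-slide min-max normalization are my own choices, not anything from the repo:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def patch_auc(patch_labels, patch_scores):
    """AUC of per-patch scores against per-patch ground-truth labels.

    patch_labels: 0/1 tumor label for each patch of one slide.
    patch_scores: attention weight or instance-classifier score per patch.
    """
    s = np.asarray(patch_scores, dtype=float)
    # Min-max normalize within the slide so scores from different slides
    # live on a comparable scale before any pooling across slides.
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)
    return roc_auc_score(patch_labels, s)
```

Note that raw attention weights are only a ranking signal, so AUC (which is rank-based) is unaffected by the normalization within a single slide; it matters only if patches from several slides are pooled into one AUC computation.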