MediaBrain-SJTU / MVFA-AD

[CVPR2024 Highlight] Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images
MIT License

Some questions on model evaluation results #12

Open choi-jiwoo opened 2 months ago


Hi, a few questions came to mind while I was reading the paper.

1 - You mention in Section 6.2 that "MVFA’s advantage lies in its ability to effectively utilize a few abnormal samples". Does that mean MVFA cannot handle the "few-normal-shot" setting, where only normal samples are available?

2 - How did you measure the performance of the WinCLIP model, given that its authors did not release official code? Did you (or your team) reimplement it yourselves, or did you use an unofficial implementation?

3 - In the in-domain (MVTec) part of Table 3, are the AUC scores of MVFA averaged over 5 random seeds, like those of WinCLIP and April-GAN?
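For reference on question 3, this is the seed-averaged reporting protocol I mean: run the evaluation once per random seed and report the mean (and spread) of the per-run AUCs. A minimal sketch, assuming scikit-learn's `roc_auc_score`; the helper name is my own illustration, not from the MVFA codebase:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_over_seeds(runs):
    """Average image-level AUC over several evaluation runs.

    `runs` is a list of (labels, scores) pairs, one per random seed;
    labels are 0 (normal) / 1 (anomalous), scores are anomaly scores.
    Returns (mean_auc, std_auc) in percent, matching the usual
    "AUC averaged over 5 seeds" style of reporting.
    """
    aucs = [100.0 * roc_auc_score(labels, scores) for labels, scores in runs]
    return float(np.mean(aucs)), float(np.std(aucs))

# Example: five seeds that all separate normal/anomalous perfectly.
runs = [([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]) for _ in range(5)]
mean_auc, std_auc = auc_over_seeds(runs)  # 100.0, 0.0
```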

Thank you in advance!