patel-zeel opened this issue 3 months ago
Yup, it is not perfect. We are trying to apply traditional computer vision metrics to models that exist outside the traditional computer vision space. In the case of Florence-2, it is a VLM. When a VLM performs object detection, all of the boxes have the same probability: a confidence of 100%.
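Here is a minimal sketch of why that matters (plain NumPy, values are illustrative): the precision-recall curve is traced by sweeping over distinct confidence thresholds, so when every box reports the same confidence there is only one operating point left.

```python
import numpy as np

# confidences as a VLM like Florence-2 effectively reports them: all identical
confidences = np.array([1.0, 1.0, 1.0, 1.0])

# the PR curve is built by sweeping over distinct confidence thresholds;
# here there is only one, so the "curve" collapses to a single point
thresholds = np.unique(confidences)
print(f"distinct PR operating points: {len(thresholds)}")  # -> 1
```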
Thank you for your response, @SkalskiP. I was wondering what a fair comparison would be in such cases. For example, should we also convert traditional models' confidence scores to 1 before computing mAP?
I don't know how to do that right now. However, given the growth of VLMs over the past 1-2 years, I think this will become an important issue when measuring VLM performance.
@patel-zeel, your question motivated me to reach out to Lucas Beyer, who leads the team behind PaliGemma. It looks like there is no better way to do it than just mAP with confidence = 100%. He suggested using both AP and AR for a more diverse comparison.
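As a rough sketch of that comparison, assuming the `supervision` metrics API (the exact entry point and attribute names may differ between versions, and the toy detections below are purely illustrative): force every detector's confidences to 1.0 before computing mAP, so traditional models and VLMs are scored under the same conditions.

```python
import numpy as np
import supervision as sv

def flatten_confidence(detections: sv.Detections) -> sv.Detections:
    # overwrite confidences with 1.0 so a traditional detector is scored the
    # same way a VLM (which reports no usable confidence) is scored
    detections.confidence = np.ones(len(detections), dtype=np.float32)
    return detections

# toy single-image example; real code would build these lists from the dataset
targets = [sv.Detections(
    xyxy=np.array([[10, 10, 50, 50], [60, 60, 120, 120]], dtype=np.float32),
    class_id=np.array([0, 1]),
)]
predictions = [flatten_confidence(sv.Detections(
    xyxy=np.array([[12, 11, 49, 52], [58, 61, 118, 119]], dtype=np.float32),
    class_id=np.array([0, 1]),
    confidence=np.array([0.91, 0.34], dtype=np.float32),
))]

# note: the metric class and its attributes may vary across supervision versions
mean_ap = sv.MeanAveragePrecision.from_detections(
    predictions=predictions,
    targets=targets,
)
print(mean_ap.map50_95, mean_ap.map50)
```

AR would be computed separately (for example via a COCO-style evaluator) and reported alongside AP, as suggested above.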
@SkalskiP Thank you for the update and follow-up on this! Great to hear the feedback from the PaliGemma lead.
He suggested using both AP and AR for a more diverse comparison.
If I understand correctly, it means:
That sounds reasonable and motivates me to look even deeper into this.
That's what I'll do for now. Yup.
Search before asking
Notebook name
Fine-tuning Florence-2 on Object Detection Dataset
Bug
Predictions from the fine-tuned Florence-2 model look like the following:
It seems that the confidence score is always 1. Wouldn't this cause an issue in creating the precision-recall curve followed by computing mAP?
Environment
NA
Minimal Reproducible Example
NA
Additional
NA
Are you willing to submit a PR?