Closed · ofrimasad closed this issue 3 years ago
@ofrimasad yes, you've discovered a latent bug in the mAP computation that affects testing at higher --conf threshold. I've opened an issue on this myself, and it is currently on our TODO list in https://github.com/ultralytics/yolov5/issues/1466
There's also a second, much smaller disconnect between the two mAP calculations, which means that even if the above bug is fixed, pycocotools will typically report mAPs about 1% higher than ours, for both mAP@0.5 and mAP@0.5-0.95. I spent some time trying to track this down, but ultimately abandoned the effort. If you've gleaned any insight into this I'd be very interested to know how to align our results more closely (without sacrificing much speed if possible).
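One plausible source of a small systematic gap like this is the integration scheme used over the precision-recall curve. The toy sketch below (hypothetical numbers, not YOLOv5's or pycocotools' actual code) compares sampling precision at 101 evenly spaced recall levels, as pycocotools does, against taking the exact area under the same interpolated curve:

```python
import numpy as np

# Toy precision-recall curve (made-up numbers, single class, single IoU).
recall    = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
precision = np.array([1.0, 1.0, 0.9, 0.75, 0.6])

# Monotonically decreasing precision envelope, shared by both schemes.
env = np.maximum.accumulate(precision[::-1])[::-1]

# Scheme 1: average precision sampled at 101 recall points (pycocotools-style);
# recall levels beyond the curve's maximum contribute zero (right=0).
r101 = np.linspace(0, 1, 101)
ap_101 = np.interp(r101, recall, env, right=0).mean()

# Scheme 2: exact trapezoidal area under the same interpolated curve.
ap_exact = np.sum(np.diff(recall) * (env[1:] + env[:-1]) / 2)

print(f"101-point AP: {ap_101:.4f}  exact-area AP: {ap_exact:.4f}")
```

On this toy curve the two schemes already differ by roughly 0.1 mAP points, so scheme-level details (sampling grid, envelope handling, per-class averaging) could plausibly account for a fraction of the ~1% gap.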
@ofrimasad the PR that caused this change BTW is https://github.com/ultralytics/yolov5/pull/1206
Thank you! Reverting these two lines actually fixes the issue
thank you :)
@ofrimasad this issue should be officially resolved now in PR #1645.
Thank you for your contributions. Please let us know if you come across any other issues or you see any other areas for improvement.
amazing! thanks
❔Question
Hi there :) I was testing with different --conf-thres values and noticed that when setting extremely high values, the calculated mAP@.5:.95 differs greatly from the IoU=0.50:0.95 metric calculated by the cocoapi.
I ran:
python test.py --data data/coco.yaml --conf-thres 0.7
and got the results: mAP@.5:.95 = 0.729
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.200
What is the difference between the internal measuring and the coco API measuring? What does the coco API take into account that changes the results so dramatically?
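One way to see the mechanism (a minimal single-class sketch with made-up detections, not the code either tool actually runs): COCO-style AP averages the best achievable precision over the full 0-1 recall range, so every detection discarded by a high --conf-thres leaves the high-recall portion of the curve contributing zero.

```python
import numpy as np

def coco_style_ap(scores, is_tp, n_gt, conf_thres=0.0):
    """Single-class AP at one IoU threshold, 101-point interpolation (sketch)."""
    keep = scores >= conf_thres            # what a confidence threshold removes
    scores, is_tp = scores[keep], is_tp[keep]
    order = np.argsort(-scores)            # rank detections by confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Average the best precision reached at 101 evenly spaced recall levels;
    # recall levels the kept detections never reach contribute zero.
    ap = 0.0
    for r in np.linspace(0, 1, 101):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 101
    return ap

# Made-up detections: confidences, and whether each matched a ground truth box.
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.3])
is_tp  = np.array([True, True, False, True, True])

print(coco_style_ap(scores, is_tp, n_gt=5))                  # full curve
print(coco_style_ap(scores, is_tp, n_gt=5, conf_thres=0.7))  # truncated curve
```

With all five detections the AP is about 0.72; keeping only the two above 0.7 drops it to about 0.41, even though both survivors are correct. This is roughly what pycocotools reports at a high threshold, while the much higher internal number appears to come from the mAP-computation bug discussed above.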
Thank you :)