Open Michael-J98 opened 4 years ago
@Michael-J98 I have also encountered this problem. In my case it was likewise 1 image with 1 GT, but predictions containing many FPs as well as TPs still yielded mAP = 1.0. I tried to track down a bug but found nothing, which makes the metric hard to use for real validation.
I evaluated a dataset containing only one image. The prediction clearly contains a FP, yet the metric reports AP(IoU=0.5) = 1. Has anyone seen this before? I didn't call the COCO API directly, but went through `engine.evaluate` from PyTorch. Does that matter?
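This may not be a bug at all: COCO-style AP is computed from the precision-recall curve with predictions sorted by confidence, and precision is interpolated as the maximum precision at any recall ≥ r. If the single TP is scored higher than the FP, recall hits 1.0 at precision 1.0 before the FP is ever counted, so the trailing FP cannot lower the AP. The sketch below is a simplified, hypothetical reimplementation of that interpolation (not the actual pycocotools code) to illustrate the effect:

```python
def average_precision(is_tp, num_gt):
    """Toy COCO-style interpolated AP.

    is_tp: list of booleans for each prediction (True = matched a GT),
           already sorted by descending confidence.
    num_gt: number of ground-truth boxes.
    """
    precisions, recalls = [], []
    tp = 0
    for rank, matched in enumerate(is_tp, start=1):
        tp += int(matched)
        precisions.append(tp / rank)
        recalls.append(tp / num_gt)

    # Interpolation: precision at recall r becomes the max precision
    # achieved at any recall >= r (sweep from the right).
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])

    # Integrate the interpolated precision over recall.
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# 1 GT, 2 predictions: TP scored above the FP -> the FP is "invisible" to AP.
print(average_precision([True, False], num_gt=1))   # 1.0
# Same detections, but the FP outscores the TP -> AP drops.
print(average_precision([False, True], num_gt=1))   # 0.5
```

So with a single image and a single GT, an AP of 1.0 despite an FP is expected whenever the FP has a lower confidence score than the TP; the FP would only hurt AP if it were ranked above the TP.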