dbolya / tide

A General Toolbox for Identifying Object Detection Errors
https://dbolya.github.io/tide
MIT License

TIDE outputs vs. what pycocotools outputs #14

Closed kdk2612 closed 3 years ago

kdk2612 commented 3 years ago

Hi, first things first: this lib is amazing and helped a lot in understanding the errors related to the detections. I was using this project for the initial evaluation, but since there is no support for recall, I decided to use pycocotools for evaluation as well.

Now, during the comparison I got different results. For AP[0.50:0.95], pycocotools gives 0.460 while TIDE gives 41.33.

Also, pycocotools gives AP@50 = 0.804 while TIDE gives AP@50 = 70.93 (extracted from the summary table).

I was wondering where the difference comes from; for now I am exploring how the TP, FP, and FN counts are calculated.

dbolya commented 3 years ago

Hi, and thanks for making this issue. TIDE should have 100% parity with pycocotools when it comes to mAP calculation, so this difference shouldn't be happening.

Is this on COCO or a custom COCO-style dataset? I'm thinking that if it's a custom dataset, there may be some edge case that TIDE and pycocotools handle differently, which results in the different mAP.

kdk2612 commented 3 years ago

Hi, this is on a custom COCO-style dataset; I am converting the data into COCO format to run the evaluation. I identified the issue anyway. For pycocotools I had set "useCats" to 1; after setting it to 0 (so that category labels are ignored), the results seem to match.
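
For reference, a minimal pycocotools evaluation sketch showing where `useCats` is set (the file paths are placeholders, not from this thread):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths to a COCO-style ground-truth file and a results file.
coco_gt = COCO("annotations.json")
coco_dt = coco_gt.loadRes("detections.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
# useCats=1 (the default) evaluates per category; useCats=0 ignores category labels.
coco_eval.params.useCats = 1
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```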

dbolya commented 3 years ago

Nice! Then I'll close this issue.

BartvanMarrewijk commented 2 years ago

Maybe also good to mention that I had the same problem, but my solution was different. If your custom COCO-format annotation ids start at 0, that annotation will not be taken into account by the COCO API! The easiest solution is to make sure the annotation ids start at 1. For more information see this cocoapi issue: https://github.com/cocodataset/cocoapi/issues/507
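
A minimal sketch of that workaround (file names are hypothetical), shifting annotation ids so they start at 1 before evaluation:

```python
import json

# Hypothetical paths; adjust to your dataset.
with open("annotations.json") as f:
    coco = json.load(f)

# If any annotation id is below 1, shift all ids up so the smallest id is 1.
min_id = min(ann["id"] for ann in coco["annotations"])
if min_id < 1:
    offset = 1 - min_id
    for ann in coco["annotations"]:
        ann["id"] += offset

with open("annotations_fixed.json", "w") as f:
    json.dump(coco, f)
```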

willyd commented 2 years ago

I had a similar issue where both my ground truth and predictions were in the COCO annotations format, so I loaded both with tidecv.datasets.COCO. This ignores the score field in the detections, which have to be loaded with tidecv.datasets.COCOResult instead.
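
For anyone hitting the same thing, a minimal sketch based on the README usage; the paths are placeholders, and passing an annotation path as the first argument to datasets.COCO for a custom ground-truth file is an assumption here:

```python
from tidecv import TIDE, datasets

# Ground truth goes through datasets.COCO; detections (with scores) through datasets.COCOResult.
gt = datasets.COCO("annotations.json")        # assumed path argument for a custom GT file
preds = datasets.COCOResult("detections.json")

tide = TIDE()
tide.evaluate(gt, preds, mode=TIDE.BOX)  # use TIDE.MASK for instance masks
tide.summarize()
```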