cocodataset / cocoapi

COCO API - Dataset @ http://cocodataset.org/

Not Getting Perfect Scores in AR Metrics Using Ground Truth Bounding Boxes #639

Open nimakasipour opened 1 year ago

nimakasipour commented 1 year ago

Why am I not getting a perfect score of 1.00 for all the metrics in Average Recall (AR) when using ground truth bounding boxes from "instances_val2017.json" in evaluation?

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 1.000
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 1.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.563
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.975
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000
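Note that the low AR values appear only in the maxDets=1 and maxDets=10 rows. COCOeval matches at most maxDets detections per image, so even perfect boxes cannot yield full recall on images containing more objects than the cap. A minimal sketch of that truncation effect, greatly simplified (real COCOeval also averages over IoU thresholds, areas, and categories; `recall_at_max_dets` is a hypothetical helper, not part of pycocotools):

```python
def recall_at_max_dets(gt_counts, max_dets):
    """Recall when detections are perfect but capped at max_dets per image.

    gt_counts: number of ground-truth objects in each image.
    With perfect (IoU=1) detections, each image can match at most
    min(gt_count, max_dets) of its objects.
    """
    matched = sum(min(n, max_dets) for n in gt_counts)
    total = sum(gt_counts)
    return matched / total

# Three images with 1, 2, and 5 objects: perfect boxes, but a cap of 1
# detection per image leaves 5 of the 8 objects unmatched.
print(recall_at_max_dets([1, 2, 5], max_dets=1))    # 3/8 = 0.375
print(recall_at_max_dets([1, 2, 5], max_dets=100))  # 1.0
```

This is consistent with the output above: AR recovers to 1.000 once maxDets=100 admits every ground-truth box.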

iker-lluvia commented 1 year ago

Duplicate of issue #426, although the solution has not been published.