Open lulud41 opened 3 years ago
I faced the same issue!
I figured out the problem. When the annotation ids start from 0, COCOeval does not return the right metric values! Just start them from 1 and it is fixed for me. https://github.com/cocodataset/cocoapi/pull/332
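For anyone hitting the same thing, here is a minimal sketch of that workaround (the filenames are just placeholders; it assumes a standard COCO-format annotation file and simply shifts every annotation id up by one when an id of 0 is present):

```python
import json

# Load the ground-truth file and make annotation ids 1-based before evaluation.
with open("annotations.json") as f:
    data = json.load(f)

if any(ann["id"] == 0 for ann in data["annotations"]):
    for ann in data["annotations"]:
        ann["id"] += 1  # 0-based ids throw off COCOeval's matching

with open("annotations_1based.json", "w") as f:
    json.dump(data, f)
```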
Hi, I'm testing the Python API. I wanted to check if I could get 1.0 AP when the prediction is strictly equal to the ground truth, which seems obvious. However, I get 0.73 AP@0.5 and other weird values. I'm using a custom annotation file.
Am I doing something wrong? Thanks in advance,
Here is my code:
```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations.json")
coco_det = coco_gt.loadRes(pred)
coco_eval = COCOeval(coco_gt, coco_det, "bbox")

coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```
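In case it helps to reproduce the check, here is one way `pred` could be built so it exactly mirrors the ground truth (a sketch; the field names follow the standard COCO detection-results format, and the 1.0 score is just a fully confident dummy value):

```python
from pycocotools.coco import COCO

coco_gt = COCO("annotations.json")

# Copy every ground-truth box into a detection with full confidence, so the
# evaluation should report AP close to 1.0 once the annotation ids are 1-based.
pred = [
    {
        "image_id": ann["image_id"],
        "category_id": ann["category_id"],
        "bbox": ann["bbox"],
        "score": 1.0,
    }
    for ann in coco_gt.loadAnns(coco_gt.getAnnIds())
]
```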