Closed. duanyao closed this issue 3 years ago.
Hi @duanyao,
Thanks for your comment.
Our code is based on the official COCO evaluation tool, so we implement the IoU check correctly as:
if ious[d_idx, g_idx] < iou:
    continue
Please see here that our tool follows COCO's official implementation.
It seems that the other tool you are using defines the IoU threshold differently than COCO's official tool.
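The `continue` on a strictly-less-than comparison is what makes the threshold inclusive: a detection whose IoU equals the threshold is not skipped, so a true positive is effectively `iou >= iou_threshold`. A minimal sketch of that matching logic (hypothetical `greedy_match` helper and `ious` matrix, not the evaluator's actual code):

```python
import numpy as np

def greedy_match(ious, iou_threshold):
    """Greedily match each detection to the first free ground truth whose
    IoU is not strictly below the threshold (so IoU == threshold matches)."""
    matches = []
    matched_gt = set()
    for d_idx in range(ious.shape[0]):
        for g_idx in range(ious.shape[1]):
            if g_idx in matched_gt:
                continue
            if ious[d_idx, g_idx] < iou_threshold:
                continue  # strict '<': pairs at exactly the threshold survive
            matches.append((d_idx, g_idx))
            matched_gt.add(g_idx)
            break
    return matches

# A detection whose IoU equals the threshold is kept as a true positive:
print(greedy_match(np.array([[0.5]]), 0.5))   # [(0, 0)]
print(greedy_match(np.array([[0.49]]), 0.5))  # []
```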
Best regards, Rafael
Hi Rafael,
You are right, the COCO evaluation tool gives the same result as your tool.
P.S. I was wrong about openvino's accuracy checker. I checked its code, and its true positive is also defined as `iou >= iou_threshold`, same as COCO's official implementation. However, it mixes float32 and float64: `iou` is `numpy.float32` while `iou_threshold` is `float`. This makes a difference for a sample with IOU == 0.7, because `numpy.float32(7)/numpy.float32(10) < float(7)/float(10)`.
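That rounding effect can be reproduced directly with a standalone numpy check (this is an illustration, not openvino's code):

```python
import numpy as np

iou = np.float32(7) / np.float32(10)  # IoU computed in float32
iou_threshold = 7 / 10                # Python float, i.e. float64

# float32's nearest value to 0.7 promotes to float64 as 0.699999988...,
# which is strictly below float64's 0.699999999999999955..., so a sample
# with IoU exactly 0.7 fails the mixed-precision comparison:
print(iou < iou_threshold)  # True

# Casting the threshold to float32 as well removes the discrepancy:
print(np.float32(iou_threshold) <= iou)  # True
```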
In the README.md, true positive is defined as `iou > iou_threshold` ("let us first consider as TP the detections with IOU > 50%"). However, true positive is defined as `iou >= iou_threshold` in coco_evaluator.py.

I have changed the `<` to `<=` above, and it did make a difference for the COCO AP result: 0.6042 vs 0.5880 for the example in the README.md. I also tested openvino's accuracy checker, and its COCO AP result is 0.5880, which means it defines true positive as `iou > iou_threshold`.

So I think coco_evaluator.py may be fixed as shown above. I have converted the example in the README.md into COCO annotation and detection files, so you can test it yourself (remove ".txt" before use): gt1.coco.json.txt det1.coco.json.txt