IDEA-Research / DINO

[ICLR 2023] Official implementation of the paper "DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection"

Per category evaluation #151

Open · david-rohrschneider opened this issue 1 year ago

david-rohrschneider commented 1 year ago

Hello there, is it possible to run the COCO evaluation per category/class?

HaoZhang534 commented 1 year ago

You can refer to https://github.com/cocodataset/cocoapi/issues/276
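
For reference, the approach discussed there boils down to restricting COCOeval to one category at a time. A minimal sketch (not the exact code from that issue; the annotation and results file paths are placeholders):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val2017.json')  # ground truth (placeholder path)
coco_dt = coco_gt.loadRes('results.json')             # detections (placeholder path)

for cat_id in coco_gt.getCatIds():
    name = coco_gt.loadCats(cat_id)[0]['name']
    print('--- {} ---'.format(name))
    ev = COCOeval(coco_gt, coco_dt, iouType='bbox')
    ev.params.catIds = [cat_id]   # evaluate only this class
    ev.evaluate()
    ev.accumulate()
    ev.summarize()                # prints AP@[.5:.95], AP@.5, AP@.75, ...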

Robotatron commented 1 year ago

That will give per-class AP, but only at a specific IoU threshold. How would we do per-class AP@0.5-0.95?

HaoZhang534 commented 1 year ago

@Robotatron Have you tried it? I think it will give the AP for AP@0.5-0.95.

Robotatron commented 1 year ago

> @Robotatron Have you tried it? I think it will give the AP for AP@0.5-0.95.

I figured it out. It turned out my colleague had modified the COCO eval code and set iouThr to 0.5 here: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L460

I set iouThr back to None (as it is in the upstream source) to get AP@0.5-0.95.
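
For context, the iouThr argument to _summarize selects a single IoU slice of the precision array before averaging; with iouThr=None all ten thresholds from 0.50 to 0.95 are kept. Roughly (simplified from pycocotools/cocoeval.py, omitting the area-range and maxDets filtering):

# precision has shape [T, R, K, A, M]:
# [IoU thresholds, recall steps, categories, area ranges, maxDets]
s = self.eval['precision']
if iouThr is not None:
    t = np.where(iouThr == p.iouThrs)[0]   # keep only the matching IoU row
    s = s[t]                               # e.g. just IoU=0.5
mean_s = np.mean(s[s > -1])                # iouThr=None averages all 10 IoUs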

david-rohrschneider commented 1 year ago

I just added the following lines to cocoeval.py (lines 457-464) and then it worked:


# Appended inside _summarize() in pycocotools/cocoeval.py, after the
# area-range/maxDets filtering, so s is the precision array with
# categories on axis 2
avg_ap = 0.0
if ap == 1:
    num_classes = s.shape[2]                      # K = number of categories
    for i in range(num_classes):
        s_i = s[:, :, i, :]
        ap_i = np.mean(s_i[s_i > -1])             # mask -1 (no ground truth)
        print('category : {0} : {1}'.format(i, ap_i))
        avg_ap += ap_i
    print('(all categories) mAP : {}'.format(avg_ap / num_classes))
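
If you want readable class names instead of bare indices in that printout, the category ids are available in evaluation order on the params object (a hypothetical tweak, assuming the snippet lives inside _summarize where p and self.cocoGt are in scope):

cat_ids = p.catIds                           # category ids, same order as axis 2 of s
cat_names = [c['name'] for c in self.cocoGt.loadCats(cat_ids)]
print('category : {0} : {1}'.format(cat_names[i], ap_i))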