dbolya / tide

A General Toolbox for Identifying Object Detection Errors
https://dbolya.github.io/tide
MIT License
702 stars · 115 forks

Per class recall #11

Closed · aorad closed this 3 years ago

aorad commented 3 years ago

Do you plan on supporting per class metrics?

dbolya commented 3 years ago

Support already exists, but is buried a little deeper in the code.

However, recall will not be supported. Just use COCOeval for that.

kdk2612 commented 3 years ago

How can we access the per class metric? @dbolya

dbolya commented 3 years ago

@kdk2612

from tidecv import TIDE  # assuming the tidecv package is installed

tide = TIDE()
run = tide.evaluate(gt, results)  # gt, results: tidecv Data objects

# Print the AP for each class ID in this run
for class_id, ap_data in run.ap_data.objs.items():
    print('{:10s}: {:.2f}'.format(str(class_id), ap_data.get_ap()))

This will print out the AP for each class ID individually for the given run.

kdk2612 commented 3 years ago

I am getting different results using this vs. pycocotools. Is there a reason for that?

dbolya commented 3 years ago

The code I showed gives you the per-class AP @ 50; the COCO evaluation toolkit probably gives you AP @ 50:95.

kdk2612 commented 3 years ago

No, not using that code; using the entire project.

kdk2612 commented 3 years ago

The mAP values for the evaluation are different.

dbolya commented 3 years ago

Can you create a new issue for this with what TIDE outputs vs. what pycocotools outputs?