rafaelpadilla / review_object_detection_metrics

Object Detection Metrics. 14 object detection metrics: mean Average Precision (mAP), Average Recall (AR), Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats as in COCO, PASCAL, Imagenet, etc.

Getting precision, recall, F1 values #144

Closed: ojasvijain closed this 10 months ago

ojasvijain commented 10 months ago

Hi, thank you for such a comprehensive library.

I am trying to compute metrics from my object detector's detected boxes and the ground truth. (FYI: I just have 1 class.)

I used the COCO evaluator to get my metrics and got the following result:

AP [.5:.05:.95]: 0.062852
AP50: 0.109215
AP75: 0.057404
AP Small: nan
AP Medium: nan
AP Large: 0.063207
AR1: 0.000552
AR10: 0.008840
AR100: 0.076657
AR Small: nan
AR Medium: nan
AR Large: 0.076657

I want to get the precision, recall & F1 score specifically. On inspecting the /src/evaluators/coco_evaluator.py file, I found that the coco_metrics variable holds the values I need. However, on printing them I am getting: total positives: 724, TP: 60, FP: 40

  1. Could you please explain why TP + FP does not add up to the total positives?
  2. How should I interpret the values in the "precision" and "recall" entries of the result dictionary? (See the sketch after this list.)
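For context: in COCO/PASCAL-style evaluation, "total positives" usually counts ground-truth boxes while TP + FP counts detections, so the two are not expected to match. Below is a minimal sketch, not part of this library's API, of how precision, recall and F1 could be derived from the counts quoted above; the helper name and the assumption that TP/FP are accumulated at a single IoU/confidence threshold are mine.

```python
# Hypothetical helper (not from review_object_detection_metrics):
# derive precision, recall and F1 from raw detection counts.
# Assumes TP and FP are counted at one IoU/confidence threshold and
# "total_positives" is the number of ground-truth boxes for the class.

def precision_recall_f1(tp: int, fp: int, total_positives: int):
    """Compute precision, recall and F1 from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / total_positives if total_positives > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    # Values taken from the output quoted above: total positives: 724, TP: 60, FP: 40
    p, r, f1 = precision_recall_f1(tp=60, fp=40, total_positives=724)
    print(f"precision={p:.4f} recall={r:.4f} F1={f1:.4f}")
```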

Thanks!

github-actions[bot] commented 10 months ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

arnaud-nt2i commented 8 months ago

I also think that exposing precision, recall and F1 values would be good.