rafaelpadilla / review_object_detection_metrics

Object Detection Metrics. 14 object detection metrics, including mean Average Precision (mAP), Average Recall (AR), and Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats, as used by COCO, PASCAL VOC, ImageNet, etc.

Question about precision x recall curve calculation #151

Closed: luizwritescode closed this issue 2 months ago

luizwritescode commented 2 months ago

During my research I've found that the Precision x Recall curve can be calculated in two different ways. After matching predictions with ground-truth boxes based on an IoU threshold, whenever possible, and ranking all predictions by confidence score, you must...

  1. (this paper) For each prediction in the ranked list, calculate the cumulative precision and recall. This means each prediction generates a point on the PR curve, with that prediction's confidence acting as the threshold.

  2. Set a confidence threshold of 0 and only look at predictions with a confidence score equal to or higher than the threshold. Calculate precision and recall; that's one point on the PR curve. Keep increasing the threshold until you have enough points. This means each confidence threshold generates one point on the PR curve.

I understand both methods are correct, but they analyze different things. Which of the two is the method used to calculate AP and mAP?
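To make the two options concrete, here is a minimal NumPy sketch of both (assuming predictions have already been matched to ground truths so each has a TP/FP flag; the function names are just for illustration, not from this repository):

```python
import numpy as np

def pr_points_per_prediction(tp_flags, n_ground_truths):
    """Option 1: one PR point per ranked prediction (cumulative TP/FP).
    `tp_flags` must already be sorted by confidence, descending."""
    tp = np.asarray(tp_flags, dtype=float)
    cum_tp = np.cumsum(tp)            # cumulative true positives
    cum_fp = np.cumsum(1.0 - tp)      # cumulative false positives
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / n_ground_truths
    return precision, recall

def pr_points_per_threshold(confidences, tp_flags, n_ground_truths, thresholds):
    """Option 2: one PR point per fixed confidence threshold."""
    confidences = np.asarray(confidences)
    tp_flags = np.asarray(tp_flags, dtype=bool)
    points = []
    for t in thresholds:
        keep = confidences >= t
        tp = np.count_nonzero(tp_flags & keep)
        fp = np.count_nonzero(~tp_flags & keep)
        precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
        recall = tp / n_ground_truths
        points.append((precision, recall))
    return points
```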

rafaelpadilla commented 2 months ago

The first approach considers each prediction's impact on the cumulative precision and recall, providing a comprehensive view of the model's performance across all confidence levels. It is the standard method used in evaluation protocols such as PASCAL VOC and MS COCO, and it is the one described in our paper.

Method 2 seems to be more commonly associated with binary classification tasks and is less suited for calculating AP and mAP in object detection.
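For reference, here is a minimal sketch of how AP follows from the first approach, using all-point interpolation in the style of PASCAL VOC 2010+ (the function name and details are illustrative, not the exact code in this repository):

```python
import numpy as np

def average_precision(tp_flags, n_ground_truths):
    """AP from the first approach: cumulative precision/recall per ranked
    prediction, then area under the interpolated PR curve."""
    tp = np.asarray(tp_flags, dtype=float)   # sorted by confidence, descending
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    precision = cum_tp / (cum_tp + cum_fp)
    recall = cum_tp / n_ground_truths

    # Add sentinel points and make precision monotonically decreasing
    # (each precision becomes the max precision at any higher recall).
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])

    # Sum the rectangle areas wherever recall changes.
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))
```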

rafaelpadilla commented 2 months ago

I believe the question was answered. :+1: So I am closing this issue. Please reopen it if you think it is needed. :pray: