rafaelpadilla / review_object_detection_metrics

Object Detection Metrics: 14 object detection metrics, including mean Average Precision (mAP), Average Recall (AR), and Spatio-Temporal Tube Average Precision (STT-AP). This project supports different bounding box formats, as used in COCO, PASCAL VOC, ImageNet, etc.

About AP's algorithm accuracy #78

Closed: Zengyf-CVer closed this issue 2 years ago

Zengyf-CVer commented 3 years ago

@rafaelpadilla Hello, I have tried out your software and it feels pretty good, but I have some doubts. I obtained some numbers by running validation on YOLOv5, as shown in the figure: [image: YOLOv5 validation results]

So I used your software to get some data:

COCO METRICS:
AP: 0.763537920449568
AP50: 0.9722419379479782
AP75: 0.9184790254426233
APsmall: 0.6358333595976987
APmedium: 0.842775285070108
APlarge: 0.8797579757975797
AR1: 0.624538511802663
AR10: 0.7942906150453319
AR100: 0.7942906150453319
ARsmall: 0.6921082621082619
ARmedium: 0.8623453637498582
ARlarge: 0.8833333333333333

PASCAL METRIC (AP per class)
prohibitory: 0.9975189505121015
danger: 0.9801029159519725
mandatory: 0.9450989163747956

PASCAL METRIC (mAP)
mAP: 0.9742402609462898

I compared the mAP@0.5 and mAP@[.5:.95] of the two. As the data above shows, the AP50 value from YOLOv5 is 0.975, while your tool gives 0.9722 (COCO) and 0.9742 (VOC). That difference is small and acceptable, but the AP values differ considerably: YOLOv5 reports 0.814, while your tool gives 0.7635 (COCO). I don't understand why.
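For context on why AP can diverge more than AP50: COCO's AP is the mean of per-IoU-threshold APs over IoU = 0.50, 0.55, ..., 0.95, so the stricter thresholds always pull it below AP50, and any small per-threshold disagreement between two implementations accumulates in the average. A minimal illustrative sketch (not code from either tool; `ap_at_iou` is a hypothetical mapping from threshold to AP):

```python
import numpy as np

# The 10 IoU thresholds COCO averages over: 0.50, 0.55, ..., 0.95
IOU_THRESHOLDS = np.round(np.arange(0.50, 1.00, 0.05), 2)

def coco_ap(ap_at_iou):
    """AP@[.5:.95]: the mean of the per-threshold APs.

    ap_at_iou: dict mapping each IoU threshold to the AP computed at
    that threshold (hypothetical input, for illustration only).
    """
    return float(np.mean([ap_at_iou[t] for t in IOU_THRESHOLDS]))
```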

YOLOv5's evaluation metric uses your old algorithm: https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L21-L81 I don't know whether you have since updated that old algorithm?

rafaelpadilla commented 3 years ago

Hi @Zengyf-CVer

Each work uses a different approach to interpolating the points.

Even though the YOLOv5 repo cites our previous repo, they compute the AP in a different manner. If you compare both codes, you will see they are different.

It seems they use another approach, while our other repo follows the PASCAL VOC way of computing the AP, which uses either all points or 11 points for interpolation.
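For reference, a minimal sketch of the two PASCAL VOC interpolation variants mentioned above, assuming `recall` and `precision` are NumPy arrays ordered by descending detection confidence (illustrative, not this repository's exact implementation):

```python
import numpy as np

def ap_11_point(recall, precision):
    """VOC 2007 style: sample the max precision at the 11 recall
    levels 0.0, 0.1, ..., 1.0 and average them."""
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recall >= t
        ap += (precision[mask].max() if mask.any() else 0.0) / 11.0
    return ap

def ap_all_points(recall, precision):
    """VOC 2010+ style: exact area under the precision envelope."""
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Precision envelope: make precision monotonically non-increasing.
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    # Sum the rectangles at the points where recall changes.
    i = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]))
```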

Why do they use 1000 points for interpolation, as can be seen here?

I would suggest you use either our tool or the official COCO or PASCAL VOC tools. With our code you will obtain exactly the same results as the official tools.

Regards, Rafael

Zengyf-CVer commented 3 years ago

@rafaelpadilla I have also been researching this issue during this period and have made some progress.

First, I studied the source code of pycocotools, and its results are highly consistent with those of your project.

Second, you said that YOLOv5 uses 1000-point interpolation, and I have some questions about that. The following line of code actually builds the PR curve, obtaining its horizontal and vertical coordinates: https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L44 I think YOLOv5 provides two interpolation methods, 101-point interpolation and all-point interpolation, as in the following code: https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L101-L107 As you can see from that code, YOLOv5 uses COCO's 101-point interpolation by default, but I could not find the 1000-point interpolation you mentioned. Where does it come from?
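For readers following along, here is a paraphrase of the logic at the linked lines (illustrative, not a verbatim copy of YOLOv5's `compute_ap`; the sentinel padding in the real code differs slightly):

```python
import numpy as np

def compute_ap_sketch(recall, precision, method="interp"):
    """Sketch of YOLOv5's AP integration, assuming recall/precision
    are arrays ordered by descending detection confidence."""
    # Pad the curve so it spans the full recall axis.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    # Precision envelope: monotonically non-increasing.
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    if method == "interp":  # default: 101-point, COCO-style
        x = np.linspace(0, 1, 101)
        ap = np.trapz(np.interp(x, mrec, mpre), x)
    else:  # 'continuous': all points where recall changes
        i = np.where(mrec[1:] != mrec[:-1])[0]
        ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
    return float(ap)
```

Note that even at the same 101 recall levels this is not identical to pycocotools: pycocotools looks up the envelope precision at each of its 101 fixed recall thresholds and averages those values, whereas the sketch above integrates an interpolated curve with `np.trapz`, so small numerical differences between the two are expected.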

github-actions[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.