Closed — Zengyf-CVer closed this issue 2 years ago
Hi @Zengyf-CVer
Every work uses a different approach to interpolating the points.
Even though the YOLOv5 repo cites our previous repo, they compute the AP in a different manner. If you compare the two codebases, you can see they are different.
It seems they use another approach, while our other repo follows the PASCAL VOC way of computing the AP, which uses either all points or 11 points for interpolation.
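For context, the two PASCAL VOC variants mentioned above can be sketched as follows. This is illustrative code with my own function names, assuming `recall`/`precision` arrays sorted by increasing recall; it is not the exact code from either repo:

```python
import numpy as np

def voc_ap_11_point(recall, precision):
    """11-point interpolated AP: average, over the 11 recall levels
    {0.0, 0.1, ..., 1.0}, the maximum precision found at any recall
    greater than or equal to that level."""
    ap = 0.0
    for t in np.linspace(0.0, 1.0, 11):
        mask = recall >= t
        p = precision[mask].max() if mask.any() else 0.0
        ap += p / 11.0
    return ap

def voc_ap_all_points(recall, precision):
    """All-point interpolation: make the precision curve monotonically
    decreasing (the 'envelope'), then sum the exact area under the
    resulting step curve."""
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Envelope: precision at recall r = max precision at any recall >= r
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    # Sum rectangle areas where recall changes
    idx = np.where(mrec[1:] != mrec[:-1])[0]
    return np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])
```

The two variants can give noticeably different numbers on the same PR curve, which is one reason results differ across evaluation tools.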
Why do they use 1000 points for interpolation, as seen here?
I would suggest you use either our tool or the official COCO or PASCAL VOC tools. With our code you will obtain exactly the same results as the official tools.
Regards, Rafael
@rafaelpadilla I have also been researching this issue recently and have made some progress.

First, I studied the source code of pycocotools, and its results are highly consistent with your project.

Second, you said that YOLOv5 uses 1000-point interpolation, and I have some questions about that. https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L44

The line of code above actually builds the PR curve and obtains its horizontal and vertical coordinates. I think YOLOv5 provides two interpolation methods: 101-point interpolation and all-point interpolation, as in the following code: https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L101-L107

As you can see from the code, YOLOv5 uses COCO's 101-point interpolation by default, but I couldn't find where that 1000-point interpolation happens.
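For reference, the 101-point scheme that pycocotools uses can be sketched as follows. This is my own illustrative code assuming pycocotools-style sampling (precision envelope evaluated at 101 evenly spaced recall thresholds); it is not YOLOv5's exact implementation, which interpolates the curve differently:

```python
import numpy as np

def coco_ap_101_point(recall, precision):
    """COCO-style 101-point AP: sample the monotonically decreasing
    precision envelope at recall thresholds 0.00, 0.01, ..., 1.00 and
    average the 101 sampled values. Assumes `recall` is sorted ascending."""
    # Envelope: precision at recall r = best precision at any recall >= r
    mpre = np.flip(np.maximum.accumulate(np.flip(precision)))
    thresholds = np.linspace(0.0, 1.0, 101)
    # Index of the first recall >= each threshold (out of range -> precision 0)
    idx = np.searchsorted(recall, thresholds, side="left")
    q = np.zeros(101)
    valid = idx < len(mpre)
    q[valid] = mpre[idx[valid]]
    return q.mean()
```

Because the average is taken over a fixed grid of 101 thresholds, the result is slightly discretized compared with integrating the full area under the curve, which can account for small differences between tools.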
@rafaelpadilla Hello, I have tried your software, and it works pretty well, but I have some doubts. I obtained some data through validation on YOLOv5, as shown in the figure:
So I used your software to get some data:
I compared the mAP@0.5 and mAP@[.5:.95] of the two. As the data above shows, the AP50 value of YOLOv5 is 0.975, while your tool gives 0.9722 (COCO) and 0.974 (VOC). That difference is small and acceptable, but the mAP@[.5:.95] values differ considerably: YOLOv5 reports 0.814, while your tool gives 0.7635 (COCO). Do you know why?
YOLOv5's evaluation metric uses your old algorithm: https://github.com/ultralytics/yolov5/blob/cce7e78faed801377ccbdb04ebfd8cc45ad28ed9/utils/metrics.py#L21-L81 Have you upgraded that old algorithm since then?