meituan / YOLOv6

YOLOv6: a single-stage object detection framework dedicated to industrial applications.

Different evaluation results when not using coco_metric #881

Open buyukkanber opened 1 year ago

buyukkanber commented 1 year ago


Question

Hello!

When I evaluate the val set with my trained final model weights (single-class custom dataset), using '--do_coco_metric=True' and '--do_pr_metric=False', I get the result below:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.713
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.846

However, when I change them to '--do_coco_metric=False' and '--do_pr_metric=True', I get the result below:

  P@.5iou     R@.5iou      mAP@.5     mAP@.5:.95
    0.972      0.944        0.962       0.8

There is quite a difference between the 0.713 and 0.8 scores. Which one is the correct result? Could you please elaborate on this difference in the metric scores?
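
For reference, the first set of numbers above is the standard pycocotools summary. A minimal sketch of that style of evaluation is shown below; the annotation and prediction file names are placeholders rather than the actual YOLOv6 paths, and it is only meant to illustrate what the COCO metric computes.

```python
# Sketch of a pycocotools (COCO-metric) evaluation, assuming the ground truth
# and the model's detections have already been exported in COCO JSON format.
# File names are placeholders for illustration only.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/instances_val.json')   # ground-truth annotations
coco_dt = coco_gt.loadRes('predictions.json')      # detections in COCO result format

evaluator = COCOeval(coco_gt, coco_dt, iouType='bbox')
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints the AP @[IoU=0.50:0.95], AP @0.50, ... table
```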

Additional

No response

Chilicyy commented 1 year ago

Hi @buyukkanber, do_coco_metric means using pycocotools to evaluate the metrics, which is commonly used in papers and competitions, while do_pr_metric means using evaluation scripts similar to those in YOLOv5 and YOLOv7, so that you can conveniently compare results with those methods.
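
For intuition on how the do_pr_metric numbers are produced: a YOLOv5-style evaluation ranks detections by confidence, marks each as a true or false positive at a fixed IoU threshold, and integrates the resulting precision-recall curve per class. The sketch below is illustrative only; the function name and toy inputs are made up, and the real scripts use slightly different interpolation and per-class accumulation.

```python
# Illustrative YOLOv5-style AP computation (not the exact YOLOv6 code).
import numpy as np

def average_precision(tp, conf, n_gt):
    """tp: 1/0 true-positive flags per detection, conf: confidences, n_gt: ground-truth count."""
    conf = np.asarray(conf, dtype=float)
    tp = np.asarray(tp, dtype=float)[np.argsort(-conf)]   # rank detections by confidence
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / max(n_gt, 1e-9)          # fraction of ground truth found
    precision = tp_cum / (tp_cum + fp_cum)     # fraction of detections that are correct

    # Monotonic envelope of the precision-recall curve, then exact area under it.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([1.0], precision, [0.0]))
    mpre = np.flip(np.maximum.accumulate(np.flip(mpre)))
    idx = np.where(mrec[1:] != mrec[:-1])[0]   # points where recall changes
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

# Toy example: 4 detections against 3 ground-truth boxes at a single IoU threshold.
print(average_precision(tp=[1, 1, 0, 1], conf=[0.9, 0.8, 0.7, 0.6], n_gt=3))  # ~0.92
```

Because the two modes differ in details such as how detections are matched to ground truth, how the precision-recall curve is interpolated, and settings like maxDets and area ranges, some gap between their mAP@0.5:0.95 values is expected; the pycocotools result is the one normally quoted when comparing against published papers.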