AlexeyAB opened 5 years ago
@AlexeyAB Hi, Thanks for sharing these! These are super helpful.
Quick question, did you try verifying your mAP script result matches the one mentioned in Yolo3 paper on the COCO test-dev set? Thanks!
@pjspillai Hi,
There are no public annotations for the COCO test-dev set, so there is nothing to use with the detector map command. There is only the 2014 Testing Image info [1MB], which doesn't contain bboxes: http://cocodataset.org/#download
How to check Yolo v3 on COCO test-dev set on evaluation server is described here: https://github.com/AlexeyAB/darknet/issues/2145
You can check the yolov3-spp.weights model on the COCO 2014 val-set (5k images), which you can get by using this script: https://github.com/AlexeyAB/darknet/blob/master/scripts/get_coco_dataset.sh
and get mAP@0.5 ~= 59.3%
@AlexeyAB Hi, thanks for your elegant solution! Could you provide a more convenient method to calculate AP@[.5, .95]? e.g.:
./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 101 -iou_thresh 0.5~0.95
Thanks!
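As far as I can tell there is no built-in 0.5~0.95 range syntax in detector map, but a small wrapper can generate the ten runs instead. A minimal sketch (the command layout is copied from the examples in this thread; the file paths are placeholders):

```python
# Sketch: build the ten `detector map` command lines needed for
# AP@[.5, .95] - one -iou_thresh value per run (paths are placeholders).
def map_commands(data="cfg/coco.data", cfg="cfg/yolov3-spp.cfg",
                 weights="yolov3-spp.weights"):
    cmds = []
    for i in range(10):
        t = round(0.50 + 0.05 * i, 2)  # 0.5, 0.55, ..., 0.95
        cmds.append(f"./darknet detector map {data} {cfg} {weights} "
                    f"-points 101 -iou_thresh {t}")
    return cmds
```

The mAP reported by each run would then be averaged by hand or by a script.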
Here we provide YOLOv3-320 evaluation results on COCO val 2017 from this implementation and from cocoapi, respectively. It seems there are some differences between them.
repo | AP50 | AP75 | script |
---|---|---|---|
cocoapi | 63.6 | 35.6 | ./darknet detector valid cfg/coco.data cfg/yolov3.cfg yolov3.weights -out yolov3-320, then eval the yolov3-320.json with cocoapi |
darknet-AlexeyAB | 62.29 | 34.55 | ./darknet detector map cfg/coco.data cfg/yolov3.cfg yolov3.weights -points 101 -iou_thresh 0.5 or 0.75 |
COCOAPI: (https://github.com/cocodataset/cocoapi)
This implementation (AP50 & AP75):
Thanks for your attention.
@ChenJoya Hi,
There are additional params (iscrowd) in the COCO dataset which can be taken into account by the pycocotools script (cocoeval.py), but Yolo can't take them into account, because Yolo labels don't support them.
What mAP can you get if you change this line: https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py#L281
to this: if gtm[tind,gind]>0:
And what mAP can you get if you make the previous change and also change this line: https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py#L284
to this: if m>-1 and gtIg[m]==0:
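To illustrate the iscrowd point above: in pycocotools, crowd ground truths act as ignore regions, so a detection matched to one counts as neither a true positive nor a false positive, while darknet's label format has no such flag. A toy sketch of that matching rule (deliberately simplified, not pycocotools' exact logic):

```python
# Toy greedy matcher: GTs flagged iscrowd are "ignore" regions.
# Detections matched to them are dropped from both TP and FP counts.
def match_detections(dets, gts, iou, thresh=0.5):
    """dets sorted by descending score; iou[d][g] precomputed. Returns (tp, fp)."""
    matched, tp, fp = set(), 0, 0
    for d in range(len(dets)):
        best_g, best_iou = -1, thresh
        for g in range(len(gts)):
            if g in matched and not gts[g]["iscrowd"]:
                continue  # crowd regions may absorb many detections
            if iou[d][g] >= best_iou:
                best_g, best_iou = g, iou[d][g]
        if best_g < 0:
            fp += 1                       # no GT overlaps enough
        elif gts[best_g]["iscrowd"]:
            pass                          # ignored: neither TP nor FP
        else:
            matched.add(best_g)
            tp += 1
    return tp, fp
```

With iscrowd always 0 (the Yolo situation), every unmatched detection becomes a false positive, which would push the computed mAP slightly down, consistent with the table above.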
@AlexeyAB can you please confirm the command for running mAP (or evaluation for that matter) on PASCAL-VOC 07? I see you have it listed as
./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 11
but the reference to coco.data seems wrong. I also get issues running this command as it looks for the file coco_testdev. I imagine the line should be
./darknet detector map cfg/voc.data cfg/yolov3-voc.cfg yolov3-voc.weights -points 11
However, I am having trouble finding the yolov3-voc.weights.
If I try to run it with the yolov3-spp.weights and yolov3-spp.cfg it also does not work (number of classes mismatch). I believe that those weights are also trained on MS-COCO, since the cfg is set to 80 classes. The same happens with yolov3.weights and yolov3.cfg.
Can you please confirm the command to run on PASCAL-VOC07, and if you have a link to the pre-trained weights that would be immensely appreciated.
There is no trained model of Yolo v3 for Pascal VOC.
There is a Yolo v2 model for Pascal VOC: https://github.com/AlexeyAB/darknet#pre-trained-models

- yolov2.cfg (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
- yolo-voc.cfg (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
How to check the mAP on Pascal VOC: https://github.com/AlexeyAB/darknet#how-to-calculate-map-on-pascalvoc-2007
Thank you for the quick reply,
Cheers
Different approaches to mAP (mean average precision) calculation:

- -points 101 for MS COCO
- -points 11 for PascalVOC 2007 (uncomment difficult in voc.data)
- -points 0 for ImageNet, PascalVOC 2010-2012 and your custom dataset

For example:
- use this command to calculate mAP@0.5 for ImageNet, PascalVOC 2010-2012 and your custom dataset:
./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights
- use this command to calculate mAP@0.5 for the PascalVOC 2007 dataset:
./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 11
- use this command to calculate mAP@0.5 for the MSCOCO dataset:
./darknet detector map cfg/coco.data cfg/yolov3-spp.cfg yolov3-spp.weights -points 101 -iou_thresh 0.5
- use the previous command with -iou_thresh set to each of 0.50, 0.55, ..., 0.95 to calculate mAP@[.5, .95] for the MSCOCO dataset, then calculate:
AP@[.5, .95] = mAP@IoU=0.50:0.05:0.95 = (mAP@IoU=0.50 + mAP@IoU=0.55 + mAP@IoU=0.60 + mAP@IoU=0.65 + mAP@IoU=0.70 + mAP@IoU=0.75 + mAP@IoU=0.80 + mAP@IoU=0.85 + mAP@IoU=0.90 + mAP@IoU=0.95) / 10
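The averaging step above, as a tiny Python sketch (the per-IoU mAP values in the example are hypothetical placeholders, not measured numbers):

```python
# AP@[.5, .95] = mean of mAP@IoU over IoU = 0.50, 0.55, ..., 0.95.
def ap_5095(map_per_iou):
    """map_per_iou: dict mapping IoU threshold -> mAP (e.g. in percent)."""
    thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]
    missing = [t for t in thresholds if t not in map_per_iou]
    assert not missing, f"missing IoU thresholds: {missing}"
    return sum(map_per_iou[t] for t in thresholds) / len(thresholds)

# Hypothetical placeholder values from ten `detector map` runs:
example = {0.50: 60.0, 0.55: 58.0, 0.60: 55.0, 0.65: 51.0, 0.70: 46.0,
           0.75: 40.0, 0.80: 33.0, 0.85: 25.0, 0.90: 16.0, 0.95: 6.0}
```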
I.e.:

- MS COCO uses 101 points on the Precision-Recall curve: https://github.com/cocodataset/cocoapi/blob/ed842bffd41f6ff38707c4f0968d2cfd91088688/PythonAPI/pycocotools/cocoeval.py#L507-L508
- PascalVOC 2007 uses 11 points on the Precision-Recall curve: https://github.com/rbgirshick/py-faster-rcnn/blob/781a917b378dbfdedb45b6a56189a31982da1b43/lib/datasets/voc_eval.py#L37-L45
- PascalVOC 2010-2012 and ImageNet use each unique point on the Precision-Recall curve, i.e. calculate the Area Under Curve without approximation: https://github.com/rbgirshick/py-faster-rcnn/blob/781a917b378dbfdedb45b6a56189a31982da1b43/lib/datasets/voc_eval.py#L46-L61

URLs:
https://mc.ai/which-one-to-measure-the-performance-of-object-detectors-ap-or-olrp/
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap
https://medium.com/@jonathan_hui/map-mean-average-precision-for-object-detection-45c121a31173
https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
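The three interpolation schemes linked above can be sketched in a few lines of Python. This is a toy illustration over an already-computed PR curve, not the pycocotools or voc_eval code itself:

```python
def interp_ap(recalls, precisions, scheme="auc"):
    """AP from a PR curve under three interpolation schemes.
    recalls/precisions: lists sorted by ascending recall."""
    # Monotone precision envelope: right-to-left running maximum.
    env = list(precisions)
    for i in range(len(env) - 2, -1, -1):
        env[i] = max(env[i], env[i + 1])

    def p_at(r):
        # Max precision at recall >= r (0 if recall never reaches r).
        for rec, p in zip(recalls, env):
            if rec >= r:
                return p
        return 0.0

    if scheme == "voc07":   # 11 points: 0.0, 0.1, ..., 1.0
        return sum(p_at(i / 10) for i in range(11)) / 11
    if scheme == "coco":    # 101 points: 0.00, 0.01, ..., 1.00
        return sum(p_at(i / 100) for i in range(101)) / 101
    # voc10/ImageNet: exact area under the envelope, no approximation.
    ap, prev_r = 0.0, 0.0
    for rec, p in zip(recalls, env):
        ap += (rec - prev_r) * p
        prev_r = rec
    return ap
```

Running all three schemes on the same curve shows why the numbers in this thread differ slightly depending on the -points setting.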