ultralytics / yolov3

YOLOv3 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Default mAP Calculations test.py #376

Closed ktian08 closed 5 years ago

ktian08 commented 5 years ago

Hi,

I was just wondering how the mAPs in the README were calculated. The defaults in test.py and data/coco.data seem to point to the validation set as the "test dataset," yet the mAPs are compared against darknet's numbers, and on the official YOLO site the author says he evaluated on COCO test-dev, which is a separate split reserved for testing. I'm assuming the defaults were not used for the README mAPs, and that the test-dev set was used instead?

ktian08 commented 5 years ago

Running test.py with YOLOv3-416 (not SPP) and yolov3.weights on the 2014 COCO 5k validation set gives me an mAP of 0.5409, not the reported 0.554. Should I assume the 0.554 came from the test-dev evaluation server?

glenn-jocher commented 5 years ago

The 0.554 shown in the README is the official pycocotools output, which can be computed on COCO data. Evaluation is done on the 5000 images listed in data/5k.txt. To reproduce, run:

python3 test.py --weights weights/yolov3.weights --cfg cfg/yolov3.cfg --save-json
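For background on what pycocotools reports, here is a minimal, illustrative sketch of how average precision is computed from a ranked list of detections. This is not the repo's code: it uses all-point interpolation over a single IoU threshold for clarity, whereas pycocotools uses 101-point interpolation averaged over IoU thresholds 0.50:0.95 (the README's 0.554 corresponds to AP at IoU 0.5).

```python
def average_precision(detections, num_gt):
    """Compute AP from (confidence, is_true_positive) pairs.

    Illustrative sketch only: all-point interpolation at one IoU
    threshold. pycocotools uses 101-point interpolation averaged
    over IoU 0.50:0.95 for its headline mAP.
    """
    # Rank detections by descending confidence.
    detections = sorted(detections, key=lambda d: -d[0])

    tp = fp = 0
    recalls, precisions = [], []
    for _conf, is_tp in detections:
        tp += is_tp
        fp += 1 - is_tp
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))

    # Make the precision envelope monotonically non-increasing
    # (standard step before integrating the PR curve).
    max_prec = 0.0
    for i in range(len(precisions) - 1, -1, -1):
        max_prec = max(max_prec, precisions[i])
        precisions[i] = max_prec

    # Integrate precision over recall.
    ap, prev_recall = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_recall) * p
        prev_recall = r
    return ap
```

With two ground-truth boxes and detections ranked TP, FP, TP, this returns 5/6 ≈ 0.833: full precision up to recall 0.5, then 2/3 precision for the remaining recall.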