AlgorithmicIntelligence opened 3 years ago
I just got AP = 21.8% and AP50 = 40.3%
darknet.exe detector valid F:/MSCOCO/coco_f.data cfg/yolov4-tiny.cfg yolov4-tiny.weights
I renamed the file in the /results folder to detections_test-dev2017_yolov4tiny416_results.json, compressed it to detections_test-dev2017_yolov4tiny416_results.zip, and then submitted it to https://competitions.codalab.org/competitions/20794#participate-get-data
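The rename-and-compress step is easy to get wrong (the server rejects archives whose file names don't match the expected pattern). A minimal sketch with Python's standard library, assuming darknet wrote its default /results/coco_results.json; the helper name package_results is mine:

```python
import os
import zipfile

def package_results(src="results/coco_results.json",
                    stem="detections_test-dev2017_yolov4tiny416_results"):
    """Rename darknet's output JSON to the COCO-server naming pattern
    and compress it into a zip archive next to it."""
    folder = os.path.dirname(src)
    dst_json = os.path.join(folder, stem + ".json")
    os.rename(src, dst_json)
    dst_zip = os.path.join(folder, stem + ".zip")
    # The JSON must sit at the top level of the archive, so write it
    # with only its base name, not the full path.
    with zipfile.ZipFile(dst_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(dst_json, arcname=os.path.basename(dst_json))
    return dst_zip
```

The resulting zip is then uploaded manually on the CodaLab page linked above.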
Result (stdout.txt):
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.218
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.403
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.215
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.083
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.261
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.290
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.216
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.366
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.394
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.163
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.462
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.564
Done (t=749.29s)
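If you want to compare runs programmatically rather than by eye, the summary lines above follow the fixed format that pycocotools prints, so they can be parsed with a short regex. parse_cocoeval below is a hypothetical helper for this thread, not part of darknet or pycocotools:

```python
import re

# Matches pycocotools COCOeval summary lines such as:
#  Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.218
_LINE = re.compile(
    r"Average (?:Precision|Recall)\s*\((AP|AR)\)\s*"
    r"@\[\s*IoU=([\d.:]+)\s*\|\s*area=\s*(\w+)\s*\|\s*maxDets=\s*(\d+)\s*\]"
    r"\s*=\s*([\d.]+)"
)

def parse_cocoeval(text):
    """Return {(metric, iou, area, maxDets): value} for each summary line."""
    out = {}
    for m in _LINE.finditer(text):
        metric, iou, area, maxdets, value = m.groups()
        out[(metric, iou, area, int(maxdets))] = float(value)
    return out
```

For example, parse_cocoeval on the output above yields 0.218 for the key ("AP", "0.50:0.95", "all", 100), which makes diffing two submissions a one-liner.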
Thank you for replying so quickly. It seems that every step I took is the same as yours, but I got a slightly worse result. Could you tell me which branch you used for the experiment?
There is only one branch (master) in this repository: https://github.com/AlexeyAB/darknet
Download all the files again.
Sorry for the confusion; I should have asked which commit ID you used.
Last commit.
Hello @AlexeyAB, my result is close to @AlgorithmicIntelligence's:
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.203
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.386
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.194
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.075
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.239
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.272
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.203
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.347
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.374
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.143
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.437
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.544
Done (t=754.73s)
I also followed the instructions in the documentation; I am not sure whether I went wrong at any step.
Thank you for sharing such a fantastic model, but I can't reproduce the 40.2% mAP reported for YOLOv4-tiny on the evaluation server. Even though I followed the instructions in your repository, I get only 38.6% mAP. The instructions I followed are below:
How to evaluate AP of YOLOv4 on the MS COCO evaluation server

1. Download the yolov4-tiny.weights file.
2. The content of the file cfg/coco.data should be: …
3. Create a /results/ folder next to the ./darknet executable file.
4. Run: ./darknet detector valid cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights
5. Rename the file /results/coco_results.json to detections_test-dev2017_yolov4-tiny_results.json and compress it to detections_test-dev2017_yolov4-tiny_results.zip
6. Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) phase.
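For reference, a typical cfg/coco.data for test-dev evaluation looks roughly like the sketch below. The valid path is a placeholder for your testdev2017.txt image list, and the eval = coco line is what makes darknet emit COCO-format JSON into /results/; check the repository README for the authoritative contents:

```
classes = 80
valid = <path>/testdev2017.txt
names = data/coco.names
backup = backup/
eval = coco
```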