zzh8829 / yolov3-tf2

YoloV3 Implemented in Tensorflow 2.0
MIT License

yolov3 tiny - evaluation on pretrained weights gives lower accuracy than expected #391

Open · chenhayat opened this issue 2 years ago

chenhayat commented 2 years ago

According to the table on https://pjreddie.com/darknet/yolo/, accuracy with the pre-trained weights on the COCO dataset should be 0.331 mAP:

| Model | Train | Test | mAP | FLOPS | FPS | Cfg | Weights |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv3-tiny | COCO trainval | test-dev | 33.1 | 5.56 Bn | 220 | cfg | weights |

However, when I evaluate it on COCO 2017 with 2000 images I get 0.161 mAP.

Update: it seems the pre-trained weights were generated with a different anchor mask than the one in this repo. After changing the mask from

`yolo_tiny_anchor_masks = np.array([[3, 4, 5], [0, 1, 2]])`

to

`yolo_tiny_anchor_masks = np.array([[3, 4, 5], [1, 2, 3]])`

I get a better baseline accuracy: 0.252.
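For reference, a minimal sketch of that change, assuming the tiny anchors and masks are defined in `yolov3/models.py` as in this repo:

```python
import numpy as np

# yolov3/models.py -- tiny-YOLO anchors (unchanged)
yolo_tiny_anchors = np.array([(10, 14), (23, 27), (37, 58),
                              (81, 82), (135, 169), (344, 319)],
                             np.float32) / 416

# Mask as shipped in the repo: the 13x13 head uses anchors 3-5,
# the 26x26 head uses anchors 0-2.
# yolo_tiny_anchor_masks = np.array([[3, 4, 5], [0, 1, 2]])

# Mask that, per the update above, appears to match the darknet
# pre-trained weights: anchor 3 is reused by the 26x26 head.
yolo_tiny_anchor_masks = np.array([[3, 4, 5], [1, 2, 3]])
```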

But the accuracy is still lower than the expected 0.331 mAP. What baseline accuracy did you get for the tiny model? Do you have any suggestions on what needs to be changed in order to reach it?

lilian-zh commented 2 years ago

Hello, would you mind advising how to evaluate mAP? There is no related script in the repo. Thank you.
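For anyone who needs it, here is a rough sketch of one way to score detections with `pycocotools`; it is not a script from this repo. The annotation path, `detections.json`, and its COCO results format are assumptions about how you dump the model output.

```python
# Assumes detections have already been written to detections.json in the
# standard COCO results format:
# [{"image_id": int, "category_id": int, "bbox": [x, y, w, h], "score": float}, ...]
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ANN_FILE = "annotations/instances_val2017.json"  # ground truth (assumed path)
DET_FILE = "detections.json"                     # model output (assumed path)

coco_gt = COCO(ANN_FILE)
coco_dt = coco_gt.loadRes(DET_FILE)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
# Optionally restrict scoring to the images you actually ran:
# coco_eval.params.imgIds = sorted(coco_dt.getImgIds())
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```

`summarize()` prints AP@[.50:.95] on the first line and AP@.50 on the second, so you can compare against whichever metric the darknet table reports.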