Hey @david8862, I tried to convert YOloV4-Leaky from the Darknet model zoo to TensorFlow. Unfortunately, I found that the mAP behaves strangely: on COCO test-dev2017 with a confidence threshold of 0.001 I get an AP0.50:0.95 of only 0.365. Furthermore, I compared the 416x416 variant of this model with the 512x512 variant on val2017 and found that the mAP is higher at 416x416, which confuses me a lot. Is it possible that something in the pre-/post-processing is wrong, or what else could be causing this?
@mischlox which .cfg and .weights files did you use for the conversion, and how did you evaluate the mAP?
For evaluation I used eval.py from this repository.
After further testing, I found that I had not adjusted the confidence threshold properly for the comparison with Darknet. With that fixed, the results are more reasonable.
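For illustration, here is a minimal sketch (not the actual eval.py code; the detection records are made up) of how the confidence threshold changes what gets evaluated:

```python
# Hypothetical detection records in COCO-style result format
all_detections = [
    {'image_id': 1, 'category_id': 1, 'bbox': [10, 20, 50, 80], 'score': 0.92},
    {'image_id': 1, 'category_id': 1, 'bbox': [300, 40, 60, 90], 'score': 0.004},
]

def filter_detections(detections, conf_threshold):
    # mAP integrates over the full precision/recall curve, so evaluation
    # normally uses a very low threshold (e.g. 0.001) to keep
    # low-confidence detections in the curve; a high "deployment"
    # threshold truncates the curve and lowers the reported AP.
    return [det for det in detections if det['score'] > conf_threshold]

eval_dets = filter_detections(all_detections, 0.001)   # keeps both
deploy_dets = filter_detections(all_detections, 0.25)  # keeps only the first
```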
Nevertheless, when I test the yolov4-leaky variant from the YOLOv4 model zoo at different resolutions (416x416, 512x512, 608x608), I get the following mAP results on test-dev2017:

- 512x512: 40.7
- 416x416: 40.0
- 608x608: 39.4
Normally, the mAP at 608x608 should be higher than at 512x512.
eval.py uses a simplified way of calculating MS COCO AP, so its numbers may differ somewhat from the official pycocotools result. You can refer to evaluation to get the official result. Also, I notice that the Darknet cfg files may use different anchor values for different resolutions. See yolov4-leaky-416.cfg and yolov4-leaky.cfg.
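For example, the official numbers on val2017 can be reproduced with pycocotools roughly like this (the file paths are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth annotations and the detection JSON produced by eval.py
coco_gt = COCO('annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('detection_result.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP@[0.50:0.95], AP@0.50, AP@0.75, ...
```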
I only generated the JSON file with the bboxes using the eval.py script, so I did use that implementation of the pre- and post-processing. The detection results were then evaluated on the COCO server to get the mAP on the test set. The anchor boxes and input sizes were also adjusted accordingly.
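For context, the COCO test server expects the detections in the standard results format, one record per box (the values below are made up; the file name follows the COCO submission naming convention):

```python
import json

# One record per detection; bbox is [x, y, width, height] in absolute
# pixel coordinates of the original image.
results = [
    {'image_id': 397133, 'category_id': 18,
     'bbox': [258.1, 41.3, 348.2, 243.8], 'score': 0.87},
]

# test-dev submissions are zipped JSON files named following the
# detections_test-dev2017_<algorithm>_results.json convention.
with open('detections_test-dev2017_yolov4leaky_results.json', 'w') as f:
    json.dump(results, f)
```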
Do you get the same results as the reference Darknet YOLOv4 mAP evaluation?
I read through the issues and found #82:

"I am doing comparisons and found a difference in how anchors are matched here versus in Ultralytics and Darknet YOLOv4. This repo matches each gt box with the single highest-IoU anchor. I believe this was the standard approach up to YOLOv3. YOLOv4, in its bag of freebies, lists 'IT: IoU threshold - using multiple anchors for a single ground truth IoU(truth, anchor) > IoU threshold'. This is present in the Ultralytics code but not in this repo."

Is this still an issue, and could it be the reason for the mAP variation across input sizes that I observed? (The two matching schemes are sketched below.)
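To make the quoted difference concrete, here is an illustrative numpy sketch of the two assignment schemes (the wh_iou helper and the gt box size are made up for illustration, not the actual repo code; only the anchor values and the 0.213 threshold come from the reference yolov4.cfg):

```python
import numpy as np

def wh_iou(gt_wh, anchors_wh):
    # IoU between one ground-truth box and each anchor, comparing
    # width/height only (all boxes treated as centered at the origin).
    inter = np.minimum(gt_wh[0], anchors_wh[:, 0]) * np.minimum(gt_wh[1], anchors_wh[:, 1])
    union = gt_wh[0] * gt_wh[1] + anchors_wh[:, 0] * anchors_wh[:, 1] - inter
    return inter / union

anchors_wh = np.array([[12., 16.], [19., 36.], [40., 28.]])  # first-scale yolov4 anchors
gt_wh = np.array([18., 30.])                                 # made-up gt box size

ious = wh_iou(gt_wh, anchors_wh)

# YOLOv3-style assignment: only the single best-matching anchor is positive.
best_anchor = int(np.argmax(ious))              # -> 1

# YOLOv4-style "IoU threshold" assignment: every anchor whose IoU exceeds
# the threshold becomes a positive match (iou_thresh=0.213 in yolov4.cfg).
matched_anchors = np.flatnonzero(ious > 0.213)  # -> [0, 1, 2]
```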
@mischlox actually I didn't compare the mAP result with the Darknet reference or the COCO server. The issue you mentioned only affects the loss calculation during training, so it should have no impact on the converted model.
OK, alright. Thank you very much for this information!