Open ZhengRui opened 4 years ago
For checking the accuracy of MSCOCO models, I use pycocotools or the codalab evaluation server. I don't know the reason for the difference. If you find a mistake in my code or in pycocotools, let me know :)
I thought of some potential reasons:

On the pycocotools side, set

self.maxDets = [1, 10, a_big_number_like_5000]

to use all detections. This changes map@IoU=0.5 very little.
https://github.com/cocodataset/cocoapi/blob/8c9bcc3cf640524c4c20a9c40e89cb6a2f2fa0e9/PythonAPI/pycocotools/cocoeval.py#L508

On the darknet side, there are some discrepancies between using ./darknet detector valid and ./darknet detector map:
validate_detector() uses thresh=0.001, while validate_detector_map() uses thresh=0.005:
https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L685
https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L948

Even with the same thresh, I am not sure valid and map will generate the same boxes, as I notice some differences here:
https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L732
https://github.com/AlexeyAB/darknet/blob/master/src/detector.c#L1013-L1018

I have tried not ignoring iscrowd boxes and not filtering with maxDets on the pycocotools side, while using the same thresh=0.001 and the same get_network_boxes parameters for valid and map on the darknet side (with the -letter_box option in the command), but I am still not able to get the same map@IoU=0.5. I haven't checked the downstream logic that calculates the PR curve and mAP; I focused on making detections_count equal to the number of boxes sent to pycocotools, but still have not been able to make the results match.
Do you have any further thoughts, @AlexeyAB? Thanks.
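For anyone trying to reproduce this: the two knobs discussed above (darknet's confidence thresh and pycocotools' maxDets cap) both act as pre-filters on the detection list before any matching happens. A minimal sketch with made-up scores, not code from either tool:

```python
# Sketch of the two pre-filters discussed above: a confidence threshold
# (darknet's thresh) and a per-image top-K cap (pycocotools' maxDets).
# The detections below are hypothetical.

def prefilter(dets, thresh, max_dets):
    """Drop detections below `thresh`, then keep the top `max_dets` by score."""
    kept = [d for d in dets if d["score"] >= thresh]
    kept.sort(key=lambda d: d["score"], reverse=True)
    return kept[:max_dets]

dets = [{"score": s} for s in (0.9, 0.4, 0.006, 0.003, 0.001)]

# valid-style threshold keeps more boxes than the map-style one:
print(len(prefilter(dets, 0.001, 100)))  # 5
print(len(prefilter(dets, 0.005, 100)))  # 3

# a small maxDets cap can drop low-score boxes even at thresh=0.001:
print(len(prefilter(dets, 0.001, 2)))    # 2
```

Extra low-score boxes rarely move map@IoU=0.5 much (they enter the PR curve last), but they make the detection counts differ, which is why the counts are worth matching first.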
@ZhengRui I don't know. Try to un-comment this line and recompile: https://github.com/AlexeyAB/darknet/blob/0ef5052ee51e82b2862fab5e9135b7bae060354f/src/detector.c#L1281
Try to use the same thresh=0.001
in both cases.
Also try to set 11 pr-points instead of 101 points in both Darknet and Pycocotool, for easier debugging.
Then compare Precision and Recall for one of the classes between Darknet and Pycocotool (but not the person class, to avoid the crowd issue).
@ZhengRui Is there any progress?
@tand826 Unfortunately I didn't get time to look further into this.
@AlexeyAB Thanks for the great work! I followed https://github.com/AlexeyAB/darknet/issues/2145#issuecomment-451908890 to get the map@IoU=0.5 of the yolov4.weights model on the COCO2017 validation set.

Method 1:
./darknet detector map ~/Work/Datasets/yolo_data/coco2017/coco.data cfg/yolov4.cfg weights/yolov4.weights -iou_thresh 0.50 -points 101
gives me map@IoU=0.5 73.54; the end of the log is like this:

Set -points flag:
-points 101 for MS COCO
-points 11 for PascalVOC 2007 (uncomment difficult in voc.data)
-points 0 (AUC) for ImageNet, PascalVOC 2010-2012, your custom dataset

Method 2:
python coco2017_data/coco_eval.py ../../Datasets/coco/annotations/instances_val2017.json ./results/coco_results.json bbox
gives map@IoU=0.5 74.9, and the log is:

Do you know why these two methods give different map@IoU=0.5? Maybe I misunderstood something?
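I can't tell from the logs above, but one assumption worth double-checking in a pipeline like this is box-format conversion: YOLO-style boxes are relative center-format (cx, cy, w, h), while COCO result files expect absolute top-left (x, y, width, height). If coco_results.json is built anywhere with the wrong convention, mAP shifts. A sketch of the standard conversion (the helper name is hypothetical, not code from this thread):

```python
# Standard YOLO (relative center x/y, width, height) -> COCO
# (absolute top-left x/y, width, height) box conversion.
# Hypothetical helper for illustration, not taken from darknet or coco_eval.py.

def yolo_to_coco_box(cx, cy, w, h, img_w, img_h):
    """Map a relative center-format box onto absolute COCO xywh."""
    abs_w = w * img_w
    abs_h = h * img_h
    x_min = cx * img_w - abs_w / 2.0
    y_min = cy * img_h - abs_h / 2.0
    return [x_min, y_min, abs_w, abs_h]

# A centered half-size box on a 640x480 image:
print(yolo_to_coco_box(0.5, 0.5, 0.5, 0.5, 640, 480))  # [160.0, 120.0, 320.0, 240.0]
```

A systematic half-width offset from a missing `/ 2.0` would lower IoUs against the ground truth and could plausibly account for a ~1-point mAP gap like the one reported here.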