Closed cjnjuwhy closed 4 years ago
The EfficientDet paper reports results on COCO. Has anyone already tried training on COCO instead of VOC? The current training settings also do not yet match the settings used in the paper (see the first paragraph of Section 5, "Experiments").
@cjnjuwhy No, I haven't tried COCO yet; I will do that in the next few days. @florisdesmedt Yes, some settings are copied from keras-retinanet just for convenience. To reproduce the results reported in the paper, we will need to tune these hyper-parameters.
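For reference, the training settings from the first paragraph of Section 5 of the EfficientDet paper can be collected in one place. The values below are quoted from memory of the paper, so double-check them against the text before copying them into a config:

```python
# Training hyper-parameters as described in EfficientDet, Section 5.
# Quoted from memory of the paper -- verify against the text before use.
PAPER_TRAIN_CONFIG = {
    "optimizer": "SGD",
    "momentum": 0.9,
    "weight_decay": 4e-5,
    "base_lr": 0.16,          # linear warmup to this value in the first epoch
    "lr_schedule": "cosine",  # cosine decay after the warmup
    "batch_size": 128,        # global batch size
    "epochs": 300,
    "focal_loss_alpha": 0.25,
    "focal_loss_gamma": 1.5,
    "ema_decay": 0.9998,      # exponential moving average of model weights
}
```

Several of these (cosine schedule, weight-decay value, EMA of weights) differ from the keras-retinanet defaults, so they are likely candidates for the accuracy gap discussed below.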
Great, results on COCO would be a good indication of how close this implementation gets to the paper's numbers :+1: Yesterday I tried to set up a COCO experiment myself, but ran into some issues:
I trained on the COCO 2017 training set, reusing the training regime from keras-retinanet (learning rate, learning-rate decay callbacks, ...). Evaluating on the COCO 2017 validation set gives the following results:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.245
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.390
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.258
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.083
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.276
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.405
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.242
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.377
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.397
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.113
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.467
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.647
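The twelve numbers above are the standard COCO summary metrics produced by pycocotools. A minimal sketch of that evaluation, assuming the detections have been exported in COCO results-JSON format (the file paths here are placeholders, not paths from this repo):

```python
def evaluate_coco(ann_file, results_file):
    """Run COCO bbox evaluation and return the 12 summary metrics.

    ann_file:     path to the ground-truth annotations JSON
                  (e.g. instances_val2017.json)
    results_file: path to the detections in COCO results-JSON format
    """
    # Imports kept inside the function so the sketch stays self-contained.
    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO(ann_file)                 # load ground truth
    coco_dt = coco_gt.loadRes(results_file)  # load detections to score
    coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()                    # prints the table above
    return coco_eval.stats                   # stats[0] is AP @[0.50:0.95]

# Usage (paths are examples):
# stats = evaluate_coco("annotations/instances_val2017.json",
#                       "detections_val2017.json")
# print(f"mAP: {stats[0]:.3f}")
```

`stats[0]` (AP averaged over IoU 0.50:0.95) is the single "mAP" number usually compared against the paper.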
I think the first metric (AP @[IoU=0.50:0.95]) is the one used in the paper, so that is 24.5% mAP vs. the 32.4% mAP reported in the paper for D0.
Trained on COCO 2017 with my own implementation, I get 26.5 mAP, still far from the 32.4 in the paper, which makes me think some details are left out of the paper. Maybe?
Trained on COCO 2017 with my own implementation, I get 26.5 mAP, still far from the 32.4 in the paper, which makes me think some details are left out of the paper. Maybe?
Hello, thank you very much for this information. How long did it take you to train on the COCO dataset, and on what hardware?
@huihua-season About one day on 8 GPUs with a batch of 16 per GPU. B3 gives me 37.4% mAP; AP-small is very low for B0.
@huihua-season D0 trained again, achieved 32.0% mAP; batch 12 per GPU, 8 × 1080 Ti, 130 epochs.
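One detail worth noting: the paper trains with a global batch of 128, while 12 per GPU on 8 GPUs gives a global batch of 96. A common heuristic when changing the global batch is the linear scaling rule (rescale the learning rate by the ratio of batch sizes); this is my assumption about how to adapt the schedule, not something stated in this thread:

```python
def scaled_lr(base_lr, base_batch, new_batch):
    """Linear scaling rule: rescale the lr by the batch-size ratio.
    This is a common heuristic, not a setting confirmed in this thread."""
    return base_lr * new_batch / base_batch

global_batch = 12 * 8                    # batch per GPU x number of GPUs
lr = scaled_lr(0.16, 128, global_batch)  # 0.16 is the paper's peak lr
print(f"global batch {global_batch}, lr {lr:.3f}")  # global batch 96, lr 0.120
```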
@huihua-season D0 trained again, achieved 32.0% mAP; batch 12 per GPU, 8 × 1080 Ti, 130 epochs.
Thank you for the information. Can you tell me whether the mAP result you report is on the test file or on the 5000-image validation eval?
@huihua-season D0 trained again, achieved 32.0% mAP; batch 12 per GPU, 8 × 1080 Ti, 130 epochs.
That is impressive. How did you get from 26% mAP to 32% mAP? Could you tell me what adjustments you made, if it is convenient? Thank you again.
@huihua-season D0 trained again, achieved 32.0% mAP; batch 12 per GPU, 8 × 1080 Ti, 130 epochs.
Wow, that is an impressive gain :+1: Can you make that model available somehow?
Can you reproduce the results reported in the EfficientDet paper?