@kfxw
@DuZzzs Hi! The inference code is not optimized for speed; some custom Python ops are used, and they are the bottleneck. Only the training speed is guaranteed in this code. As for the accuracy, I used 'coco_train2014 + coco_valminusminival2014' for training, which is the common practice. I remember the mAP fluctuated roughly between 36.8 and 37.2. You can try this training setting.
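(For reference, a minimal sketch of building that combined training split with torchvision-style `CocoDetection` datasets; the directory layout, annotation file names, and use of `ConcatDataset` are assumptions here, not taken from this repo's own data loader.)

```python
# Minimal sketch (assumption: torchvision-style COCO datasets and the standard
# COCO 2014 annotation files; adapt paths/wrappers to the actual repo).
from torch.utils.data import ConcatDataset
from torchvision.datasets import CocoDetection

train2014 = CocoDetection(
    root="data/coco/train2014",
    annFile="data/coco/annotations/instances_train2014.json",
)
valminusminival2014 = CocoDetection(
    root="data/coco/val2014",
    annFile="data/coco/annotations/instances_valminusminival2014.json",
)

# 'coco_train2014 + coco_valminusminival2014': the common ~118k-image train split.
train_set = ConcatDataset([train2014, valminusminival2014])
```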
@kfxw ok. Thank you very much.
The results of my training are far from those of the official code.

modified:
Results (tested on coco minival2014):
- my result: mAP = 34.7, inference time: 167 ms/image
- official result: mAP = 37.1, inference time: 50 ms/image

Is this a problem with the implementation, or with my setup?
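(As a sanity check on the mAP number, here is a minimal sketch of evaluating detections on minival2014 with pycocotools; the annotation path and the result-file name are placeholders, not from this repo.)

```python
# Minimal sketch: COCO-style bbox evaluation on minival2014.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("data/coco/annotations/instances_minival2014.json")
coco_dt = coco_gt.loadRes("detections_minival2014.json")  # model outputs in COCO result format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # first printed line is the COCO mAP, i.e. AP@[0.50:0.95]
```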