eric612 / MobileNet-YOLO

A caffe implementation of MobileNet-YOLO detection network

Inference time in Python is too high #228

Open Jucjiaswiss opened 4 years ago

Jucjiaswiss commented 4 years ago

Hi, I used models/mobilenetv2_voc/yolo_lite/train_pruned.prototxt for training and models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt for testing, with examples/yolo/detect.py as a reference for the test script. The inference time is 500 ms, which is too slow. Is anything wrong, or what can I do? Training: GPU, Ubuntu 16.04.
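
For reference, a minimal sketch of how that 500 ms could be measured in isolation with pycaffe. The deploy path is taken from the comment above; the caffemodel filename and input shape are assumptions, and only `net.forward()` is timed so image preprocessing is excluded:

```python
import time
import numpy as np
import caffe

# Deploy prototxt from the comment above; the weights filename is hypothetical.
deploy = 'models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt'
weights = 'yolov3_lite_pruned.caffemodel'  # assumed filename, adjust to yours

caffe.set_mode_cpu()
net = caffe.Net(deploy, weights, caffe.TEST)

# Dummy input matching whatever shape the deploy prototxt declares.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)

net.forward()  # warm-up pass, not timed
start = time.time()
net.forward()
print('forward pass: %.1f ms' % ((time.time() - start) * 1000))
```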

AnmachenGuo commented 4 years ago

In detect.py the default inference mode is CPU (caffe.set_mode_cpu() in the main function); you should change it to caffe.set_mode_gpu().
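
A minimal sketch of the change in detect.py (the device index 0 is an assumption; pick whichever GPU you want to run on):

```python
import caffe

# Replace caffe.set_mode_cpu() in main() with:
caffe.set_device(0)  # GPU index, assumed 0 here
caffe.set_mode_gpu()
```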

Jucjiaswiss commented 4 years ago

@AnmachenGuo Thanks for the help! Yes, I tried that, and it gives the expected result (only 30 ms). But CPU inference is still slow (500 ms); what could be done to improve it?
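
One generic lever for the CPU path, not specific to this repo, is the thread count of the BLAS backend Caffe was compiled against. A sketch, assuming an OpenBLAS build (whether your Caffe uses OpenBLAS, and the right thread count, depend on your machine):

```python
import os

# Must be set before caffe (and its BLAS library) is loaded;
# OpenBLAS reads this at initialization time.
os.environ['OPENBLAS_NUM_THREADS'] = '4'  # example value; match your physical cores

import caffe
caffe.set_mode_cpu()
```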