[Open] Jucjiaswiss opened this issue 4 years ago
In detect.py, the default inference mode is CPU (the main function calls caffe.set_mode_cpu()); you should change it to caffe.set_mode_gpu().
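For reference, a minimal sketch of that change, assuming the standard pycaffe API; the device id 0 is an assumption, pick whichever GPU you actually use:

```python
import caffe

# Switch to GPU inference instead of the default CPU mode used in detect.py.
caffe.set_device(0)   # assumed GPU id; adjust to your setup
caffe.set_mode_gpu()
# caffe.set_mode_cpu()  # the original default that causes the slow path
```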
@AnmachenGuo Thanks for the help! Yes, I tried that and it gave the right result (only 30 ms). But CPU inference is still slow (500 ms); what could be done to improve it?
Hi, I used models/mobilenetv2_voc/yolo_lite/train_pruned.prototxt for training and models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt for testing. The test script is based on examples/yolo/detect.py. The inference time is 500 ms, which is too slow. Is there anything wrong, or what can I do? Training: GPU, Ubuntu 16.04.
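In case it helps to reproduce the measurement, here is a minimal timing sketch using standard pycaffe calls; the .caffemodel path, the input blob name 'data', and the dummy input are assumptions, not taken from the repo:

```python
import time
import numpy as np
import caffe

# GPU mode; comment these two lines out to compare against CPU timing.
caffe.set_device(0)
caffe.set_mode_gpu()

# Deploy prototxt from the issue; the weights path below is an assumption.
net = caffe.Net('models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.prototxt',
                'models/mobilenetv2_voc/yolo_lite/yolov3_lite_deploy_pruned.caffemodel',
                caffe.TEST)

# Dummy input shaped like the network's input blob (assumed to be named 'data').
net.blobs['data'].data[...] = np.random.rand(
    *net.blobs['data'].data.shape).astype(np.float32)

# Warm-up run, then average a few forward passes.
net.forward()
start = time.time()
for _ in range(10):
    net.forward()
print('avg forward time: %.1f ms' % ((time.time() - start) / 10 * 1000))
```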