RakshithGB opened this issue 4 years ago
Compile OpenCV with OpenVINO backend: https://github.com/opencv/opencv/wiki/Intel's-Deep-Learning-Inference-Engine-backend
Then just use OpenCV for detection using cfg/weights files: https://docs.opencv.org/master/da/d9d/tutorial_dnn_yolo.html
If it works, then just use code like this for benchmarking: https://gist.github.com/YashasSamaga/48bdb167303e10f4d07b754888ddbdcf
I'm also using OpenCV to run Yolo models, and I'm getting very slow average processing times: 1300ms for YOLOv3 and 167ms (Ryzen 3 2200G @ 3.5GHz x 4).
I'm using a docker image that runs Linux, and I'm just pip installing OpenCV. The network is used in a simple python script, with the OpenCV DNN module.
I understood that:
1 - You need to convert your model using the OpenVINO Model Optimizer as described here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html
2 - Build OpenCV with OpenVINO backend instead of just pip installing it: https://github.com/opencv/opencv/wiki/Intel's-Deep-Learning-Inference-Engine-backend
3 - In your code, use:
net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);
right after instantiating your dnn.net model.
It is not the priority of my project right now, so if you can make it work, please share with us!
@AlexeyAB No that does not work. I've compiled OpenCV with OpenVino and TBB. When I enable it like this:
net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
And run with cfg/weights file directly, I get:
OpenCV(4.3.0) Error: Unspecified error (> Failed to initialize Inference Engine backend (device = CPU): Error loading XML file: C:\projects\demo\plugins.xml:1:0: File was not found ..\inference-engine\src\inference_engine\ie_core.cpp:148 ) in cv::dnn::InfEngineBackendNet::initPlugin, file C:\Users\Rakshith\Documents\OpenCV\opencv-4.3.0\modules\dnn\src\op_inf_engine.cpp, line 881
It does look like it expects the .bin and .xml files instead.
I also tried to use the integrated graphics by setting the target to OpenCL:
net.setPreferableTarget(cv::dnn::DNN_TARGET_OPENCL);
I end up with an unknown exception; that target does not even ask for the .bin and .xml files, it just fails with no specific error:
Falied with Exception: Unknown exception
I've compiled OpenVino 2020 R1 release with OpenCV 4.3.0.
@marcusbrito not sure if AMD CPUs get the heavy optimisations of cpu dnn. Just with TBB I'm able to run yolo-v3-tiny and the pruned version close to 60FPS on Intel Core i5 8279U. Will update you after I figure out OpenVINO.
It does look like it expects the .bin and .xml files instead.
Check the link in my previous response; it's the guide to convert your (.cfg, .weights) files to (.bin, .xml):
1 - You need to convert your model using the OpenVINO Model Optimizer as described here: https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html
@marcusbrito not sure if AMD cpus get the heavy optimisations of cpu dnn. Just with TBB I'm able to run yolo-v3-tiny and the pruned version close to 60FPS on Intel Core i5 8279U. Will update you after I figure out OpenVINO.
Is even normal OpenCV, without OpenVINO, not optimised for AMD CPUs?
Can you please share how you are using the net? Because I'm running:
import cv2 as cv

net = cv.dnn_DetectionModel('yolov3-tiny.cfg', 'yolov3-tiny.weights')
net.setInputSize(416, 416)
net.setInputScale(1.0 / 255)
net.setInputSwapRB(True)

frame = cv.imread('example.jpg')

with open('coco.names', 'rt') as f:
    names = f.read().rstrip('\n').split('\n')

classes, confidences, boxes = net.detect(frame, confThreshold=0.1, nmsThreshold=0.4)
print(classes, confidences, boxes)
As in: https://github.com/opencv/opencv/pull/17185 and getting those really poor results.
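To get a stable latency number out of a script like the one above, it helps to warm up first and then average over many runs, in the spirit of the benchmarking gist linked earlier. A minimal sketch; `net` is any object with a `detect` method such as `cv2.dnn_DetectionModel`, and `frame` a loaded image.

```python
import time

def benchmark(net, frame, runs=50, warmup=5):
    # warm-up passes: backend initialization and caching happen here
    for _ in range(warmup):
        net.detect(frame, confThreshold=0.1, nmsThreshold=0.4)
    # timed passes
    start = time.perf_counter()
    for _ in range(runs):
        net.detect(frame, confThreshold=0.1, nmsThreshold=0.4)
    return (time.perf_counter() - start) / runs * 1000.0  # mean ms per frame
```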
It does look like it expects the .bin and .xml files instead.
I can successfully use yolov4.cfg and yolov4.weights files directly without any conversion for inference on CPU, GPU, VPU, both with OpenVINO and without it.
1 - You need to convert your model using the OpenVINO Model Optimizer as described here:
No. If you use OpenCV for inference, then you do not need to convert your model. Use the cfg/weights files directly in OpenCV-dnn.
I'm also using OpenCV to run Yolo models, and I'm getting very slow average processing times: 1300ms for YOLOv3 and 167ms (Ryzen 3 2200G @ 3.5GHz x 4).
YOLOv4 512x512 (leaky FP32) achieves 3.5 FPS (285ms latency batch=1) on CPU Core i7-6500k: https://github.com/AlexeyAB/darknet/issues/5079
And YOLOv4 512x512 (Mish, batch=4 FP16) achieves 190 FPS on RTX 2080Ti https://gist.github.com/YashasSamaga/48bdb167303e10f4d07b754888ddbdcf
@AlexeyAB What version of OpenCV and OpenVINO are you on? Did you compile from source or did you use the ready binaries supplied by Intel?
I end up with those errors mentioned previously if I pass the weights and config file directly. Do you suspect something wrong in my steps?
OpenCV 4.4.0-pre, compiled from source; OpenVINO 2020.R3, Myriad target.
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
Input 416x416
yolov3: 550 ms
yolov3-tiny-prn: 168 ms
yolov3-tiny: 128 ms
yolov4: 940 ms
efnet-coco (efficientnet-b0): 395 ms
@ausk did you convert the weights file?
@RakshithGB No, the opencv dnn module (with the Myriad target) supports loading .cfg and .weights files directly.
Maybe you just want to use OpenVINO with the opencv dnn module, but that's a different question.
Hi,
Could you please provide an example of how to run the yolo-v3-tiny and yolo-v3-tiny-prn models on OpenCV with OpenVINO optimisation? Ideally I want to run a study on different CPUs and integrated Intel graphics cards and publish the performance results. I think this would help a lot of people, since many want to run in real time on a CPU or the available integrated GPU.