Open maxlytkin opened 3 years ago
1. First, I would suggest that the user install the latest OpenVINO version (2021.2): https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html
2. I see that the user is using an MNIST ONNX model. The MNIST model from the ONNX model zoo is usually a static model with an input shape of 1x1x28x28, and we can't do batching on a static model. I hope the user is using a dynamic ONNX model with an input shape such as Nx1x28x28; only with a dynamic model can we do batch inferencing. I downloaded the MNIST ONNX model from here: https://github.com/onnx/models/tree/master/vision/classification/mnist
3. Assuming the user is using a dynamic model: the code above uses batch_size=50, which might be a little too much for a MyriadX device (memory limitation). I would suggest using a smaller batch size; see the sketch right after this point for checking the input shape before picking one.
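To make points 2 and 3 concrete, here is a minimal sketch (assuming the ONNX Runtime C++ API used elsewhere in this thread; the model path and the batch size of 4 are just placeholders) that reads the model's input shape and checks whether the batch dimension is dynamic before batching:

```cpp
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "mnist-shape-check");
  Ort::SessionOptions sessionOptions;
  Ort::Session session(env, "mnist.onnx", sessionOptions);  // placeholder model path

  // Shape of the first input; a dynamic (free) dimension is reported as -1.
  Ort::TypeInfo typeInfo = session.GetInputTypeInfo(0);
  auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo();
  std::vector<int64_t> shape = tensorInfo.GetShape();

  std::cout << "Input shape:";
  for (int64_t d : shape) std::cout << " " << d;  // e.g. "-1 1 28 28" (dynamic) vs "1 1 28 28" (static)
  std::cout << std::endl;

  // Batch only if the first dimension is dynamic, and keep the batch small on MyriadX.
  int64_t batchSize = (shape.at(0) == -1) ? 4 : 1;  // 4 is an arbitrary small example
  std::cout << "Chosen batch size: " << batchSize << std::endl;
  return 0;
}
```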
4. Currently, the OpenVINO EP is built as a shared library by default; in addition, we need to pass the --build_shared_lib flag while building ONNX Runtime with the OpenVINO EP.
OpenVINO-EP Build Instructions Page: https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#openvino
OpenVINO-EP Docs: https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/OpenVINO-ExecutionProvider.md
So the build command would now look something like this: ./build.sh --config RelWithDebInfo --use_openvino MYRIAD_FP16 --build_shared_lib
After the EP is built successfully, navigate to ../onnxruntime/build/Linux/RelWithDebInfo/. The following shared libraries will now be present there:
libonnxruntime_providers_openvino.so
libonnxruntime_providers_shared.so
libonnxruntime.so.1.6.0 (this one is only produced when the --build_shared_lib flag is used during the build)
5. There is no need to create a separate Ort::MemoryInfo object specific to OpenVINO. As you are already appending the OpenVINO execution provider through the C++ API (sessionOptions.AppendExecutionProvider_OpenVINO(ovOptions);), that is taken care of for you; see the sketch below.
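As an illustration of point 5, here is a minimal sketch (the ovOptions name and the MYRIAD_FP16 device type follow the snippets in this thread; the model path, the dummy input data, and the MNIST tensor names Input3/Plus214_Output_0 are assumptions on my side) that appends the OpenVINO EP and feeds the session a tensor backed by ordinary CPU memory, with no OpenVINO-specific Ort::MemoryInfo:

```cpp
#include <onnxruntime_cxx_api.h>
#include <array>
#include <iostream>
#include <vector>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "mnist-openvino");

  // Append the OpenVINO execution provider; device memory handling happens inside the EP.
  Ort::SessionOptions sessionOptions;
  OrtOpenVINOProviderOptions ovOptions{};
  ovOptions.device_type = "MYRIAD_FP16";  // same device type as the build flag above
  sessionOptions.AppendExecutionProvider_OpenVINO(ovOptions);

  Ort::Session session(env, "mnist.onnx", sessionOptions);  // placeholder model path

  // A plain CPU MemoryInfo is sufficient for inputs and outputs.
  Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

  std::array<int64_t, 4> inputShape{1, 1, 28, 28};      // static MNIST input shape
  std::vector<float> inputData(1 * 1 * 28 * 28, 0.0f);  // dummy image data
  Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
      memoryInfo, inputData.data(), inputData.size(), inputShape.data(), inputShape.size());

  const char* inputNames[] = {"Input3"};             // input name of the model-zoo MNIST model (assumption)
  const char* outputNames[] = {"Plus214_Output_0"};  // output name of the model-zoo MNIST model (assumption)
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             inputNames, &inputTensor, 1, outputNames, 1);

  const float* scores = outputs[0].GetTensorMutableData<float>();
  for (int i = 0; i < 10; ++i) std::cout << "class " << i << ": " << scores[i] << "\n";
  return 0;
}
```

The only device-specific part here is ovOptions; the input tensor stays in ordinary host memory, and ONNX Runtime moves the data to the MyriadX as needed.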
6. For your reference, I'm attaching a working C++ sample for the mnist.onnx model (static model) which I got from the model zoo: mnist_openvino_ep_sample.txt
Here are the steps I used to compile it on Linux (Ubuntu):
Step 1: compile (from ./onnxruntime/build/Linux/RelWithDebInfo/): g++ -o run_mnist mnist_openvino_sample.cpp -I ../../../include/onnxruntime/core/session/ -L ./ -lonnxruntime_providers_openvino -lonnxruntime_providers_shared -lonnxruntime
Step 2: run the binary: ./run_mnist
Prior to compiling, make sure all three shared libraries mentioned above are copied to /usr/lib/x86_64-linux-gnu/.
Here's the output from the MNIST model on the MyriadX device with the OpenVINO EP (the extra prints in the screenshot are there because I was running in debug mode):
Let me know if you need any more information or if you run into the issue again.
Hi. I'm creating this ticket on behalf of a Stack Overflow user who reported this issue while using the OpenVINO toolkit. This component (the OpenVINO Execution Provider) is not part of the OpenVINO toolkit itself. Original thread: https://stackoverflow.com/questions/65253014/memory-corruption-when-using-onnxruntime-with-openvino-on-the-intel-myriadx-and
Describe the bug
I'm trying to run inference on the Intel Neural Compute Stick 2 (MyriadX chip) connected to a Raspberry Pi 4B using ONNX Runtime and OpenVINO. I have everything set up: the OpenVINO provider is recognized by ONNX Runtime and I can see the Myriad in the list of available devices.
However, I always get some kind of memory corruption when trying to run inference on the Myriad, and I'm not sure where it is coming from. If I use the default CPU inference instead of OpenVINO, everything works fine. Maybe the way I'm creating the Ort::MemoryInfo object is incorrect.
Here is the code I'm using: