Closed: satpalsr closed this issue 2 years ago.
Hello @satpalsr,
Thank you for reaching out to OpenVINO!
Have you tried the master branch? I've been able to successfully generate the IR on the latest master, both without specifying input shapes (dynamic shapes) and with static shapes.
mo --input_model ~/Efficient_pose_op11.onnx --input input_1,input_2 --input_shape [1,512,512,3],[1,6]
OpenVINO runtime version: custom_master_20266dd0c309376bdefa487af936e979412d8595
Model Optimizer version: custom_master_20266dd0c309376bdefa487af936e979412d8595
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: ov/Efficient_pose_op11.xml
[ SUCCESS ] BIN file: ov/Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 148.01 seconds.
[ SUCCESS ] Memory consumed: 319 MB.
mo --input_model ~/Efficient_pose_op11.onnx
OpenVINO runtime version: custom_master_20266dd0c309376bdefa487af936e979412d8595
Model Optimizer version: custom_master_20266dd0c309376bdefa487af936e979412d8595
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /ov/Efficient_pose_op11.xml
[ SUCCESS ] BIN file: /ov/Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 4.62 seconds.
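To double-check what was generated, you can read the IR back and print the input shapes with the Python API (a minimal sketch for the 2022.1 runtime; adjust the XML path to wherever your IR landed):

from openvino.runtime import Core

core = Core()
model = core.read_model("Efficient_pose_op11.xml")
for inp in model.inputs:
    # Dynamic dimensions print as '?', static ones as concrete numbers
    print(inp.any_name, inp.get_partial_shape())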
Hey @andrei-kochin, can you please list the installation steps for building from the master branch on the Windows 10 platform?
Hi @satpalsr,
Please find the build instructions for Windows here: https://github.com/openvinotoolkit/openvino/wiki/BuildingForWindows
You can also try the latest dev package to run on CPU/GPU: https://pypi.org/project/openvino-dev/2022.1.0.dev20220215/
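If you only need the Python tools (mo, benchmark_app), installing that dev package with pip should be enough, for example:

pip install openvino-dev==2022.1.0.dev20220215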
I converted your model and ran it with benchmark_app on CPU.
mo --input_model Efficient_pose_op11.onnx --input input_1,input_2 --input_shape [1,512,512,3],[1,6]
OpenVINO runtime version: 2022.1.0-6682-121d59aa80a
Model Optimizer version: 2022.1.0-6682-121d59aa80a
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\Users\jgespino\Downloads\gh10582\Efficient_pose_op11.xml
[ SUCCESS ] BIN file: C:\Users\jgespino\Downloads\gh10582\Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 4.87 seconds.
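An invocation along these lines should reproduce the CPU run (a sketch; the exact flags used are not shown in this thread):

benchmark_app -m Efficient_pose_op11.xml -d CPU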
Regards, Jesus
Hey @jgespino. Thanks for the installation steps. I was able to follow them, but the build finished with several warnings and the following errors:
"D:\openvino\build\src\plugins\intel_gpu\src\runtime\openvino_intel_gpu_runtime.vcxproj" (default target) (65) -
>
(ClCompile target) ->
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu\runtime\memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu\runtime\memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu\runtime\memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
D:\openvino\src\plugins\intel_gpu\include\intel_gpu\runtime\memory.hpp(13,10): fatal error C1083: Cannot open
include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
time\openvino_intel_gpu_runtime.vcxproj]
6062 Warning(s)
14 Error(s)
I tried converting the ONNX model to IR and it worked:
OpenVINO runtime version: 2022.1.0-6682-121d59aa80a
Model Optimizer version: 2022.1.0-6682-121d59aa80a
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: D:\openvino\tools\mo\openvino\tools\mo\Efficient_pose_op11.xml
[ SUCCESS ] BIN file: D:\openvino\tools\mo\openvino\tools\mo\Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 5.09 seconds.
However, my final goal is to get a blob file, and I am unable to find "myriad_compile" or "compile_tool.exe" to do so. Previously I would have run this command:
myriad_compile -ip U8 -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.onnx -o Efficient_pose_op11.blob
@jgespino I am having problems building the compile_tool "main.cpp" into compile_tool.exe. Can you just share the binary if that would solve the problem?
Hi @satpalsr,
Apologies, I didn't realize you were trying to compile the blob for MYRIAD. You can download the 2022.1.0.dev20220215 package with the OpenVINO Runtime for C/C++, which includes the compile_tool.
C:\Users\user>cd Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215
C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215>setupvars.bat
Python 3.7.9
[setupvars.bat] OpenVINO environment initialized
C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215>cd tools\compile_tool
C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215\tools\compile_tool>compile_tool.exe -h
OpenVINO Runtime version ......... 2022.1.0
Build ........... 2022.1.0-6682-121d59aa80a
compile_tool [OPTIONS]
Common options:
-h Optional. Print the usage message.
-m <value> Required. Path to the XML model.
-d <value> Required. Specify a target device for which executable network will be compiled.
Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin.
Use "-d MULTI:<comma-separated_devices_list>" format to specify MULTI plugin.
The application looks for a suitable plugin for the specified device.
-o <value> Optional. Path to the output file. Default value: "<model_xml_file>.blob".
-c <value> Optional. Path to the configuration file.
-ip <value> Optional. Specifies precision for all input layers of the network.
-op <value> Optional. Specifies precision for all output layers of the network.
-iop "<value>" Optional. Specifies precision for input and output layers by name.
Example: -iop "input:FP16, output:FP16".
Notice that quotes are required.
Overwrites precision from ip and op options for specified layers.
-il <value> Optional. Specifies layout for all input layers of the network.
-ol <value> Optional. Specifies layout for all output layers of the network.
-iol "<value>" Optional. Specifies layout for input and output layers by name.
Example: -iol "input:NCHW, output:NHWC".
Notice that quotes are required.
Overwrites layout from il and ol options for specified layers.
-iml <value> Optional. Specifies model layout for all input layers of the network.
-oml <value> Optional. Specifies model layout for all output layers of the network.
-ioml "<value>" Optional. Specifies model layout for input and output tensors by name.
Example: -ioml "input:NCHW, output:NHWC".
Notice that quotes are required.
Overwrites model layout from iml and oml options for specified layers.
-ov_api_1_0 Optional. Compile model to legacy format for usage in Inference Engine API,
by default compiles to OV 2.0 API
MYRIAD-specific options:
-VPU_NUMBER_OF_SHAVES <value> Optional. Specifies number of shaves.
Should be set with "VPU_NUMBER_OF_CMX_SLICES".
Overwrites value from config.
-VPU_NUMBER_OF_CMX_SLICES <value> Optional. Specifies number of CMX slices.
Should be set with "VPU_NUMBER_OF_SHAVES".
Overwrites value from config.
-VPU_TILING_CMX_LIMIT_KB <value> Optional. Specifies CMX limit for data tiling.
Value should be equal or greater than -1.
Overwrites value from config.
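Your previous myriad_compile command should map onto compile_tool roughly like this (a sketch based on the options above; note that -m takes the IR XML):

compile_tool -d MYRIAD -ip U8 -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.xml -o Efficient_pose_op11.blob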
Let me know if that works for you.
Regards, Jesus
@jgespino Doesn't work. Check this:
D:\openvino_compile\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215\tools\compile_tool>compile_tool -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.xml -o Efficient_pose_op11.blob
OpenVINO Runtime version ......... 2022.1.0
Build ........... 2022.1.0-6682-121d59aa80a
Network inputs:
input_1 : u8 / [...]
input_2 : u8 / [...]
Network outputs:
filtered_detections/sink_port_6 : f16 / [...]
filtered_detections_1/sink_port_5 : f16 / [...]
filtered_detections_2/sink_port_4 : i32 / [...]
filtered_detections_3/sink_port_3 : f16 / [...]
filtered_detections_4/sink_port_2 : f16 / [...]
Function contains several inputs and outputs with one friendly name: Loop_30345
Here is the drive link for all model files.
Hi @satpalsr
I took a look at the model and it contains layers that are not supported by the MYRIAD plugin, such as the Loop op. A similar model that works on MYRIAD is available in the Open Model Zoo.
Please take a look at the following and let me know if it meets your project requirements: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/human-pose-estimation-3d-0001
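If you want to see exactly which layers are unsupported, you can query the model against the plugin; a minimal sketch with the 2022.1 Python API, assuming a machine where the MYRIAD plugin loads:

from openvino.runtime import Core

core = Core()
model = core.read_model("Efficient_pose_op11.xml")
# query_model maps each supported operation's friendly name to a device;
# anything missing from the map (e.g. the Loop op) is unsupported on MYRIAD
supported = core.query_model(model, "MYRIAD")
all_ops = {op.get_friendly_name() for op in model.get_ops()}
print(all_ops - set(supported))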
Regards, Jesus
Hey @jgespino, thanks for all the help. I'll look for alternatives to get the project done.
System information (version)
Detailed description
I want to convert the EfficientPose ONNX model into a blob, but I receive these errors with the following commands:
Error:
Extra info
The shared ONNX was converted using tf2onnx with opset 11. I have also converted the model with opset 14 (access the ONNX here); the error messages for it are very similar.
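For completeness, the tf2onnx conversion step looks roughly like this (a sketch; the SavedModel directory name is an assumption):

python -m tf2onnx.convert --saved-model efficientpose_saved_model --opset 11 --output Efficient_pose_op11.onnx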
Part of the error:
I have tried following the steps suggested in the issue, but the installation takes forever. Also, I am on the Windows 10 platform.
Steps to reproduce