openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

The node has more than 1 input. Cannot infer shapes or values for node. #10582

Closed: satpalsr closed this issue 2 years ago

satpalsr commented 2 years ago
System information (version)
Detailed description

I want to convert the EfficientPose ONNX model into a blob, but I receive the following errors with these commands:

  1. Onnx to IR conversion
    python "C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer\mo_onnx.py" --input_model Efficient_pose_op11.onnx --input input_1,input_2 --input_shape [1,512,512,3],[1,6] --output filtered_detections,filtered_detections_1,filtered_detections_2,filtered_detections_3,filtered_detections_4 --model_name Efficient_pose_op11 --data_type FP32 --output_dir .

    Error:

    [ ERROR ]  Cannot infer shapes or values for node "where_op_added__146".
    [ ERROR ]  Cannot infer `where_op_added__146` due to both order and reverse_order was set
    [ ERROR ]
    [ ERROR ]  It can happen due to bug in custom shape infer function <function Transpose.infer at 0x000001DB98253CA0>.
    [ ERROR ]  Or because the node inputs have incorrect values/shapes.
    [ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
    [ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
    [ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.MoveConstToLoopBody.MoveConstToLoopBody'>): Stopped shape/value propagation at "where_op_added__146" node.
    For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)
  2. Onnx to Blob conversion
    myriad_compile -ip U8 -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.onnx -o Efficient_pose_op11.blob

    Error

    [ GENERAL_ERROR ]
    C:\j\workspace\private-ci\ie\build-windows-vs2019\b\repos\openvino\inference-engine\src\vpu\common\src\ngraph\transformations\extract_dynamic_batch\extract_dynamic_batch.cpp:217 Encountered constant StatefulPartitionedCall/efficientpose_prediction/rotation_net/rotation-2/separable_conv2d/ReadVariableOp_1:0[0]:f32{64,64,1,1} that has 2 consumers (v1::Convolution Convolution_11034 (StatefulPartitionedCall/efficientpose_prediction/rotation_net/rotation-2/separable_conv2d/depthwise:0[0]:f32{?,64,64,64}, StatefulPartitionedCall/efficientpose_prediction/rotation_net/rotation-2/separable_conv2d/ReadVariableOp_1:0[0]:f32{64,64,1,1}) -> (f32{?,64,64,64}) and v1::Convolution Convolution_7239 (StatefulPartitionedCall/efficientpose_prediction/rotation_net/rotation-2_2/separable_conv2d/depthwise:0[0]:f32{1,64,16,16}, StatefulPartitionedCall/efficientpose_prediction/rotation_net/rotation-2/separable_conv2d/ReadVariableOp_1:0[0]:f32{64,64,1,1}) -> (f32{1,64,16,16})) with different split configurations
    Extra info

    The shared ONNX model was converted using tf2onnx with opset 11. I have also converted the model with opset 14 (access onnx here); the error messages for it are very similar.

    python "C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer\mo_onnx.py" --input_model phi_0_linemod_best_ADD.onnx --input input_1,input_2 --input_shape [1,512,512,3],[1,6] --output filtered_detections,filtered_detections_1,filtered_detections_2,filtered_detections_3,filtered_detections_4 --model_name phi_0_linemod_best_ADD --data_type FP32 --output_dir .

    Part of the error output:

    [ ERROR ]  The ExpandDims node PadV2/paddings_Unsqueeze__256 has more than 1 input
    [ ERROR ]  Cannot infer shapes or values for node "StatefulPartitionedCall/efficientpose_prediction/translation/ExpandDims_5".
    [ ERROR ]  Wrong number of inputs to the layer StatefulPartitionedCall/efficientpose_prediction/translation/ExpandDims_5
    [ ERROR ]
    [ ERROR ]  It can happen due to bug in custom shape infer function <function ExpandDims.infer at 0x00000237C2BA2430>.
    [ ERROR ]  Or because the node inputs have incorrect values/shapes.
    [ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
    [ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
    [ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "StatefulPartitionedCall/efficientpose_prediction/translation/ExpandDims_5" node.
    For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)

I have tried the following steps as suggested in the issue, but the installation takes forever. Also, I am on the Windows 10 platform.

git clone https://github.com/openvinotoolkit/openvino
cd openvino/
git submodule update --init --recursive
chmod +x install_build_dependencies.sh
./install_build_dependencies.sh
pip3 install cython
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=/usr/bin/python3.6 -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so -DPYTHON_INCLUDE_DIR=/usr/include/python3.6 -DCMAKE_INSTALL_PREFIX=/opt/intel/openvino_master ..
make --jobs=$(nproc --all)
sudo make install
source /opt/intel/openvino_master/setupvars.sh
cd ../tools/mo
sudo -H pip3 install --upgrade pip
python3 -m pip install -r requirements.txt
sudo python3 setup.py install
mo --version
Steps to reproduce
  1. Download the onnx file from description links.
  2. Follow commands listed in description.
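For reference, the two-input Model Optimizer invocation above can also be assembled programmatically, which makes the comma-separated `--input`/`--input_shape` pairing easier to get right. A minimal sketch using only the Python standard library; the script path and input/output names are the ones from this report, so adjust them for your setup:

```python
# Sketch: build the mo_onnx.py command line for a two-input model.
# Path and tensor names are the ones from this report; adjust as needed.
import subprocess

MO_SCRIPT = r"C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer\mo_onnx.py"

def build_mo_command(model_path: str, model_name: str) -> list:
    """Assemble the mo_onnx.py invocation.

    Note: --input takes comma-separated input names, and --input_shape
    takes one bracketed shape per input, comma-separated, in the same order.
    """
    outputs = ",".join(
        ["filtered_detections"]
        + ["filtered_detections_%d" % i for i in range(1, 5)]
    )
    return [
        "python", MO_SCRIPT,
        "--input_model", model_path,
        "--input", "input_1,input_2",
        "--input_shape", "[1,512,512,3],[1,6]",
        "--output", outputs,
        "--model_name", model_name,
        "--data_type", "FP32",
        "--output_dir", ".",
    ]

cmd = build_mo_command("Efficient_pose_op11.onnx", "Efficient_pose_op11")
# subprocess.run(cmd, check=True)  # uncomment to actually invoke Model Optimizer
```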
andrei-kochin commented 2 years ago

Hello @satpalsr,

Thank you for reaching out to OpenVINO!

Have you tried the master branch? I've been able to successfully generate the IR on the latest master, both without specifying the input shape (dynamic shapes) and with static shapes.

mo  --input_model ~/Efficient_pose_op11.onnx  --input input_1,input_2 --input_shape [1,512,512,3],[1,6]
OpenVINO runtime version:       custom_master_20266dd0c309376bdefa487af936e979412d8595
Model Optimizer version:        custom_master_20266dd0c309376bdefa487af936e979412d8595
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: ov/Efficient_pose_op11.xml
[ SUCCESS ] BIN file: ov/Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 148.01 seconds. 
[ SUCCESS ] Memory consumed: 319 MB. 

mo --input_model ~/Efficient_pose_op11.onnx 
OpenVINO runtime version:       custom_master_20266dd0c309376bdefa487af936e979412d8595
Model Optimizer version:        custom_master_20266dd0c309376bdefa487af936e979412d8595

[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /ov/Efficient_pose_op11.xml
[ SUCCESS ] BIN file: /ov/Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 4.62 
satpalsr commented 2 years ago

Hey @andrei-kochin, can you please list the installation steps for building from the master branch on the Windows 10 platform?

jgespino commented 2 years ago

Hi @satpalsr,

Please find the build instructions for Windows here: https://github.com/openvinotoolkit/openvino/wiki/BuildingForWindows

You can also try the latest dev package to run on CPU/GPU: https://pypi.org/project/openvino-dev/2022.1.0.dev20220215/

I converted your model and ran with benchmark_app on CPU.

mo --input_model Efficient_pose_op11.onnx --input input_1,input_2 --input_shape [1,512,512,3],[1,6]
OpenVINO runtime version:       2022.1.0-6682-121d59aa80a
Model Optimizer version:        2022.1.0-6682-121d59aa80a
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: C:\Users\jgespino\Downloads\gh10582\Efficient_pose_op11.xml
[ SUCCESS ] BIN file: C:\Users\jgespino\Downloads\gh10582\Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 4.87 seconds.

Regards, Jesus

satpalsr commented 2 years ago

Hey @jgespino. Thanks for the installation steps. I was able to complete the build with several warnings and the following errors:

       "D:\openvino\build\src\plugins\intel_gpu\src\runtime\openvino_intel_gpu_runtime.vcxproj" (default target) (65) -
       >
       (ClCompile target) ->
         D:\openvino\src\plugins\intel_gpu\include\intel_gpu/runtime/engine.hpp(22,10): fatal error C1083: Cannot open
       include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
       time\openvino_intel_gpu_runtime.vcxproj]
         D:\openvino\src\plugins\intel_gpu\include\intel_gpu\runtime\memory.hpp(13,10): fatal error C1083: Cannot open
       include file: 'oneapi/dnnl/dnnl.hpp': No such file or directory [D:\openvino\build\src\plugins\intel_gpu\src\run
       time\openvino_intel_gpu_runtime.vcxproj]
         [the same C1083 'oneapi/dnnl/dnnl.hpp' error repeats for engine.hpp and memory.hpp across the remaining errors]

    6062 Warning(s)
    14 Error(s)
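Since the failures above are confined to the intel_gpu plugin (the missing `oneapi/dnnl/dnnl.hpp` is its oneDNN dependency), one possible workaround when only CPU/MYRIAD targets are needed is to disable that plugin at configure time. This is a sketch only; the flag name varies between OpenVINO versions, so verify it against your checkout with `cmake -LA`:

```shell
# Hypothetical workaround: skip the GPU plugin if only CPU/MYRIAD are needed.
# ENABLE_INTEL_GPU is the flag name in recent versions; confirm for your tree.
cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON -DENABLE_INTEL_GPU=OFF ..
```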

I tried converting the ONNX model to IR and it worked.

OpenVINO runtime version:       2022.1.0-6682-121d59aa80a
Model Optimizer version:        2022.1.0-6682-121d59aa80a
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: D:\openvino\tools\mo\openvino\tools\mo\Efficient_pose_op11.xml
[ SUCCESS ] BIN file: D:\openvino\tools\mo\openvino\tools\mo\Efficient_pose_op11.bin
[ SUCCESS ] Total execution time: 5.09 seconds.

However, my final goal is to get a blob file, and I am unable to find "myriad_compile" or "compile_tool.exe" to do so. Previously I would have run this command:

myriad_compile -ip U8 -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.onnx -o Efficient_pose_op11.blob
satpalsr commented 2 years ago

@jgespino I am having problems building compile_tool's "main.cpp" into compile_tool.exe. Can you just share the file if that would solve the problem?

jgespino commented 2 years ago

Hi @satpalsr,

Apologies, I didn't realize you were trying to compile the blob for MYRIAD. You can download the 2022.1.0.dev20220215 package with OpenVINO Runtime for C/C++, which includes the compile_tool.

C:\Users\user>cd Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215
C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215>setupvars.bat
Python 3.7.9
[setupvars.bat] OpenVINO environment initialized

C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215>cd tools\compile_tool
C:\Users\user\Downloads\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215\tools\compile_tool>compile_tool.exe -h
OpenVINO Runtime version ......... 2022.1.0
Build ........... 2022.1.0-6682-121d59aa80a
compile_tool [OPTIONS]

 Common options:
    -h                                       Optional. Print the usage message.
    -m                           <value>     Required. Path to the XML model.
    -d                           <value>     Required. Specify a target device for which executable network will be compiled.
                                             Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin.
                                             Use "-d MULTI:<comma-separated_devices_list>" format to specify MULTI plugin.
                                             The application looks for a suitable plugin for the specified device.
    -o                           <value>     Optional. Path to the output file. Default value: "<model_xml_file>.blob".
    -c                           <value>     Optional. Path to the configuration file.
    -ip                          <value>     Optional. Specifies precision for all input layers of the network.
    -op                          <value>     Optional. Specifies precision for all output layers of the network.
    -iop                        "<value>"    Optional. Specifies precision for input and output layers by name.
                                             Example: -iop "input:FP16, output:FP16".
                                             Notice that quotes are required.
                                             Overwrites precision from ip and op options for specified layers.
    -il                          <value>     Optional. Specifies layout for all input layers of the network.
    -ol                          <value>     Optional. Specifies layout for all output layers of the network.
    -iol                        "<value>"    Optional. Specifies layout for input and output layers by name.
                                             Example: -iol "input:NCHW, output:NHWC".
                                             Notice that quotes are required.
                                             Overwrites layout from il and ol options for specified layers.
    -iml                         <value>     Optional. Specifies model layout for all input layers of the network.
    -oml                         <value>     Optional. Specifies model layout for all output layers of the network.
    -ioml                       "<value>"    Optional. Specifies model layout for input and output tensors by name.
                                             Example: -ioml "input:NCHW, output:NHWC".
                                             Notice that quotes are required.
                                             Overwrites layout from il and ol options for specified layers.
    -ov_api_1_0                              Optional. Compile model to legacy format for usage in Inference Engine API,
                                             by default compiles to OV 2.0 API

 MYRIAD-specific options:
      -VPU_NUMBER_OF_SHAVES      <value>     Optional. Specifies number of shaves.
                                             Should be set with "VPU_NUMBER_OF_CMX_SLICES".
                                             Overwrites value from config.

      -VPU_NUMBER_OF_CMX_SLICES  <value>     Optional. Specifies number of CMX slices.
                                             Should be set with "VPU_NUMBER_OF_SHAVES".
                                             Overwrites value from config.
      -VPU_TILING_CMX_LIMIT_KB   <value>     Optional. Specifies CMX limit for data tiling.
                                             Value should be equal or greater than -1.
                                             Overwrites value from config.

Let me know if that works for you.

Regards, Jesus

satpalsr commented 2 years ago

@jgespino Doesn't work. Check this:

D:\openvino_compile\w_openvino_toolkit_windows_dev_2022.1.0.dev20220215\tools\compile_tool>compile_tool -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6 -m Efficient_pose_op11.xml -o Efficient_pose_op11.blob
OpenVINO Runtime version ......... 2022.1.0
Build ........... 2022.1.0-6682-121d59aa80a
Network inputs:
    input_1 : u8 / [...]
    input_2 : u8 / [...]
Network outputs:
    filtered_detections/sink_port_6 : f16 / [...]
    filtered_detections_1/sink_port_5 : f16 / [...]
    filtered_detections_2/sink_port_4 : i32 / [...]
    filtered_detections_3/sink_port_3 : f16 / [...]
    filtered_detections_4/sink_port_2 : f16 / [...]
Function contains several inputs and outputs with one friendly name: Loop_30345

Here is the drive link for all model files.

jgespino commented 2 years ago

Hi @satpalsr

I took a look at the model, and it contains layers that are not supported by the MYRIAD plugin, such as the Loop op. We have a similar model available on the Open Model Zoo that works on MYRIAD.

Please take a look at the following and let me know if it meets your project requirements. https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/human-pose-estimation-3d-0001
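As an aside, one quick way to see which ops might trip up a plugin is to scan the graph's op types before compiling. A minimal sketch: the `UNSUPPORTED_ON_MYRIAD` set here is illustrative, not the plugin's real capability table, and the stand-in op list takes the place of loading the actual model with `onnx.load`:

```python
# Sketch: count op types in a graph that fall in a "not supported" set.
# UNSUPPORTED_ON_MYRIAD is an illustrative subset, not the real capability table.
from collections import Counter

UNSUPPORTED_ON_MYRIAD = {"Loop", "NonMaxSuppression"}  # hypothetical subset

def find_unsupported(op_types, unsupported=UNSUPPORTED_ON_MYRIAD):
    """Return {op_type: count} for op types present in the unsupported set."""
    counts = Counter(op_types)
    return {op: n for op, n in counts.items() if op in unsupported}

# With the real model you would gather op types like this:
#   import onnx
#   model = onnx.load("Efficient_pose_op11.onnx")
#   ops = [node.op_type for node in model.graph.node]
# Here, a stand-in list:
ops = ["Conv", "Loop", "Conv", "Relu", "Loop"]
print(find_unsupported(ops))  # -> {'Loop': 2}
```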

Regards, Jesus

satpalsr commented 2 years ago

Hey @jgespino, thanks for all the help. I'll look for alternatives to get the project done. Thank you.