openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

Data batch channel count (1) does not match filter input channel count (256) #2689

Closed EnricoBeltramo closed 3 years ago

EnricoBeltramo commented 4 years ago
System information (version)
Detailed description

I successfully converted a model from PyTorch to ONNX to OpenVINO; however, when I load the model for inference, I get the following error:

Check 'Dimension::merge(merged_channel_count, data_channel_count, filter_input_channel_count)' failed at ngraph/core/src/validation_util.cpp:341: While validating node 'v1::ConvolutionIE ConvolutionIE_11300 (188[0]:f32{1,256,29,29}, 185[0]:f32{1,256,5,5}) -> (dynamic?)' with friendly_name 'ConvolutionIE_11300': Data batch channel count (1) does not match filter input channel count (256).

I don't understand why this fails, because both tensors have the same batch dimension (1).

I attach the XML of the OpenVINO model.

Steps to reproduce

1. Converted the model from PyTorch to ONNX
2. Converted the model from ONNX to OpenVINO
3. Loaded the model for inference (a minimal loading sketch is shown after this list)
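For reference, a minimal sketch of the loading step, using the 2021.1 Inference Engine Python API (the IR file names are placeholders for the attached model):

```python
from openvino.inference_engine import IECore

# Read the converted IR; the shape-validation error above is raised
# while the network graph is read and validated, before any inference.
ie = IECore()
net = ie.read_network(model="mobilenethead.xml", weights="mobilenethead.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```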

Issue submission checklist

mobilenethead.zip

jgespino commented 4 years ago

Hi @EnricoBeltramo

Do you see the same issue with the latest OpenVINO 2021.1 release? Please provide the full Model Optimizer command used to convert the model; from the XML it doesn't look like --input_shape or --batch was passed. Could you try specifying one of those two parameters?
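For example, a sketch of the two options (the model path and shape below are placeholders, not values taken from your model):

```sh
# Option 1: fix only the batch dimension
python mo.py --input_model model.onnx --batch 1

# Option 2: pin the full input shape explicitly
python mo.py --input_model model.onnx --input_shape [1,3,224,224]
```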

Regards, Jesus

EnricoBeltramo commented 4 years ago

> Hi @EnricoBeltramo
>
> Do you see the same issue with the latest OpenVINO 2021.1 release? Please provide the full Model Optimizer command used to convert the model; from the XML it doesn't look like --input_shape or --batch was passed. Could you try specifying one of those two parameters?
>
> Regards, Jesus

Yes, the OpenVINO version is 2021.1.

I tested different commands, but I always run into a similar error. I tried setting the batch size:

python mo.py --input_model '/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx' --output_dir /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/ --data_type FP16 --batch 1

Model Optimizer arguments: Common parameters:

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.xml
[ SUCCESS ] BIN file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.bin
[ SUCCESS ] Total execution time: 26.08 seconds.
[ SUCCESS ] Memory consumed: 370 MB.

and setting all input shapes explicitly (the model has 2 inputs):

python mo.py --input_model '/home/ulix/Progetti/pysot/siamrpnmobilenet.onnx' --output_dir /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/ --data_type FP16 --input_shape [1,3,256,7,7],[1,3,224,224] --input z,x

Model Optimizer arguments: Common parameters:

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.xml
[ SUCCESS ] BIN file: /home/ulix/Progetti/pysot/openvinosiamrpnmobilenet/siamrpnmobilenet.bin
[ SUCCESS ] Total execution time: 28.44 seconds.
[ SUCCESS ] Memory consumed: 368 MB.

I attach the XML file of the latest generated version: siamrpnmobilenet.zip

Here is a link to the ONNX model and the converted version: https://drive.google.com/file/d/1HlRnqdXd9Ziq5y8bTLTHsyziQQbCczRe/view?usp=sharing
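To double-check which input shapes actually ended up in the exported graph, the ONNX file can be inspected directly (a sketch, assuming the onnx Python package and the file name above):

```python
import onnx

# Print the name and static shape of every graph input;
# dim_value is 0 for dimensions left dynamic at export time.
model = onnx.load("siamrpnmobilenet.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```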

And the error (I slightly changed the model in order to reduce the number of inputs) is:

Check 'Dimension::merge(merged_channel_count, data_channel_count, filter_input_channel_count)' failed at ngraph/core/src/validation_util.cpp:341: While validating node 'v1::ConvolutionIE ConvolutionIE_16773 (1125[0]:f32{1,256,25,25}, 1122[0]:f32{1,256,5,5}) -> (dynamic?)' with friendly_name 'ConvolutionIE_16773': Data batch channel count (1) does not match filter input channel count (256).

If I convert only the backbone part (with only the second input, [1,3,224,224]), it works fine.

EnricoBeltramo commented 4 years ago

Is there any news about this? I was able to run the network successfully using ONNX Runtime with the OpenVINO execution provider, so I suppose the network itself is fine and can run on OpenVINO. Maybe the problem is in the model conversion, or in how the model is loaded for OpenVINO inference. I would like to use native OpenVINO so I can better take advantage of its optimizations.
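For comparison, a sketch of the ONNX Runtime path that does work (assuming an onnxruntime build with the OpenVINO execution provider enabled):

```python
import onnxruntime as ort

# Run the original ONNX model through the OpenVINO execution provider,
# falling back to the default CPU provider if OpenVINO is unavailable.
sess = ort.InferenceSession(
    "siamrpnmobilenet.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # shows which providers were actually loaded
```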

luoyiroy commented 3 years ago

Hi @EnricoBeltramo This issue has been fixed by #3056. You can now run inference on the model with the benchmark app from master to verify it.
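A sketch of that verification step, assuming the Python benchmark app from a master build and the IR file generated earlier in this thread:

```sh
# Run the converted IR through the benchmark app on CPU
python benchmark_app.py -m siamrpnmobilenet.xml -d CPU
```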