PeterL1n / BackgroundMattingV2

Real-Time High-Resolution Background Matting
MIT License

ONNX model exception #20

Closed: livingbeams closed this issue 3 years ago

livingbeams commented 3 years ago

Trying to load the included model "onnx_mobilenetv2_hd.onnx" with OpenCV, we get the following exception:

```
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-52oirelq\opencv\modules\dnn\src\graph_simplifier.cpp:76: error: (-212:Parsing error) Input node with name 901 not found in function 'cv::dnn::Subgraph::getInputNodeId'
```

We get the same error with an ONNX model file generated with the provided script:

```shell
python export_onnx.py \
    --model-type mattingrefine \
    --model-checkpoint "pytorch_mobilenetv2.pth" \
    --model-backbone mobilenetv2 \
    --model-backbone-scale 0.25 \
    --model-refine-mode sampling \
    --model-refine-sample-pixels 80000 \
    --onnx-opset-version 12 \
    --onnx-constant-folding \
    --precision float32 \
    --output "model_mobilenetv2.onnx" \
    --validate
```

With the model "onnx_mobilenetv2_4k.onnx" we get a different error:

```
cv2.error: OpenCV(4.4.0) C:\Users\appveyor\AppData\Local\Temp\1\pip-req-build-52oirelq\opencv\modules\dnn\include\opencv2/dnn/shape_utils.hpp:222: error: (-215:Assertion failed) clamped.start < clamped.end in function 'cv::dnn::dnn4_v20200609::clamp'
```

We would like to convert the model to check performance when running with ONNX Runtime or OpenVINO, and the first step would be to get an ONNX model that can be opened with OpenCV.

PeterL1n commented 3 years ago

I never tried it with OpenCV. Why can't you run it directly in OnnxRuntime?

livingbeams commented 3 years ago

Thanks for your answer.

The point is that we would like to check inference performance with OpenVINO on devices without a dedicated GPU. The scripts that convert ONNX models to OpenVINO's IR format give this error:

```
[ ERROR ]  Cannot infer shapes or values for node "Conv_59".
[ ERROR ]  Data after padding has dimension less than window size. Possible reason of error is incorrectly specified model input shape(s).
[ ERROR ]
[ ERROR ]  It can happen due to bug in custom shape infer function <function Convolution.infer at 0x0000016D4CD394C8>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'extensions.middle.PartialInfer.PartialInfer'>): Stopped shape/value propagation at "Conv_59" node.
For more information please refer to Model Optimizer FAQ, question #38. (https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=38#question-38)
```

As a first step we tried to load the model with OpenCV, which gives the error above.

Loading the model with onnxruntime in Python works all right.
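For reference, the kind of sanity check we run in Python looks roughly like this (a minimal sketch: the `DEFAULT_DIMS` fallbacks for dynamic dimensions, including the 1920x1080 HD resolution, are our assumptions and should be adjusted to the actual input):

```python
import numpy as np

# Fallbacks for dynamic (named) dimensions; batch 1 and a 1920x1080 frame
# are assumptions for the HD matting model, not values read from the file.
DEFAULT_DIMS = {"batch": 1, "height": 1080, "width": 1920}

def make_dummy_input(shape, dtype=np.float32):
    """Build a zero-filled tensor, resolving any dynamic dims via DEFAULT_DIMS."""
    resolved = [d if isinstance(d, int) else DEFAULT_DIMS.get(d, 1) for d in shape]
    return np.zeros(resolved, dtype=dtype)

def check_model(path):
    """Load an ONNX model with onnxruntime and run it once on dummy inputs."""
    import onnxruntime as ort  # imported lazily; pip install onnxruntime
    sess = ort.InferenceSession(path)
    # Feed every declared input with a zero tensor of the declared shape.
    feeds = {i.name: make_dummy_input(i.shape) for i in sess.get_inputs()}
    for out, meta in zip(sess.run(None, feeds), sess.get_outputs()):
        print(meta.name, out.shape)
```

Querying `sess.get_inputs()` this way also prints the exact input names and shapes the model expects, which is useful when wiring up the same tensors from C++.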

We have tried to load the model in a C++ project with the onnxruntime NuGet package to check whether inference performance is sufficient for us, but we are stuck trying to create tensors with the correct shapes:

```cpp
const std::array<int64_t, 4> shape = { 1, 3, 1080, 1920 };
std::array<float, 1920 * 1080 * 3> values;
Ort::Value tensor = Ort::Value::CreateTensor(
    memory_info, values.data(), values.size(), shape.data(), shape.size());
```

This gives a runtime exception (it is the first time we have used onnxruntime, so we may be making some very basic mistake).
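For context, here are the sizes involved, checked with numpy (the HD shape is our assumption for this model): each float32 input is roughly 24 MB, which may matter if the buffer is stack-allocated rather than on the heap, and the flat buffer length must match the product of the shape exactly.

```python
import numpy as np

# NCHW shape of one HD input tensor: batch, channels, height, width.
shape = (1, 3, 1080, 1920)

# CreateTensor expects the flat buffer to hold exactly this many elements.
num_elements = int(np.prod(shape))
print(num_elements)  # 6220800

# Size in bytes for float32: about 24 MB per input, far beyond the default
# 1 MB stack on Windows if the buffer is a stack-allocated std::array.
print(num_elements * 4)  # 24883200
```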

Best regards.

PeterL1n commented 3 years ago

I am not quite familiar with onnxruntime in C++. But if Python is working, then C++ should definitely work.

ONNX support is experimental. There are definitely a lot of compatibility issues, and a lot more engineering that needs to be done, but we can't provide support for everything.

Another route is to see whether you can export ONNX from the TensorFlow model, and then convert that to OpenVINO.

PeterL1n commented 3 years ago

I made an update to the repo and added more compatibility options to export_onnx.py. Take a look at the comments at the top of the file.

However, it still doesn't fix the OpenCV import problem. I think that is OpenCV's problem and I can't help you with it. I hope those compatibility options can make the model work with OnnxRuntime.

livingbeams commented 3 years ago

Thank you very much for including these export options.

I will try all the different combinations and comment back here with the results in case it helps somebody else.