openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

OpenVINO does not support the following ONNX operations: com.microsoft.FusedConv [Bug] #13177

Closed PaulCahuana closed 1 year ago

PaulCahuana commented 1 year ago
Detailed description

I was trying to load the model with OpenVINO, but I got the following error: "RuntimeError: Check 'unknown_operators.empty()' failed at frontends/onnx/frontend/src/core/graph.cpp:133: OpenVINO does not support the following ONNX operations: com.microsoft.FusedConv"

Steps to reproduce

```python
from openvino.runtime import Core

ie = Core()
# Fails with: "OpenVINO does not support ... com.microsoft.FusedConv"
model = ie.read_model("lm_model3_opt.onnx")
```

So, do you have plans to add the "com.microsoft.FusedConv" layer?

campos537 commented 1 year ago

Hey guys, I have the same issue when trying to convert this OpenSee model using the OpenVINO Workbench.

andrei-kochin commented 1 year ago

Hello @PaulCahuana, @campos537,

Thank you for reaching out to OpenVINO!

This operation is not part of the standard ONNX opset; it comes from the Microsoft ONNX Runtime extended opset. The currently supported fused ops are listed here

We need to discuss internally how to properly proceed with this. Stay tuned!

CC @mlukasze @tomdol

Ref. 92500
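In the meantime, you can check which non-standard ops a model pulls in by scanning node domains with the onnx Python package. A minimal sketch (`list_nonstandard_ops` is an illustrative helper, and the filename is the model from this issue):

```python
import onnx

def list_nonstandard_ops(path):
    """Illustrative helper: list ops outside the default ONNX domain
    (e.g. com.microsoft), which OpenVINO's ONNX frontend may reject."""
    model = onnx.load(path)
    ops = set()
    for node in model.graph.node:
        if node.domain not in ("", "ai.onnx"):
            ops.add(f"{node.domain}.{node.op_type}")
    return sorted(ops)

# For the model from this issue this should report com.microsoft.FusedConv.
print(list_nonstandard_ops("lm_model3_opt.onnx"))
```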

PaulCahuana commented 1 year ago


Sure. Thanks for everything!

mbencer commented 1 year ago

Hello @PaulCahuana! Support for FusedConv was added in https://github.com/openvinotoolkit/openvino/pull/13553. The model lm_model3_opt.onnx can now be loaded and run. Tested via benchmark_app (./benchmark_app -shape [1,3,224,224] -m lm_model3_opt.onnx).

Please let me know if the change covers all your cases and whether we can close the issue.
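As a quick Python alternative to benchmark_app, a minimal sketch along the same lines (the input shape is taken from the benchmark_app command above; the output handling is generic):

```python
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("lm_model3_opt.onnx")  # no longer rejects FusedConv
compiled = core.compile_model(model, "CPU")

# Random input matching the shape used with benchmark_app above.
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = compiled([dummy])
print("inference ok:", [r.shape for r in results.values()])
```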

campos537 commented 1 year ago

Wow, that is really great! This model is awesome!

Apollo9999 commented 1 year ago

Detailed background information follows.

Microsoft ONNX Runtime is an open-source inference accelerator focused on ONNX models. It is the platform Vitis AI has integrated with to provide first-class support for ONNX models, which can be exported from a wide variety of training frameworks. It offers easy-to-use runtime APIs in Python and C++ and can run models without requiring the separate compilation phase that TVM requires. ONNX Runtime also includes a partitioner that can automatically partition work between the CPU and FPGA, further simplifying model deployment. Finally, it incorporates the Vitis AI quantizer in a way that does not require a separate quantization setup.
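To illustrate, the model from this issue can be run directly through ONNX Runtime, which implements the com.microsoft domain natively. A minimal sketch (the 1x3x224x224 input shape is assumed from the benchmark_app run above):

```python
import numpy as np
import onnxruntime as ort

# ONNX Runtime implements the com.microsoft domain natively,
# so FusedConv models load without any extra steps.
sess = ort.InferenceSession("lm_model3_opt.onnx",
                            providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed shape
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```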

com.microsoft.FusedConv

The fused convolution operator's schema is the same as Conv, except that it includes an activation attribute.

This version of the operator has been available since version 1 of the 'com.microsoft' operator set.

Attributes

activation : string
activation_params : list of floats
auto_pad : string
dilations : list of ints
group : int
kernel_shape : list of ints
pads : list of ints
strides : list of ints

Inputs (2 - 4)

X : T
W : T
B (optional) : T
Z (optional) : T

Outputs

Y : T

Type Constraints

T : tensor(float16), tensor(float), tensor(double)
Constrain input and output types to float tensors.
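To make the schema concrete, here is a minimal sketch that builds a one-node FusedConv model with the onnx helper API (all shapes, attribute values, and the output filename are illustrative):

```python
import onnx
from onnx import TensorProto, helper

# One FusedConv node: Conv with a fused Relu activation.
node = helper.make_node(
    "FusedConv",
    inputs=["X", "W"],          # B and Z are optional and omitted here
    outputs=["Y"],
    domain="com.microsoft",
    activation="Relu",
    kernel_shape=[3, 3],
    pads=[1, 1, 1, 1],
    strides=[1, 1],
)

graph = helper.make_graph(
    [node], "fused_conv_demo",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, 224, 224]),
        helper.make_tensor_value_info("W", TensorProto.FLOAT, [8, 3, 3, 3]),
    ],
    outputs=[
        helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 8, 224, 224]),
    ],
)

# Import both the default opset and version 1 of com.microsoft.
model = helper.make_model(graph, opset_imports=[
    helper.make_opsetid("", 13),
    helper.make_opsetid("com.microsoft", 1),
])
onnx.save(model, "fused_conv_demo.onnx")
```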

avitial commented 1 year ago

Closing this, as the PR adding support for the FusedConv op has been merged. Feel free to reopen or ask any questions related to this topic.