openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

Stage node onnx::Cast_3092 (Greater) types check error: input #0 has type S32, but one of [FP16] is expected; IR Conversion Error #16105

Closed: Simardeep27 closed this issue 1 year ago

Simardeep27 commented 1 year ago

I am trying to convert a custom ONNX model to OpenVINO IR. The conversion succeeds, but the model gives the following error when running on the MYRIAD device:

```
super().compile_model(model, device_name, {} if config is None else config)
RuntimeError: onnx::Cast_3092 of type Greater: [ GENERAL_ERROR ] C:\j\workspace\private-ci\ie\build-windows-vs2019@3\b\repos\openvino\src\plugins\intel_myriad\graph_transformer\src\stages\eltwise.cpp:164 Stage node onnx::Cast_3092 (Greater) types check error: input #0 has type S32, but one of [FP16] is expected
```

I inspected the code where this error is generated: it is a basic comparison of two tensors whose output is of Bool type (which is the expected data type). So why does the MYRIAD device expect the FP16 data type here? Is there any way to solve this?
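
For reference, a minimal sketch of the compile step implied by the traceback, assuming the converted IR sits at `model.xml` (a placeholder path):

```python
from openvino.runtime import Core  # OpenVINO 2022.x Python API

core = Core()
# Read the converted IR (placeholder path)
model = core.read_model("model.xml")
# Compiling for MYRIAD is the step that raises the type-check error above
compiled = core.compile_model(model, "MYRIAD")
```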

Wan-Intel commented 1 year ago

The VPU plugin supports the FP16 model format only. Please refer to Supported Model Formats in Supported Devices.
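
As an illustration, converting the ONNX model to an FP16 IR might look like the sketch below; the paths are placeholders, and the exact parameter depends on the release (older Model Optimizer versions used the `--data_type FP16` CLI flag, newer ones expose `compress_to_fp16`):

```python
from openvino.tools.mo import convert_model  # available in OpenVINO 2022.1+
from openvino.runtime import serialize

# Convert with FP16 weights, which is what the VPU plugin expects
ov_model = convert_model("model.onnx", compress_to_fp16=True)
serialize(ov_model, "model_fp16.xml")
```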

Wan-Intel commented 1 year ago

Hi Simardeep27, just wanted to follow up and see whether the issue has been resolved.

Simardeep27 commented 1 year ago

Hi @Wan-Intel, I was able to convert the model by modifying the IR XML file and adding Convert layers where necessary. That issue is resolved, but when I try to load the model on MYRIAD, the process freezes and the model fails to load even after 6-7 hours. What could be the reason for this? Could it be due to the complexity of the model?
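
An equivalent fix can also be applied on the ONNX side before conversion, instead of hand-editing the IR XML: insert Cast nodes in front of the offending comparison. A minimal sketch with the `onnx` Python package; the file paths are placeholders, and the node name is taken from the error log:

```python
import onnx
from onnx import helper, TensorProto

model = onnx.load("model.onnx")
graph = model.graph

# Locate the offending Greater node (name taken from the error log)
idx, greater = next(
    (i, n) for i, n in enumerate(graph.node)
    if n.op_type == "Greater" and n.name == "onnx::Cast_3092")

# Cast both inputs to FP16 so the element types match what the plugin expects
for i in range(len(greater.input)):
    src = greater.input[i]
    casted = f"{src}_fp16"
    cast = helper.make_node("Cast", inputs=[src], outputs=[casted],
                            to=TensorProto.FLOAT16,
                            name=f"{greater.name}_cast_{i}")
    greater.input[i] = casted
    graph.node.insert(idx, cast)  # keep the Cast before the Greater node

onnx.save(model, "model_patched.onnx")
```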

Wan-Intel commented 1 year ago

Hi Simardeep27, when loading the model on the CPU with the Benchmark C++ Tool, the process freezes too.


We'll further investigate this and update as soon as possible.
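
One way to tell a genuine hang from very slow (but finite) model compilation is to time the compile step directly; a minimal sketch, with the model path as a placeholder:

```python
import time
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path

# Timing compile_model separates "stuck forever" from "just very slow"
start = time.perf_counter()
compiled = core.compile_model(model, "CPU")  # or "MYRIAD"
print(f"compile_model took {time.perf_counter() - start:.1f} s")
```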

Simardeep27 commented 1 year ago

Hi @Wan-Intel, thanks, that would be really helpful. Also, I was able to eliminate the bad allocation error by reducing the input size. The model now loads on the CPU but still freezes on MYRIAD.
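
Reducing the input size can also be done without re-exporting the model, by reshaping the IR before compilation; a sketch assuming a hypothetical input name and a smaller shape:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder path

# "input" and the smaller spatial size are assumptions; reshaping before
# compilation reduces the memory needed at load time
model.reshape({"input": [1, 3, 256, 256]})
compiled = core.compile_model(model, "CPU")
```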

avitial commented 1 year ago

@Simardeep27 it looks like you found a workaround by including Convert operations in the model. Does the modified model (with the Convert operations) load on CPU/GPU? And does the model with the smaller input size also load on GPU?

The freeze/hang on MYRIAD might be caused by the long execution of model compilation (a large amount of data), or by the network containing operations that are not supported by the MYRIAD plugin.
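
A quick way to check the second possibility is `query_model`, which reports which operations the plugin can handle; a minimal sketch, with the model path as a placeholder:

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# query_model returns a mapping of supported op friendly names to the device
supported = core.query_model(model, "MYRIAD")
unsupported = [op.get_friendly_name() for op in model.get_ops()
               if op.get_friendly_name() not in supported]
print("Ops not supported on MYRIAD:", unsupported)
```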

avitial commented 1 year ago

Closing this; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic.