xboot / libonnx

A lightweight, portable pure C99 onnx inference engine for embedded devices with hardware acceleration support.
MIT License

TensorFlow model with opset 12 seems to crash when loaded #12

Closed Planet-Patrick closed 3 years ago

Planet-Patrick commented 3 years ago

I have a model converted from TensorFlow (via tf2onnx.convert) that uses opset 12. The model opens fine in Netron and elsewhere, but crashes somewhere in Concat_reshape when I try to load it with onnx_context_alloc_from_file. I tried compiling for both x86 and x64 with the same result.

Here are the model properties as viewed through Netron: [screenshot of model properties]

Opening the models supplied in the libonnx test directory seemed to work fine. Do you have any suggestions for how to get this working? Thanks.

jianjunjiang commented 3 years ago

Please check this document: https://github.com/xboot/libonnx/blob/master/documents/the-supported-operator-table.md

Planet-Patrick commented 3 years ago

@jianjunjiang Thanks. My model does indeed contain LSTM, which is not listed there as supported. From the readme I thought that libonnx supported all of opset 14, but now I take it that's not the case. Is that correct? So libonnx in fact only supports a subset of opset 14?

Planet-Patrick commented 3 years ago

@jianjunjiang Actually, it looks like there's no actual LSTM node in the exported model, but instead its constituents, such as Loop, which I see is also not supported.

Planet-Patrick commented 3 years ago

@jianjunjiang Is there an easy way to see when libonnx hits an unsupported operator? I'm considering finding all of the missing ones and trying to add them myself.
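One way to find all unsupported operators up front, rather than waiting for the loader to crash, is to diff the model's operator set against the supported-operator table. A minimal sketch in Python (the supported set below is illustrative, not the real libonnx table; with the `onnx` Python package installed, a model's op types can be collected via `{n.op_type for n in onnx.load("model.onnx").graph.node}`):

```python
def unsupported_ops(model_ops, supported):
    """Return the operators used by the model but absent from the
    supported set, sorted for stable output."""
    return sorted(set(model_ops) - set(supported))

# Illustrative data mirroring this thread: Loop is not supported.
supported = {"Add", "Concat", "Reshape", "MatMul", "Relu"}
model_ops = {"Concat", "Reshape", "Loop", "MatMul"}
print(unsupported_ops(model_ops, supported))  # → ['Loop']
```

Each operator in the resulting list would then need an implementation added to libonnx before the model can load.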

jianjunjiang commented 3 years ago

In that document, if an operator is not checked, it is not supported. Thanks for your research. the-supported-operator-table