microsoft / EdgeML

This repository provides code for machine learning algorithms for edge devices developed at Microsoft Research India.

Error while converting .onnx model to .pb format #236

Closed mehreenjabeen closed 3 years ago

mehreenjabeen commented 3 years ago

Hi, I want to convert the ONNX model obtained from https://github.com/microsoft/EdgeML/tree/master/examples/pytorch/FastCells/KWS-training to .pb format. I followed this tutorial https://thenewstack.io/tutorial-import-an-onnx-model-into-tensorflow-for-inference/ but got the error below:

```
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for FastGRNN with domain_version of 9

==> Context: Bad node spec: input: "18" input: "rnn_list.0.cell.W" input: "rnn_list.0.cell.U" input: "rnn_list.0.cell.bias_gate" input: "rnn_list.0.cell.bias_update" input: "rnn_list.0.cell.zeta" input: "rnn_list.0.cell.nu" output: "19" op_type: "FastGRNN" attribute { name: "gate_nonlinearity" s: "sigmoid" type: STRING } attribute { name: "hidden_size" i: 128 type: INT } attribute { name: "update_nonlinearity" s: "tanh" type: STRING }
```

Please help me resolve this as soon as possible.

ShikharJ commented 3 years ago

@MJ10

MJ10 commented 3 years ago

Hi @mehreenjabeen, the ONNX model only defines the model; it does not contain information on how to run it. That is implemented by the runtime used to execute the model (ONNX Runtime is one example).

The ONNX model you generate uses a custom operator for the FastGRNN layer. For the scenario described in our README, we use ELL to compile the model for on-device execution; this works because FastGRNN support was added to ELL.

Running the ONNX model with TensorFlow as the runtime is not a scenario we currently support, and we have not tested it. Judging from the error message and from the onnx-tf code, custom operators might not be supported out of the box. I would suggest opening a ticket on the onnx-tf repository for help with custom operators.