onnx / keras-onnx

Convert tf.keras/Keras models to ONNX
Apache License 2.0

onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Error in Node:lambda_1/Shape:0_squeeze : Unrecognized attribute: axes for operator Squeeze #712

Closed: uditdixit11 closed this issue 3 years ago

uditdixit11 commented 3 years ago

Hi All,

We are trying to convert the pre-trained Keras-OCR recognition model (crnn_kurapan.h5) to ONNX for high-speed inference, first on CPU and later on GPU. keras2onnx converts the model, but creating the runtime session, i.e. sess = rt.InferenceSession(content), throws the error below.

Below is the code used:

import tensorflow as tf
import numpy as np
import keras2onnx
import keras_ocr
import onnxruntime as rt
from onnx import shape_inference

img_path = "ocr4.jpg"
recognizer = keras_ocr.recognition.Recognizer()
recognizer.compile()
model = recognizer.prediction_model

onnx_model = keras2onnx.convert_keras(model, model.name)
inferred_model = shape_inference.infer_shapes(onnx_model)
content = inferred_model.SerializeToString()
sess = rt.InferenceSession(content)
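
For reference, the "Unrecognized attribute: axes for operator Squeeze" message usually points to an opset mismatch: from opset 13 onwards Squeeze takes its axes as an input rather than as an attribute, so a graph that declares opset 13 but still carries the attribute is rejected by onnxruntime. A minimal sketch of the same conversion with the opset pinned explicitly (target_opset is an existing keras2onnx argument; pinning it to 12 is only an assumption, not a verified fix):

# Pin the ONNX opset so Squeeze keeps its 'axes' attribute
# (the attribute became an input in opset 13).
onnx_model = keras2onnx.convert_keras(model, model.name, target_opset=12)
sess = rt.InferenceSession(onnx_model.SerializeToString())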

Complete trace:

2021-05-12 14:03:38.037184: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From C:\Users\e5610521\env_ocr\lib\site-packages\tensorflow_core\python\keras\backend.py:5783: sparse_to_dense (from tensorflow.python.ops.sparse_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Create a tf.sparse.SparseTensor and use tf.sparse.to_dense instead.
Looking for C:\Users\e5610521\.keras-ocr\crnn_kurapan.h5
tf executing eager_mode: True
tf.keras model eager_mode: False
WARN: No corresponding ONNX op matches the tf.op node decode/CTCGreedyDecoder of type CTCGreedyDecoder
      The generated ONNX model needs run with the custom op supports.
WARN: No corresponding ONNX op matches the tf.op node decode/SparseToDense of type SparseToDense
      The generated ONNX model needs run with the custom op supports.
The ONNX operator number change on the optimization: 279 -> 199
Warning: Unsupported operator CTCGreedyDecoder. No schema registered for this operator.
Warning: Unsupported operator SparseToDense. No schema registered for this operator.
Traceback (most recent call last):
  File "keras_to_onnx.py", line 128, in <module>
    sess = rt.InferenceSession(content)
  File "C:\Users\e5610521\env_ocr\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\e5610521\env_ocr\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 312, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : This is an invalid model. Error in Node:lambda_1/Shape:0_squeeze : Unrecognized attribute: axes for operator Squeeze
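
Before changing anything else, the declared opset and the offending Squeeze node can be inspected directly on the ModelProto that keras2onnx returns, to confirm whether the declared version and the node's attributes actually disagree. A rough sketch (onnx_model is the ModelProto from the conversion script above):

# Print the declared opset(s) of the converted graph.
for opset in onnx_model.opset_import:
    print("domain:", opset.domain or "ai.onnx", "version:", opset.version)

# List Squeeze nodes that still carry an 'axes' attribute.
for node in onnx_model.graph.node:
    if node.op_type == "Squeeze":
        print(node.name, [a.name for a in node.attribute])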

Please note: we made some changes to the _meshgrid method of recognition.py, as shown below:

# x_linspace = tf.linspace(-1., 1., width)    (removed these two lines and used np instead)
# y_linspace = tf.linspace(-1., 1., height)
x_linspace = np.linspace(-1., 1., width, dtype='float32')
y_linspace = np.linspace(-1., 1., height, dtype='float32')
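
Separately, the CTCGreedyDecoder and SparseToDense warnings in the log mean those two ops have no ONNX equivalent, so even a structurally valid graph would need custom op support at runtime. One possible workaround (only a sketch, not something we have verified on crnn_kurapan.h5) is to export a model truncated before the CTC decode layers and run the greedy decode in NumPy outside the graph:

import numpy as np

def greedy_ctc_decode(timestep_probs, blank_index):
    # Greedy CTC decoding of a (time, num_classes) probability matrix:
    # take the argmax per timestep, collapse repeats, then drop blanks.
    # For Keras CTC the blank is usually the last class, but that is an
    # assumption to check against keras_ocr's alphabet.
    best_path = np.argmax(timestep_probs, axis=-1)
    collapsed = [int(c) for i, c in enumerate(best_path)
                 if i == 0 or c != best_path[i - 1]]
    return [c for c in collapsed if c != blank_index]
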
uditdixit11 commented 3 years ago

Taking some time to troubleshoot this further; I will get back to you if any issues remain.