onnx / tensorflow-onnx

Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX
Apache License 2.0

[tflite] `TFL_VAR_HANDLE` and `TFL_READ_VARIABLE` are not supported #2059

Open josephrocca opened 2 years ago

josephrocca commented 2 years ago

New Operator

The TFL_VAR_HANDLE and TFL_READ_VARIABLE operators are used in Lyra's "soundstream encoder" model and "lyragan" model. I don't know the specifics of what these operators do beyond what their names imply, so I probably can't contribute an implementation myself. Here are the two models:

And here's a minimal reproduction of the errors:

https://colab.research.google.com/gist/josephrocca/5af909bd240264cdecd4598903be8dfa

Note that the above Colab uses this patch of tf2onnx (the only change is here) to hackily work around this problem.

Here are the full outputs of the conversion commands:

soundstream_encoder.tflite ``` /usr/lib/python3.7/runpy.py:125: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour warn(RuntimeWarning(msg)) 2022-10-12 07:37:31,601 - INFO - tf2onnx: inputs: None 2022-10-12 07:37:31,601 - INFO - tf2onnx: outputs: None 2022-10-12 07:37:32,743 - INFO - tf2onnx.tfonnx: Using tensorflow=2.9.2, onnx=1.12.0, tf2onnx=1.12.0/7e0144 2022-10-12 07:37:32,743 - INFO - tf2onnx.tfonnx: Using opset INFO: Created TensorFlow Lite XNNPACK delegate for CPU. 2022-10-12 07:37:32,829 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 07:37:32,834 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [first_layerconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/simpleconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/simpleconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/simpleconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,834 - ERROR - tf2onnx.tfonnx: Tensorflow op [bottleneck_1/simpleconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,835 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'TFL_VAR_HANDLE': 14}) 2022-10-12 07:37:32,835 - VERBOSE - tf2onnx.tfonnx: Summay Stats: tensorflow ops: Counter({'TFL_VAR_HANDLE': 14, 'Const': 12}) tensorflow attr: Counter({'container': 14, 'shared_name': 14, 'value': 12}) onnx mapped: Counter({'Const': 12}) onnx unmapped: Counter({'TFL_VAR_HANDLE': 14}) 2022-10-12 07:37:32,840 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 07:37:32,922 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 07:37:32,923 - ERROR - tf2onnx.tfonnx: Tensorflow op [first_layerconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,923 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/first_layerconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,925 - ERROR - tf2onnx.tfonnx: Tensorflow op 
[encoder_2/simpleconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,925 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_2/simpleconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,925 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,925 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_2/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_2/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_2/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_2/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/simpleconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_1/simpleconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_1/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_1/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_1/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_1/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/simpleconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_0/simpleconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,926 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_0/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/encoder_0/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [encoder_0/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op 
[streamable_model_12/encoder_0/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [bottleneck_1/simpleconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 07:37:32,927 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/bottleneck_1/simpleconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 07:37:32,988 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'TFL_VAR_HANDLE': 14, 'TFL_READ_VARIABLE': 14}) 2022-10-12 07:37:32,999 - VERBOSE - tf2onnx.tfonnx: Summay Stats: tensorflow ops: Counter({'Const': 86, 'TFL_RESHAPE': 47, 'TFL_CONV_2D': 23, 'TFL_LEAKY_RELU': 22, 'TFL_VAR_HANDLE': 14, 'TFL_READ_VARIABLE': 14, 'TFL_CONCATENATION': 14, 'TFL_STRIDED_SLICE': 14, 'TFL_DEPTHWISE_CONV_2D': 9, 'TFL_ADD': 9, 'Placeholder': 1, 'Identity': 1}) tensorflow attr: Counter({'value': 86, 'fused_activation_function': 55, 'dilation_h_factor': 32, 'dilation_w_factor': 32, 'padding': 32, 'stride_h': 32, 'stride_w': 32, 'alpha': 22, 'container': 14, 'shared_name': 14, 'axis': 14, 'begin_mask': 14, 'ellipsis_mask': 14, 'end_mask': 14, 'new_axis_mask': 14, 'shrink_axis_mask': 14, 'depth_multiplier': 9, 'pot_scale_int16': 9}) onnx mapped: Counter({'Const': 79, 'Reshape': 47, 'Conv2D': 23, 'LeakyRelu': 22, 'TFL_CONCATENATION': 14, 'DepthwiseConv2dNative': 9, 'Add': 9, 'Placeholder': 1}) onnx unmapped: Counter({'TFL_VAR_HANDLE': 14, 'TFL_READ_VARIABLE': 14}) 2022-10-12 07:37:32,999 - INFO - tf2onnx.optimizer: Optimizing ONNX model 2022-10-12 07:37:33,000 - VERBOSE - tf2onnx.optimizer: Apply optimize_transpose 2022-10-12 07:37:33,074 - VERBOSE - tf2onnx.optimizer.TransposeOptimizer: Const +11 (88->99), Reshape +11 (56->67), Transpose -29 (128->99) 2022-10-12 07:37:33,074 - VERBOSE - tf2onnx.optimizer: Apply remove_redundant_upsample 2022-10-12 07:37:33,100 - VERBOSE - tf2onnx.optimizer.UpsampleOptimizer: no change 2022-10-12 07:37:33,100 - VERBOSE - tf2onnx.optimizer: Apply fold_constants 2022-10-12 07:37:33,160 - VERBOSE - tf2onnx.optimizer.ConstFoldOptimizer: Cast -47 (47->0), Const +9 (99->108), Reshape -17 (67->50), Transpose -56 (99->43) 2022-10-12 07:37:33,161 - VERBOSE - tf2onnx.optimizer: Apply const_dequantize_optimizer 2022-10-12 07:37:33,181 - VERBOSE - tf2onnx.optimizer.ConstDequantizeOptimizer: no change 2022-10-12 07:37:33,181 - VERBOSE - tf2onnx.optimizer: Apply loop_optimizer 2022-10-12 07:37:33,200 - VERBOSE - tf2onnx.optimizer.LoopOptimizer: no change 2022-10-12 07:37:33,200 - VERBOSE - tf2onnx.optimizer: Apply merge_duplication 2022-10-12 07:37:33,230 - VERBOSE - tf2onnx.optimizer.MergeDuplicatedNodesOptimizer: Const -26 (108->82) 2022-10-12 07:37:33,230 - VERBOSE - tf2onnx.optimizer: Apply reshape_optimizer 2022-10-12 07:37:33,257 - VERBOSE - tf2onnx.optimizer.ReshapeOptimizer: no change 2022-10-12 07:37:33,257 - VERBOSE - tf2onnx.optimizer: Apply global_pool_optimizer 2022-10-12 07:37:33,285 - VERBOSE - tf2onnx.optimizer.GlobalPoolOptimizer: no change 2022-10-12 07:37:33,286 - VERBOSE - tf2onnx.optimizer: Apply q_dq_optimizer 2022-10-12 07:37:33,304 - VERBOSE - tf2onnx.optimizer.QDQOptimizer: no change 2022-10-12 07:37:33,304 - VERBOSE - tf2onnx.optimizer: Apply remove_identity 2022-10-12 07:37:33,323 - VERBOSE - tf2onnx.optimizer.IdentityOptimizer: Identity -1 (1->0) 2022-10-12 07:37:33,323 - VERBOSE - tf2onnx.optimizer: Apply remove_back_to_back 2022-10-12 07:37:33,342 - VERBOSE - tf2onnx.optimizer.BackToBackOptimizer: Const -3 (82->79), Reshape -3 (50->47) 2022-10-12 
07:37:33,342 - VERBOSE - tf2onnx.optimizer: Apply einsum_optimizer 2022-10-12 07:37:33,360 - VERBOSE - tf2onnx.optimizer.EinsumOptimizer: no change 2022-10-12 07:37:33,360 - VERBOSE - tf2onnx.optimizer: Apply optimize_transpose 2022-10-12 07:37:33,381 - VERBOSE - tf2onnx.optimizer.TransposeOptimizer: no change 2022-10-12 07:37:33,381 - VERBOSE - tf2onnx.optimizer: Apply remove_redundant_upsample 2022-10-12 07:37:33,399 - VERBOSE - tf2onnx.optimizer.UpsampleOptimizer: no change 2022-10-12 07:37:33,399 - VERBOSE - tf2onnx.optimizer: Apply fold_constants 2022-10-12 07:37:33,417 - VERBOSE - tf2onnx.optimizer.ConstFoldOptimizer: no change 2022-10-12 07:37:33,417 - VERBOSE - tf2onnx.optimizer: Apply const_dequantize_optimizer 2022-10-12 07:37:33,435 - VERBOSE - tf2onnx.optimizer.ConstDequantizeOptimizer: no change 2022-10-12 07:37:33,435 - VERBOSE - tf2onnx.optimizer: Apply loop_optimizer 2022-10-12 07:37:33,453 - VERBOSE - tf2onnx.optimizer.LoopOptimizer: no change 2022-10-12 07:37:33,453 - VERBOSE - tf2onnx.optimizer: Apply merge_duplication 2022-10-12 07:37:33,473 - VERBOSE - tf2onnx.optimizer.MergeDuplicatedNodesOptimizer: no change 2022-10-12 07:37:33,473 - VERBOSE - tf2onnx.optimizer: Apply reshape_optimizer 2022-10-12 07:37:33,490 - VERBOSE - tf2onnx.optimizer.ReshapeOptimizer: no change 2022-10-12 07:37:33,490 - VERBOSE - tf2onnx.optimizer: Apply global_pool_optimizer 2022-10-12 07:37:33,507 - VERBOSE - tf2onnx.optimizer.GlobalPoolOptimizer: no change 2022-10-12 07:37:33,507 - VERBOSE - tf2onnx.optimizer: Apply q_dq_optimizer 2022-10-12 07:37:33,524 - VERBOSE - tf2onnx.optimizer.QDQOptimizer: no change 2022-10-12 07:37:33,524 - VERBOSE - tf2onnx.optimizer: Apply remove_identity 2022-10-12 07:37:33,540 - VERBOSE - tf2onnx.optimizer.IdentityOptimizer: no change 2022-10-12 07:37:33,540 - VERBOSE - tf2onnx.optimizer: Apply remove_back_to_back 2022-10-12 07:37:33,559 - VERBOSE - tf2onnx.optimizer.BackToBackOptimizer: no change 2022-10-12 07:37:33,559 - VERBOSE - tf2onnx.optimizer: Apply einsum_optimizer 2022-10-12 07:37:33,579 - VERBOSE - tf2onnx.optimizer.EinsumOptimizer: no change 2022-10-12 07:37:33,582 - INFO - tf2onnx.optimizer: After optimization: Cast -47 (47->0), Const -9 (88->79), Identity -1 (1->0), Reshape -9 (56->47), Transpose -85 (128->43) 2022-10-12 07:37:33,596 - INFO - tf2onnx: 2022-10-12 07:37:33,596 - INFO - tf2onnx: Successfully converted TensorFlow model soundstream_encoder.tflite to ONNX 2022-10-12 07:37:33,596 - INFO - tf2onnx: Model inputs: ['serving_default_input_audio:0'] 2022-10-12 07:37:33,597 - INFO - tf2onnx: Model outputs: ['StatefulPartitionedCall:0'] 2022-10-12 07:37:33,597 - INFO - tf2onnx: ONNX model is saved at soundstream_encoder.onnx ```
lyragan.tflite ``` /usr/lib/python3.7/runpy.py:125: RuntimeWarning: 'tf2onnx.convert' found in sys.modules after import of package 'tf2onnx', but prior to execution of 'tf2onnx.convert'; this may result in unpredictable behaviour warn(RuntimeWarning(msg)) 2022-10-12 08:00:14,602 - INFO - tf2onnx: inputs: None 2022-10-12 08:00:14,603 - INFO - tf2onnx: outputs: None 2022-10-12 08:00:15,636 - INFO - tf2onnx.tfonnx: Using tensorflow=2.9.2, onnx=1.12.0, tf2onnx=1.12.0/7e0144 2022-10-12 08:00:15,636 - INFO - tf2onnx.tfonnx: Using opset INFO: Created TensorFlow Lite XNNPACK delegate for CPU. 2022-10-12 08:00:15,741 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 08:00:15,747 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [last_layer/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/simple/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/simple_g1/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,747 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/simple_g0/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g3/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g2/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g1/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g0/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_2adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_1adepthwise_conv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_0aconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Tensorflow op [bottleneck_2/simpleconv/states1: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,748 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'TFL_VAR_HANDLE': 18}) 2022-10-12 08:00:15,749 - VERBOSE - tf2onnx.tfonnx: Summay Stats: tensorflow ops: Counter({'TFL_VAR_HANDLE': 18, 'Const': 11}) tensorflow attr: Counter({'container': 18, 'shared_name': 18, 'value': 11}) onnx mapped: Counter({'Const': 11}) onnx unmapped: Counter({'TFL_VAR_HANDLE': 18}) 2022-10-12 08:00:15,756 - VERBOSE - 
tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 08:00:15,871 - VERBOSE - tf2onnx.tfonnx: Mapping TF node to ONNX node(s) 2022-10-12 08:00:15,872 - ERROR - tf2onnx.tfonnx: Tensorflow op [last_layer/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,872 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/last_layer/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/simple/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_2/simple/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_2/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_2/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_2/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_2/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/simple_g1/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_1/simple_g1/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/simple_g0/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_1/simple_g0/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,873 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_1/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_1/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_1/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_1/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g3/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/simple_g3/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g2/states: TFL_VAR_HANDLE] is not supported 
2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/simple_g2/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g1/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/simple_g1/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/simple_g0/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,874 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/simple_g0/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_2adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/resnet_2adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_1adepthwise_conv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/resnet_1adepthwise_conv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [decoder_0/resnet_0aconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/decoder_0/resnet_0aconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [bottleneck_2/simpleconv/states: TFL_VAR_HANDLE] is not supported 2022-10-12 08:00:15,875 - ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_14/bottleneck_2/simpleconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported 2022-10-12 08:00:15,939 - ERROR - tf2onnx.tfonnx: Unsupported ops: Counter({'TFL_VAR_HANDLE': 18, 'TFL_READ_VARIABLE': 18}) 2022-10-12 08:00:15,954 - VERBOSE - tf2onnx.tfonnx: Summay Stats: tensorflow ops: Counter({'Const': 110, 'TFL_RESHAPE': 55, 'TFL_STRIDED_SLICE': 26, 'TFL_LEAKY_RELU': 22, 'TFL_CONCATENATION': 20, 'TFL_CONV_2D': 19, 'TFL_VAR_HANDLE': 18, 'TFL_READ_VARIABLE': 18, 'TFL_ADD': 17, 'TFL_DEPTHWISE_CONV_2D': 9, 'TFL_TRANSPOSE_CONV': 8, 'TFL_SUB': 8, 'TFL_SPLIT': 2, 'Placeholder': 1, 'Identity': 1}) tensorflow attr: Counter({'value': 110, 'fused_activation_function': 73, 'padding': 36, 'stride_h': 36, 'stride_w': 36, 'dilation_h_factor': 28, 'dilation_w_factor': 28, 'begin_mask': 26, 'ellipsis_mask': 26, 'end_mask': 26, 'new_axis_mask': 26, 'shrink_axis_mask': 26, 'pot_scale_int16': 25, 'alpha': 22, 'axis': 20, 'container': 18, 'shared_name': 18, 'depth_multiplier': 9, 'num_splits': 2}) onnx mapped: Counter({'Const': 105, 'Reshape': 55, 'LeakyRelu': 22, 'TFL_CONCATENATION': 20, 'Conv2D': 19, 'Add': 17, 'DepthwiseConv2dNative': 9, 'Conv2DBackpropInput': 8, 'StridedSlice': 8, 'Split': 2, 'Placeholder': 1}) onnx unmapped: Counter({'TFL_VAR_HANDLE': 18, 'TFL_READ_VARIABLE': 18}) 2022-10-12 08:00:15,954 - INFO - tf2onnx.optimizer: Optimizing ONNX model 2022-10-12 08:00:15,955 - VERBOSE - tf2onnx.optimizer: Apply optimize_transpose 2022-10-12 08:00:16,021 - VERBOSE - tf2onnx.optimizer.TransposeOptimizer: Const +5 (138->143), Reshape +16 (64->80), Transpose -34 (144->110) 2022-10-12 08:00:16,021 - VERBOSE - tf2onnx.optimizer: Apply 
remove_redundant_upsample 2022-10-12 08:00:16,051 - VERBOSE - tf2onnx.optimizer.UpsampleOptimizer: no change 2022-10-12 08:00:16,051 - VERBOSE - tf2onnx.optimizer: Apply fold_constants 2022-10-12 08:00:16,126 - VERBOSE - tf2onnx.optimizer.ConstFoldOptimizer: Cast -55 (55->0), Const +12 (143->155), Reshape -19 (80->61), Transpose -62 (110->48) 2022-10-12 08:00:16,126 - VERBOSE - tf2onnx.optimizer: Apply const_dequantize_optimizer 2022-10-12 08:00:16,150 - VERBOSE - tf2onnx.optimizer.ConstDequantizeOptimizer: no change 2022-10-12 08:00:16,150 - VERBOSE - tf2onnx.optimizer: Apply loop_optimizer 2022-10-12 08:00:16,176 - VERBOSE - tf2onnx.optimizer.LoopOptimizer: no change 2022-10-12 08:00:16,176 - VERBOSE - tf2onnx.optimizer: Apply merge_duplication 2022-10-12 08:00:16,207 - VERBOSE - tf2onnx.optimizer.MergeDuplicatedNodesOptimizer: Const -52 (155->103) 2022-10-12 08:00:16,208 - VERBOSE - tf2onnx.optimizer: Apply reshape_optimizer 2022-10-12 08:00:16,233 - VERBOSE - tf2onnx.optimizer.ReshapeOptimizer: no change 2022-10-12 08:00:16,233 - VERBOSE - tf2onnx.optimizer: Apply global_pool_optimizer 2022-10-12 08:00:16,253 - VERBOSE - tf2onnx.optimizer.GlobalPoolOptimizer: no change 2022-10-12 08:00:16,253 - VERBOSE - tf2onnx.optimizer: Apply q_dq_optimizer 2022-10-12 08:00:16,274 - VERBOSE - tf2onnx.optimizer.QDQOptimizer: no change 2022-10-12 08:00:16,274 - VERBOSE - tf2onnx.optimizer: Apply remove_identity 2022-10-12 08:00:16,295 - VERBOSE - tf2onnx.optimizer.IdentityOptimizer: Identity -1 (1->0) 2022-10-12 08:00:16,295 - VERBOSE - tf2onnx.optimizer: Apply remove_back_to_back 2022-10-12 08:00:16,317 - VERBOSE - tf2onnx.optimizer.BackToBackOptimizer: Const -3 (103->100), Reshape -6 (61->55) 2022-10-12 08:00:16,317 - VERBOSE - tf2onnx.optimizer: Apply einsum_optimizer 2022-10-12 08:00:16,338 - VERBOSE - tf2onnx.optimizer.EinsumOptimizer: no change 2022-10-12 08:00:16,338 - VERBOSE - tf2onnx.optimizer: Apply optimize_transpose 2022-10-12 08:00:16,361 - VERBOSE - tf2onnx.optimizer.TransposeOptimizer: no change 2022-10-12 08:00:16,362 - VERBOSE - tf2onnx.optimizer: Apply remove_redundant_upsample 2022-10-12 08:00:16,383 - VERBOSE - tf2onnx.optimizer.UpsampleOptimizer: no change 2022-10-12 08:00:16,383 - VERBOSE - tf2onnx.optimizer: Apply fold_constants 2022-10-12 08:00:16,405 - VERBOSE - tf2onnx.optimizer.ConstFoldOptimizer: no change 2022-10-12 08:00:16,405 - VERBOSE - tf2onnx.optimizer: Apply const_dequantize_optimizer 2022-10-12 08:00:16,426 - VERBOSE - tf2onnx.optimizer.ConstDequantizeOptimizer: no change 2022-10-12 08:00:16,426 - VERBOSE - tf2onnx.optimizer: Apply loop_optimizer 2022-10-12 08:00:16,446 - VERBOSE - tf2onnx.optimizer.LoopOptimizer: no change 2022-10-12 08:00:16,446 - VERBOSE - tf2onnx.optimizer: Apply merge_duplication 2022-10-12 08:00:16,470 - VERBOSE - tf2onnx.optimizer.MergeDuplicatedNodesOptimizer: no change 2022-10-12 08:00:16,470 - VERBOSE - tf2onnx.optimizer: Apply reshape_optimizer 2022-10-12 08:00:16,614 - VERBOSE - tf2onnx.optimizer.ReshapeOptimizer: no change 2022-10-12 08:00:16,614 - VERBOSE - tf2onnx.optimizer: Apply global_pool_optimizer 2022-10-12 08:00:16,634 - VERBOSE - tf2onnx.optimizer.GlobalPoolOptimizer: no change 2022-10-12 08:00:16,634 - VERBOSE - tf2onnx.optimizer: Apply q_dq_optimizer 2022-10-12 08:00:16,655 - VERBOSE - tf2onnx.optimizer.QDQOptimizer: no change 2022-10-12 08:00:16,655 - VERBOSE - tf2onnx.optimizer: Apply remove_identity 2022-10-12 08:00:16,675 - VERBOSE - tf2onnx.optimizer.IdentityOptimizer: no change 2022-10-12 08:00:16,675 - VERBOSE - 
tf2onnx.optimizer: Apply remove_back_to_back 2022-10-12 08:00:16,695 - VERBOSE - tf2onnx.optimizer.BackToBackOptimizer: no change 2022-10-12 08:00:16,695 - VERBOSE - tf2onnx.optimizer: Apply einsum_optimizer 2022-10-12 08:00:16,715 - VERBOSE - tf2onnx.optimizer.EinsumOptimizer: no change 2022-10-12 08:00:16,720 - INFO - tf2onnx.optimizer: After optimization: Cast -55 (55->0), Const -38 (138->100), Identity -1 (1->0), Reshape -9 (64->55), Transpose -96 (144->48) 2022-10-12 08:00:16,737 - INFO - tf2onnx: 2022-10-12 08:00:16,737 - INFO - tf2onnx: Successfully converted TensorFlow model lyragan.tflite to ONNX 2022-10-12 08:00:16,737 - INFO - tf2onnx: Model inputs: ['serving_default_input_audio:0'] 2022-10-12 08:00:16,737 - INFO - tf2onnx: Model outputs: ['StatefulPartitionedCall:0'] 2022-10-12 08:00:16,737 - INFO - tf2onnx: ONNX model is saved at lyragan.onnx ```
fatcat-z commented 2 years ago

I don't have permission to push my changes to your private branch. Please try:

Add the two lines below to the tf2onnx/tflite_handlers/tfl_direct.py file after line 93:

@tfl_op("TFL_READ_VARIABLE", tf_op="ReadVariableOp") @tfl_op("TFL_VAR_HANDLE", tf_op="VarHandleOp")

After this, please try your patch again.

josephrocca commented 2 years ago

@fatcat-z Thanks for your response! Do you mean after line 91? If so, I tried that and, weirdly, it didn't work. I get the same error logs as before:

https://colab.research.google.com/gist/josephrocca/5af909bd240264cdecd4598903be8dfa

...
ERROR - tf2onnx.tfonnx: Tensorflow op [first_layerconv/states1: TFL_VAR_HANDLE] is not supported
...
ERROR - tf2onnx.tfonnx: Tensorflow op [streamable_model_12/first_layerconv/concat/ReadVariableOp: TFL_READ_VARIABLE] is not supported
...

I've added you as a collaborator to that repo in case you wanted to try any changes yourself, but please feel free to suggest other things for me to try. Thanks for your help with this :pray:

fatcat-z commented 2 years ago

Yes, after line 91.

Did you run python setup.py develop to install the local tf2onnx version in your test environment?

josephrocca commented 2 years ago

@fatcat-z Oh, I was installing it like this, as you can see in the linked Colab above:

!pip install git+https://github.com/josephrocca/tensorflow-onnx.git@patch-1

but I just tried this:

!git clone --branch patch-1 https://github.com/josephrocca/tensorflow-onnx
%cd /content/tensorflow-onnx
!python setup.py develop

and the same errors occurred. I'm a bit of a Python noob (I come from the web/JS world), so please excuse my incompetence here 😬
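For what it's worth, a quick sanity check for this kind of setup is to confirm which tf2onnx copy Python actually imports; the /content/tensorflow-onnx path mentioned below is just the clone location assumed from the commands above:

```python
import tf2onnx

# After `python setup.py develop` in /content/tensorflow-onnx, this path should
# point inside that clone; a site-packages path would mean the patched checkout
# is not the one being imported.
print(tf2onnx.__version__, tf2onnx.__file__)
```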

fatcat-z commented 2 years ago

I used your branch to convert that tflite model to ONNX and got the error below in the result:

2022-10-14 14:49:15,567 - ERROR - Unsupported ops: Counter({'VarHandleOp': 14, 'ReadVariableOp': 14})

This is expected, because we haven't implemented these 2 TF ops yet. I'll see if it can be done soon.

fatcat-z commented 2 years ago

@josephrocca ,

Please try the code in this branch, this commit. Those ops are designed for training, which tf2onnx does not support, and they won't impact the inference results. Removing them should have no impact on the final inference results.

Please use it to generate a new ONNX file and see if the results are correct.
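A minimal way to spot-check that could look like the sketch below: it runs the same random input through the original TFLite file and the converted ONNX file and compares the outputs. The file names follow the logs above; the random float32 input and matching output layouts are assumptions.

```python
import numpy as np
import tensorflow as tf
import onnxruntime as ort

# Run the original TFLite model on a random input of its declared shape.
interpreter = tf.lite.Interpreter(model_path="soundstream_encoder.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.rand(*inp["shape"]).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
tfl_out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])

# Run the converted ONNX model on the same input.
sess = ort.InferenceSession("soundstream_encoder.onnx")
onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

# Compare (squeeze in case the two runtimes report slightly different layouts).
print("max abs diff:", np.abs(tfl_out.squeeze() - onnx_out.squeeze()).max())
```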

josephrocca commented 2 years ago

Thanks!! That solved it (EDIT: See below for some complications). Really appreciate how fast you managed to make this fix 🙏

There are some other down-stream issues in ORT Web preventing the model from running correctly but I think they might be specific to ORT Web, rather than this conversion process. I'll post a separate issue if I can't work out what's going wrong there.

(I'll leave it to you to re-open this if you'd like to keep this open until it's fully confirmed that these changes didn't affect the correctness of the inference results.)

fatcat-z commented 2 years ago


Do you mind running ORT on your local machine to confirm the correctness? Anyway, please feel free to update this thread with any information you have about it.

josephrocca commented 2 years ago

:+1: I've added a correctness check (and reminder to comment here) to the todo list for this project.

josephrocca commented 2 years ago

@fatcat-z An update on this: I tried to check correctness, and while the tflite file works fine, both ORT Python and ORT Web are throwing an error using the converted ONNX model. Here's a full minimal reproduction of the tflite inference --> conversion to onnx --> onnx inference:

https://colab.research.google.com/gist/josephrocca/91e876fd90e6b7c88429258ba2384a36/onnx-runtime-python-inference.ipynb

RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. 
Name:'streamable_model_12/first_layerconv/conv1d_36/BiasAdd;streamable_model_12/first_layerconv/conv1d_36/Conv1D/Squeeze;streamable_model_12/first_layerconv/conv1d_36/BiasAdd/ReadVariableOp;Conv1D;streamable_model_12/first_layerconv/conv1d_36/Conv1D__39'
Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false.
The input tensor cannot be reshaped to the requested shape. Input shape:{1,320,1}, requested shape:{1,1,1,368}

I originally posted a question about this here: https://github.com/microsoft/onnxruntime/issues/13383. But it looks like it might be a conversion issue rather than a runtime issue, since I see a ReadVariableOp in that error message?

fatcat-z commented 2 years ago

This looks like a conversion issue where some information was lost. Working on a fix.

ruihu102 commented 2 years ago

Hello, I am trying to convert a tflite float16 model into ONNX. I have tried the mentioned change to avoid the output-names issue, but I also got the ReadVariableOp error. Here is the error information:

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_2/Variable_11: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_2/Variable1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_1/Variable_11: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_1/Variable1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm/Variable_11: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm/Variable1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Unsupported ops: Counter({'TFL_VAR_HANDLE': 6})
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_2/Variable_1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm_2/Read/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_2/Variable: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm_2/Read_1/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_1/Variable_1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm_1/Read/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm_1/Variable: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm_1/Read_1/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm/Variable_1: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm/Read/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [lstm/Variable: TFL_VAR_HANDLE] is not supported
ERROR:tf2onnx.tfonnx:Tensorflow op [model/lstm/Read_1/ReadVariableOp: TFL_READ_VARIABLE] is not supported
ERROR:tf2onnx.tfonnx:Unsupported ops: Counter({'TFL_VAR_HANDLE': 6, 'TFL_READ_VARIABLE': 6})

Although the tflite fp16 model is converted into ONNX, the result does not seem to be as expected. Any updates on the fix? Thanks.

fatcat-z commented 2 years ago


The solution is there and I'm working on the code. The current ETA is this week, because of some unexpected things.

ruihu102 commented 2 years ago

Thank you for the update. Looking forward to it.

dramaticlama commented 2 years ago

@fatcat-z An update on this: I tried to check correctness, and while the tflite file works fine, both ORT Python and ORT Web are throwing an error using the converted ONNX model. Here's a full minimal reproduction of the tflite inference --> conversion to onnx --> onnx inference:

https://colab.research.google.com/gist/josephrocca/91e876fd90e6b7c88429258ba2384a36/onnx-runtime-python-inference.ipynb

RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. 
Name:'streamable_model_12/first_layerconv/conv1d_36/BiasAdd;streamable_model_12/first_layerconv/conv1d_36/Conv1D/Squeeze;streamable_model_12/first_layerconv/conv1d_36/BiasAdd/ReadVariableOp;Conv1D;streamable_model_12/first_layerconv/conv1d_36/Conv1D__39'
Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false.
The input tensor cannot be reshaped to the requested shape. Input shape:{1,320,1}, requested shape:{1,1,1,368}

I originally posted a question about this here: microsoft/onnxruntime#13383. But it looks like it might be a conversion issue rather than a runtime issue, since I see a ReadVariableOp in that error message?

I believe the variable operations are used not only for training but also for inference. The model uses them as memory between inferences: the value from the last inference is concatenated into the current one. The shape problem most likely comes from the fact that these [1,48,1] values from the ReadVariableOp are missing. Unfortunately, I think these operations will have to be supported if you want to convert your model.
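Schematically, that pattern looks something like the sketch below: the variable is a named slot of persistent memory, READ_VARIABLE loads it, the loaded state is concatenated with the new frame, and (in the full model) ASSIGN_VARIABLE stores the updated state for the next call. The variable name, shapes, and update rule here are only illustrative (the [1,48,1] shape comes from the comment above):

```python
import numpy as np

# Persistent storage that a VAR_HANDLE resolves into (one slot per handle).
variables = {"first_layerconv/states": np.zeros((1, 48, 1), dtype=np.float32)}

def stateful_call(frame, layer_fn):
    state = variables["first_layerconv/states"]          # READ_VARIABLE
    window = np.concatenate([state, frame], axis=1)      # the concat seen in the logs
    out = layer_fn(window)
    # ASSIGN_VARIABLE: keep the tail of the window as memory for the next call
    # (the model's real update rule may differ; this is only illustrative).
    variables["first_layerconv/states"] = window[:, -state.shape[1]:, :]
    return out
```

Stripping the variable ops effectively replaces `state` with zeros on every call, which is why the converted model drifts from the original.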

fatcat-z commented 2 years ago

@josephrocca

Probably you can access the private branch for local debugging.

josephrocca commented 2 years ago

@fatcat-z That seems like it might have worked - it definitely fixed the error messages that I was getting previously.

A conversion notebook that includes an inference comparison between the original tflite model and the new onnx model: https://colab.research.google.com/gist/josephrocca/ecb5a2faf54b06eb700ecc562557c6a9/onnx-runtime-python-inference.ipynb#scrollTo=6dhtRr-ru03Q

TFLite outputs:

[[[  0.7904744   10.276169    28.19359     -0.5269828    0.5269828
   -10.803152     5.0063386   14.755524    -3.6888812   16.863457
    14.22854     28.19359    -11.593626   -19.23488      2.107932
   -10.803152     3.1618981   15.546      -22.396778    12.3841
   -13.96505    -14.228542    13.4380665   -0.5269828   -1.8444405
   -12.120609    17.39044     -5.0063386    5.0063386   11.066643
    11.857117     5.533322   -21.342813    -1.8444405   -2.3714237
     2.8984065  -11.066643     0.7904744   -0.26349163 -16.863457
    -2.3714237  -36.098335    -5.0063386   -5.796813    10.276169
    28.19359      3.6888814  -21.869797     4.2158647    3.952373
     3.4253898   -3.9523726  -10.012678     0.5269828  -10.803152
    -7.1142707   -1.053966     6.060305     5.7968135    1.5809493
     2.3714237   -2.634915     4.742847    -5.0063386 ]]]

ONNX outputs:

[[[ -0.526983     7.1142707   28.193592    -2.107932     0.526983
   -13.701559     1.3174576   12.647593    -6.8507795   16.863457
    14.755525    28.193592   -10.803152   -16.072983     2.634915
   -11.330135     3.6888812   15.282508   -22.396778    12.911084
   -13.701559   -14.228541    13.438067    -0.526983    -1.5809491
   -11.857118    17.65393     -5.0063386    4.479356    11.593626
    11.593626     5.533322   -22.133287    -2.3714237   -2.3714237
     3.4253898  -10.012677     0.2634915    0.         -17.39044
    -1.8444406  -37.415794    -5.26983     -6.8507795   11.857118
    28.193592     3.4253898  -23.714235     4.215864     5.0063386
     3.6888812   -4.215864   -10.53966      1.053966   -12.384101
    -7.641254    -0.79047453   6.3237963    5.533322     1.3174576
     3.4253898   -1.8444406    4.7428474   -4.215864  ]]]

Some of those numbers are almost exactly the same (to ~3 decimal places), while others are quite far off. Is that expected? Guessing it depends a lot on the particular model and the sorts of ops it has? I will continue to investigate this.

In any case, please feel free to close this if you are satisfied that the TFL_VAR/READ support is solved based on the above notebook outputs. Thanks for your work on this!
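For what it's worth, quantifying the gap between the two arrays above is straightforward; the snippet below just uses the first few printed values as a stand-in for the full outputs:

```python
import numpy as np

# First few values copied from the printed TFLite and ONNX outputs above.
tfl_out = np.array([0.7904744, 10.276169, 28.19359, -0.5269828, 0.5269828])
onnx_out = np.array([-0.526983, 7.1142707, 28.193592, -2.107932, 0.526983])

abs_diff = np.abs(tfl_out - onnx_out)
print("max abs diff:", abs_diff.max())
print("max relative diff:", (abs_diff / (np.abs(tfl_out) + 1e-6)).max())
```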

josephrocca commented 2 years ago

Okay, I think what might have been happening there was that I was feeding np.ones into the model, which is very "unnatural" data given that it's expecting an audio waveform, so the output is more chaotic than normal. When I feed audio into it, the output numbers are within ~20% of one another:

dramaticlama commented 2 years ago

If I understand the fix correctly, it will remove some VAR_OPS and fill others with zeroes. This will fix your shape issue, but it won't work with the architecture of the model. As you can see below in the snippet of the encoder, the model is using the VAR_OPS to keep track of the previous inference:

[Screenshot: VAR_OPS in the encoder graph]

VAR_HANDLE should handle the mapping to the variable. ASSIGN_VARIABLE should copy the input to the variable. READ_VARIABLE should copy the variable to the output.

Without these, the output of the model will be different from the original TensorFlow model.

josephrocca commented 2 years ago

@dramaticlama Looks like you're correct!

https://github.com/google/lyra/issues/99#issuecomment-1318936519

Yes, these models are stateful indeed. There is a possibility to export stateless models that return a state pointer after every call, which needs to be copied over to the input of the next call. That forces the user to handle the state manually, but that is what you need for this use case. I am no longer at Google, so I don't have access to the exporting pipeline, but maybe there is a way to convert from one model to the other?

@fatcat-z I'm wondering if ONNX supports this sort of thing? If not, maybe the converter could just turn variable nodes into input/output nodes, or something like that?

fatcat-z commented 2 years ago


When I prepared this PR, I was using another version of the soundstream_encoder.tflite you shared. For that one, the PR happened to work.

@dramaticlama's thoughts are correct, so in the end we probably cannot find a way to convert such a model successfully.

josephrocca commented 2 years ago

@fatcat-z Ah I see. Can you see any possible path forward here to convert this into a stateless ONNX model? For example, could we turn AssignVariable nodes into output nodes and ReadVariable nodes into input nodes, and then log the VarHandle pairings to the user as warnings during the conversion process? (See @dramaticlama's screenshot above.)

So the user would have to manually pipe the output variables back into the input during the next inference according to the details of the logged VarHandles.

(I guess, ideally, all the "stateful" variables could be "bundled" into a single output/input so the model would just have to pipe that one extra output back into the one extra input.)
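If the converter did expose the state that way, the manual piping on the ONNX side could look roughly like this. The file name, the `_state_in`/`_state_out` naming scheme, the frame shape, and the assumption of fixed state shapes are all hypothetical; only `serving_default_input_audio:0` and `StatefulPartitionedCall:0` come from the conversion logs above:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical "stateless" export in which each ReadVariable became an extra
# input named "*_state_in" and each AssignVariable an extra output "*_state_out".
sess = ort.InferenceSession("soundstream_encoder_stateless.onnx")
out_names = [o.name for o in sess.get_outputs()]

# Zero-initialize every state input (assumes fixed, fully known shapes).
state = {i.name: np.zeros(i.shape, dtype=np.float32)
         for i in sess.get_inputs() if i.name.endswith("_state_in")}

audio_frames = [np.zeros((1, 320, 1), dtype=np.float32)]  # placeholder input chunks

for frame in audio_frames:
    outputs = sess.run(None, {"serving_default_input_audio:0": frame, **state})
    audio_out = outputs[out_names.index("StatefulPartitionedCall:0")]
    # Pipe each *_state_out back into its matching *_state_in for the next call.
    for name, value in zip(out_names, outputs):
        if name.endswith("_state_out"):
            state[name.replace("_state_out", "_state_in")] = value
```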