PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License
662 stars · 65 forks

Seems can not handle Concat op #4

Closed wwdok closed 1 year ago

wwdok commented 1 year ago

Issue Type

Others

Download URL for ONNX

model.zip

Description

I am trying to convert the above model.onnx to TFLite format by executing `onnx2tf -i EyeNet.onnx`, but it throws an error at the Concat op. Log:

Model loaded ========================================================================

Model convertion started ============================================================
INFO: input_op_name: input shape: [1, 1, 192, 192] dtype: float32

INFO: onnx_op_type: Conv onnx_op_name: Conv_0
INFO: input_name.1: input shape: [1, 1, 192, 192] dtype: float32
INFO: input_name.2: onnx::Conv_456 shape: [8, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_457 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.4 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad/Pad:0 shape: (1, 194, 194, 1) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 1, 8) dtype: float32
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add/Add:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_1
INFO: input_name.1: input.4 shape: None dtype: None
INFO: output_name.1: onnx::Conv_282 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add/Add:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu/LeakyRelu:0 shape: (1, 96, 96, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_2
INFO: input_name.1: onnx::Conv_282 shape: None dtype: None
INFO: input_name.2: onnx::Conv_459 shape: [8, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_460 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.12 shape: None dtype: None
INFO: tf_op_type: depthwise_conv2d_v2
INFO: input.1.input: name: tf.compat.v1.pad_1/Pad:0 shape: (1, 98, 98, 8) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 8, 1) dtype: <dtype: 'float32'>
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add_1/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_3
INFO: input_name.1: input.12 shape: None dtype: None
INFO: output_name.1: onnx::Conv_285 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_1/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_1/LeakyRelu:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_4
INFO: input_name.1: onnx::Conv_285 shape: None dtype: None
INFO: input_name.2: onnx::Conv_462 shape: [16, 8, 1, 1] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_463 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.20 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.nn.leaky_relu_1/LeakyRelu:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (1, 1, 8, 16) dtype: float32
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_2/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_5
INFO: input_name.1: input.20 shape: None dtype: None
INFO: output_name.1: onnx::Conv_288 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_2/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_2/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_6
INFO: input_name.1: onnx::Conv_288 shape: None dtype: None
INFO: input_name.2: onnx::Conv_465 shape: [16, 1, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_466 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.28 shape: None dtype: None
INFO: tf_op_type: depthwise_conv2d_v2
INFO: input.1.input: name: tf.compat.v1.pad_2/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 1) dtype: <dtype: 'float32'>
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_3/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_7
INFO: input_name.1: input.28 shape: None dtype: None
INFO: output_name.1: onnx::Conv_291 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_3/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_3/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_8
INFO: input_name.1: onnx::Conv_291 shape: None dtype: None
INFO: input_name.2: onnx::Conv_468 shape: [16, 16, 1, 1] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_469 shape: [16] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.36 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.nn.leaky_relu_3/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (1, 1, 16, 16) dtype: float32
INFO: input.3.bias: shape: (16,) dtype: float32
INFO: output.1.output: name: tf.math.add_4/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_9
INFO: input_name.1: input.36 shape: None dtype: None
INFO: output_name.1: onnx::Conv_294 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_4/Add:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_4/LeakyRelu:0 shape: (1, 48, 48, 16) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_10
INFO: input_name.1: onnx::Conv_294 shape: None dtype: None
INFO: input_name.2: onnx::Conv_471 shape: [8, 16, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_472 shape: [8] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_470 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_3/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 8) dtype: float32
INFO: input.3.bias: shape: (8,) dtype: float32
INFO: output.1.output: name: tf.math.add_5/Add:0 shape: (1, 48, 48, 8) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_11
INFO: input_name.1: onnx::Conv_294 shape: None dtype: None
INFO: input_name.2: onnx::Conv_474 shape: [4, 16, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_475 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.48 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_4/Pad:0 shape: (1, 50, 50, 16) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 16, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_6/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_12
INFO: input_name.1: input.48 shape: None dtype: None
INFO: output_name.1: onnx::Conv_299 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_6/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_5/LeakyRelu:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_13
INFO: input_name.1: onnx::Conv_299 shape: None dtype: None
INFO: input_name.2: onnx::Conv_477 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_478 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_476 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_5/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_7/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_14
INFO: input_name.1: onnx::Conv_299 shape: None dtype: None
INFO: input_name.2: onnx::Conv_480 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_481 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: input.60 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_6/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_8/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: LeakyRelu onnx_op_name: LeakyRelu_15
INFO: input_name.1: input.60 shape: None dtype: None
INFO: output_name.1: onnx::Conv_304 shape: None dtype: None
INFO: tf_op_type: leaky_relu
INFO: input.1.features: name: tf.math.add_8/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>
INFO: input.2.alpha: val: 0.10000000149011612
INFO: output.1.output: name: tf.nn.leaky_relu_6/LeakyRelu:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Conv onnx_op_name: Conv_16
INFO: input_name.1: onnx::Conv_304 shape: None dtype: None
INFO: input_name.2: onnx::Conv_483 shape: [4, 4, 3, 3] dtype: <class 'numpy.float32'>
INFO: input_name.3: onnx::Conv_484 shape: [4] dtype: <class 'numpy.float32'>
INFO: output_name.1: onnx::Concat_482 shape: None dtype: None
INFO: tf_op_type: convolution_v2
INFO: input.1.input: name: tf.compat.v1.pad_7/Pad:0 shape: (1, 50, 50, 4) dtype: <dtype: 'float32'>
INFO: input.2.weights: shape: (3, 3, 4, 4) dtype: float32
INFO: input.3.bias: shape: (4,) dtype: float32
INFO: output.1.output: name: tf.math.add_9/Add:0 shape: (1, 48, 48, 4) dtype: <dtype: 'float32'>

INFO: onnx_op_type: Concat onnx_op_name: Concat_17
INFO: input_name.1: onnx::Concat_470 shape: None dtype: None
INFO: input_name.2: onnx::Concat_476 shape: None dtype: None
INFO: input_name.3: onnx::Concat_482 shape: None dtype: None
INFO: output_name.1: input.68 shape: None dtype: None
ERROR: The trace log is below.
Traceback (most recent call last):
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 176, in print_wrapper_func
    result = func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 225, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\ops\Concat.py", line 61, in make_node
    tensor_rank=len(shape),
TypeError: object of type 'NoneType' has no len()
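The traceback shows `len(shape)` being called on a shape that is still `None`: the intermediate tensors of this model carry no shape information (every `shape: None dtype: None` line above), so Concat cannot compute a tensor rank. A minimal sketch of the defensive pattern (hypothetical helper, not the actual onnx2tf code):

```python
def tensor_rank(shape):
    """Return the rank of a tensor shape, or None when the shape is unknown.

    ONNX models that have not been run through shape inference (e.g. via
    onnx-simplifier) can carry None shapes on intermediate tensors, and
    calling len() on None raises exactly the TypeError seen above.
    """
    if shape is None:
        return None
    return len(shape)

print(tensor_rank(None))
print(tensor_rank([1, 48, 48, 8]))
```

This is why the fix suggested below starts with onnx-simplifier: it runs shape inference and constant folding, so downstream ops see concrete shapes instead of `None`.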
PINTO0309 commented 1 year ago

First, optimize the model with onnx-simplifier. (This work is expected to become unnecessary in the future.)

onnxsim model.onnx model.onnx

Next, create a JSON file to customize the shape transformation behavior of Reshape_77. https://github.com/PINTO0309/onnx2tf#parameter-replacement
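A replacement file is a JSON document passed via `-prf replace.json`. The sketch below follows the format described in the parameter-replacement section of the README; the `param_name` and `values` here are purely illustrative and must be read from your own model (e.g. in Netron), not copied verbatim:

```json
{
  "format_version": 1,
  "operations": [
    {
      "op_name": "Reshape_77",
      "param_target": "inputs",
      "param_name": "onnx::Reshape_396",
      "values": [1, 6, 6, 128]
    }
  ]
}
```

The idea is to pin the post-transpose (NHWC) shape for the Reshape that onnx2tf cannot infer on its own.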

PINTO0309 commented 1 year ago

Fixes: https://github.com/PINTO0309/onnx2tf/compare/0.0.27...0.0.28 https://github.com/PINTO0309/onnx2tf/releases/tag/0.0.28

wwdok commented 1 year ago

Hi @PINTO0309, 0.0.28 does not seem to fully fix this issue. I upgraded to 0.0.28 and reran `onnx2tf -i model.onnx` without `-prf replace.json`, and it throws another error:

...
INFO: onnx_op_type: Mul onnx_op_name: Mul_78
INFO: input_name.1: onnx::Reshape_396 shape: [1, 128, 6, 6] dtype: float32
INFO: input_name.2: onnx::Mul_419 shape: [1, 128, 1, 1] dtype: float32
INFO: output_name.1: onnx::MaxPool_420 shape: [1, 128, 6, 6] dtype: float32
ERROR: The trace log is below.
Traceback (most recent call last):
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 176, in print_wrapper_func
line 225, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\utils\common_functions.py", line 32, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\onnx2tf\ops\Mul.py", line 84, in make_node
    tf.math.multiply(
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\util\traceback_utils.py", line 153, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\keras\layers\core\tf_op_layer.py", line 107, in handle
    return TFOpLambda(op)(*args, **kwargs)
  File "c:\users\user\appdata\local\programs\python\python38\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
ValueError: Exception encountered when calling layer "tf.math.multiply_4" (type TFOpLambda).

Dimensions must be equal, but are 6 and 128 for '{{node tf.math.multiply_4/Mul}} = Mul[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,6,6,128], [1,128,1,1].

Call arguments received by layer "tf.math.multiply_4" (type TFOpLambda):
  • x=tf.Tensor(shape=(1, 6, 6, 128), dtype=float32)
  • y=tf.Tensor(shape=(1, 128, 1, 1), dtype=float32)
  • name='Mul_78'
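The shape mismatch above is the NCHW-vs-NHWC layout problem in miniature: the activation has already been transposed to NHWC `(1, 6, 6, 128)`, but the constant operand is still in NCHW layout `(1, 128, 1, 1)`, so the trailing dimensions no longer broadcast. A minimal NumPy sketch (illustrative shapes taken from the log, not onnx2tf's actual code path):

```python
import numpy as np

x = np.zeros((1, 6, 6, 128), dtype=np.float32)   # activation, already NHWC
y = np.zeros((1, 128, 1, 1), dtype=np.float32)   # constant, still NCHW

# Direct multiply fails: trailing dims (6, 6, 128) vs (128, 1, 1) don't broadcast.
try:
    _ = x * y
except ValueError as e:
    print("broadcast error:", e)

# Transposing the constant to NHWC, (1, 1, 1, 128), makes broadcasting succeed.
y_nhwc = y.transpose(0, 2, 3, 1)
print((x * y_nhwc).shape)
```

The converter has to know that `y` is a layout-sensitive constant in order to apply that transpose, which is exactly what the `-prf` replacement file spells out when inference fails.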

If you mean that in this case, where the Reshape OP performs dimensional decompression, I must specify `-prf replace.json`, that's also fine ~

PINTO0309 commented 1 year ago

If you mean that in this case, where the Reshape OP performs dimensional decompression, I must specify `-prf replace.json`, that's also fine ~

You are right.

In order to accommodate models such as YOLOv5 and YOLOv7, which contain a large number of multi-dimensional Reshapes and Transposes beyond 5 dimensions, it is currently necessary to adjust the behavior of the tool by specifying the -prf parameter.

I have spent two years without coming up with a good solution to this problem. I welcome pull requests if you have any good ideas.
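The reason Reshape is so hard to convert automatically can be shown in a few lines of NumPy: once the data has been transposed from NCHW to NHWC, naively reusing the ONNX Reshape target shape scrambles the element order, so the converter cannot know the correct NHWC target without extra information. A small illustrative sketch:

```python
import numpy as np

# NCHW tensor with distinct values so the element order is visible.
x_nchw = np.arange(8).reshape(1, 2, 2, 2)

# ONNX semantics: Reshape flattens in NCHW memory order.
onnx_result = x_nchw.reshape(1, 8)

# TF side: the data was transposed to NHWC first; naively reusing the
# same Reshape target shape yields a different element order.
x_nhwc = x_nchw.transpose(0, 2, 3, 1)
naive_result = x_nhwc.reshape(1, 8)

print(onnx_result.tolist())   # [[0, 1, 2, 3, 4, 5, 6, 7]]
print(naive_result.tolist())  # [[0, 4, 1, 5, 2, 6, 3, 7]]
```

Since both results are valid `(1, 8)` tensors, no shape check can catch the mismatch; only a human-supplied hint (the `-prf` replacement file) or a smarter layout-tracking pass can resolve it.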