PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

Unet Int8 quantization #723

Closed ElectroMonk closed 1 week ago

ElectroMonk commented 1 week ago

Issue Type

Others

OS

Linux

onnx2tf version number

1.26.1

onnx version number

1.17.0

onnxruntime version number

1.19.2

onnxsim (onnx_simplifier) version number

0.4.36

tensorflow version number

2.17.0

Download URL for ONNX

https://drive.google.com/file/d/1PwK_DwBY2DP3jo-Z19mqgKTK6mOcVKE2/view

Parameter Replacement JSON

{
    "format_version": 1,
    "operations": [
      {
        "op_name": "wa/decoder1/dec1conv2/Conv_output_0",
        "param_target": "outputs",
        "param_name": "Add:0",
        "post_process_transpose_perm": [0,3,2,1]
      },
      {
        "op_name": "wa/conv/Conv",
        "param_target": "outputs",
        "param_name": "/Add:0",
        "post_process_transpose_perm": [0,3,2,1]
      }
    ]
}

Description

Hello, I'm trying to convert a UNet to Int8. The conversion to Float32 and Float16 works, as does dynamic-range quantization. However, I would like to use integer quantization (integer_quant or full_integer_quant). The accuracy of the TFLite integer_quant models drops dramatically and they can no longer be used. Three layers are reported as Unmatched, so the error is probably there. Unfortunately, I'm not quite sure how to use the parameter replacement JSON; I tried specifying the replacement JSON file shown above.
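
For reference, a replacement file like the one above is passed to onnx2tf at conversion time. A minimal sketch using the Python API, assuming the keyword arguments mirror the CLI flags -prf, -oiqt and -cotof, and that the JSON is saved as replace.json:

```python
# Minimal sketch (not verified against this exact setup): pass the parameter
# replacement JSON when converting. Keyword names are assumed to match the
# CLI flags -prf / -oiqt / -cotof; "replace.json" is a placeholder file name.
import onnx2tf

onnx2tf.convert(
    input_onnx_file_path="Unet.onnx",
    output_folder_path="UNetTFL",
    param_replacement_file="replace.json",
    output_integer_quantized_tflite=True,                # -oiqt
    check_onnx_tf_outputs_elementwise_close_full=True,   # -cotof
)
```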

[Screenshot from 2024-11-16 16:36:44]

When I inspect the integer_quant TFLite model with Netron, it is worth mentioning that the biases of the last layers are almost dead (close to zero). It would be great if someone could help me; thanks in advance! The complete output is below.
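
The same per-tensor quantization parameters that Netron shows can also be dumped programmatically. A minimal sketch, assuming the integer-quantized file is written to UNetTFL/Unet_integer_quant.tflite (onnx2tf's usual naming):

```python
# Minimal sketch: dump tensor names, shapes and quantization scales of the
# integer-quantized model to inspect the (int32) bias tensors programmatically.
# The model path is an assumption based on onnx2tf's usual output naming.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="UNetTFL/Unet_integer_quant.tflite")
interpreter.allocate_tensors()

for t in interpreter.get_tensor_details():
    if t["dtype"] == np.int32:  # bias tensors of quantized Conv layers
        scales = t["quantization_parameters"]["scales"]
        print(t["name"], t["shape"], "bias scales:", scales[:4])
```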

[Screenshot from 2024-11-16 16:32:11]

logs ``` onnx2tf -cotof -i Unet.onnx -o UNetTFL -oiqt Model optimizing started ============================================================ Simplifying... Finish! Here is the difference: ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓ ┃ ┃ Original Model ┃ Simplified Model ┃ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩ │ Concat │ 4 │ 4 │ │ Constant │ 46 │ 46 │ │ Conv │ 19 │ 19 │ │ ConvTranspose │ 4 │ 4 │ │ MaxPool │ 4 │ 4 │ │ Relu │ 18 │ 18 │ │ Sigmoid │ 1 │ 1 │ │ Model Size │ 29.6MiB │ 29.6MiB │ └───────────────┴────────────────┴──────────────────┘ Simplifying... Finish! Here is the difference: ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓ ┃ ┃ Original Model ┃ Simplified Model ┃ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩ │ Concat │ 4 │ 4 │ │ Constant │ 46 │ 46 │ │ Conv │ 19 │ 19 │ │ ConvTranspose │ 4 │ 4 │ │ MaxPool │ 4 │ 4 │ │ Relu │ 18 │ 18 │ │ Sigmoid │ 1 │ 1 │ │ Model Size │ 29.6MiB │ 29.6MiB │ └───────────────┴────────────────┴──────────────────┘ Simplifying... Finish! Here is the difference: ┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓ ┃ ┃ Original Model ┃ Simplified Model ┃ ┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩ │ Concat │ 4 │ 4 │ │ Constant │ 46 │ 46 │ │ Conv │ 19 │ 19 │ │ ConvTranspose │ 4 │ 4 │ │ MaxPool │ 4 │ 4 │ │ Relu │ 18 │ 18 │ │ Sigmoid │ 1 │ 1 │ │ Model Size │ 29.6MiB │ 29.6MiB │ └───────────────┴────────────────┴──────────────────┘ Model optimizing complete! Automatic generation of each OP name started ======================================== Automatic generation of each OP name complete! Model loaded ======================================================================== Model conversion started ============================================================ INFO: input_op_name: input shape: [1, 3, 256, 256] dtype: float32 INFO: 2 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder1/enc1conv1/Conv INFO: input_name.1: input shape: [1, 3, 256, 256] dtype: float32 INFO: input_name.2: onnx::Conv_188 shape: [32, 3, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_189 shape: [32] dtype: float32 INFO: output_name.1: wa/encoder1/enc1conv1/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: input shape: (1, 256, 256, 3) dtype: INFO: input.2.weights: shape: (3, 3, 3, 32) dtype: INFO: input.3.bias: shape: (32,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add/Add:0 shape: (1, 256, 256, 32) dtype: INFO: 3 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder1/enc1relu1/Relu INFO: input_name.1: wa/encoder1/enc1conv1/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/encoder1/enc1relu1/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add/Add:0 shape: (1, 256, 256, 32) dtype: INFO: output.1.output: name: tf.nn.relu/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: 4 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder1/enc1conv2/Conv INFO: input_name.1: wa/encoder1/enc1relu1/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: input_name.2: onnx::Conv_191 shape: [32, 32, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_192 shape: [32] dtype: float32 INFO: output_name.1: wa/encoder1/enc1conv2/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu/Relu:0 shape: (1, 
256, 256, 32) dtype: INFO: input.2.weights: shape: (3, 3, 32, 32) dtype: INFO: input.3.bias: shape: (32,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_1/Add:0 shape: (1, 256, 256, 32) dtype: INFO: 5 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder1/enc1relu2/Relu INFO: input_name.1: wa/encoder1/enc1conv2/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/encoder1/enc1relu2/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_1/Add:0 shape: (1, 256, 256, 32) dtype: INFO: output.1.output: name: tf.nn.relu_1/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: 6 / 51 INFO: onnx_op_type: MaxPool onnx_op_name: wa/pool1/MaxPool INFO: input_name.1: wa/encoder1/enc1relu2/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/pool1/MaxPool_output_0 shape: [1, 32, 128, 128] dtype: float32 INFO: tf_op_type: max_pool_v2 INFO: input.1.input: name: tf.nn.relu_1/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: input.2.filters: INFO: input.3.kernel_shape: val: [2, 2] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: [0, 0, 0, 0] INFO: input.7.ceil_mode: val: False INFO: output.1.output0: name: tf.nn.max_pool2d/MaxPool2d:0 shape: (1, 128, 128, 32) dtype: INFO: output.2.output1: INFO: 7 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder2/enc2conv1/Conv INFO: input_name.1: wa/pool1/MaxPool_output_0 shape: [1, 32, 128, 128] dtype: float32 INFO: input_name.2: onnx::Conv_194 shape: [64, 32, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_195 shape: [64] dtype: float32 INFO: output_name.1: wa/encoder2/enc2conv1/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.max_pool2d/MaxPool2d:0 shape: (1, 128, 128, 32) dtype: INFO: input.2.weights: shape: (3, 3, 32, 64) dtype: INFO: input.3.bias: shape: (64,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_2/Add:0 shape: (1, 128, 128, 64) dtype: INFO: 8 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder2/enc2relu1/Relu INFO: input_name.1: wa/encoder2/enc2conv1/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/encoder2/enc2relu1/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_2/Add:0 shape: (1, 128, 128, 64) dtype: INFO: output.1.output: name: tf.nn.relu_2/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: 9 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder2/enc2conv2/Conv INFO: input_name.1: wa/encoder2/enc2relu1/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: input_name.2: onnx::Conv_197 shape: [64, 64, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_198 shape: [64] dtype: float32 INFO: output_name.1: wa/encoder2/enc2conv2/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_2/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: input.2.weights: shape: (3, 3, 64, 64) dtype: INFO: input.3.bias: shape: (64,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_3/Add:0 
shape: (1, 128, 128, 64) dtype: INFO: 10 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder2/enc2relu2/Relu INFO: input_name.1: wa/encoder2/enc2conv2/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/encoder2/enc2relu2/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_3/Add:0 shape: (1, 128, 128, 64) dtype: INFO: output.1.output: name: tf.nn.relu_3/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: 11 / 51 INFO: onnx_op_type: MaxPool onnx_op_name: wa/pool2/MaxPool INFO: input_name.1: wa/encoder2/enc2relu2/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/pool2/MaxPool_output_0 shape: [1, 64, 64, 64] dtype: float32 INFO: tf_op_type: max_pool_v2 INFO: input.1.input: name: tf.nn.relu_3/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: input.2.filters: INFO: input.3.kernel_shape: val: [2, 2] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: [0, 0, 0, 0] INFO: input.7.ceil_mode: val: False INFO: output.1.output0: name: tf.nn.max_pool2d_1/MaxPool2d:0 shape: (1, 64, 64, 64) dtype: INFO: output.2.output1: INFO: 12 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder3/enc3conv1/Conv INFO: input_name.1: wa/pool2/MaxPool_output_0 shape: [1, 64, 64, 64] dtype: float32 INFO: input_name.2: onnx::Conv_200 shape: [128, 64, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_201 shape: [128] dtype: float32 INFO: output_name.1: wa/encoder3/enc3conv1/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.compat.v1.transpose/transpose:0 shape: (1, 64, 64, 64) dtype: INFO: input.2.weights: shape: (3, 3, 64, 128) dtype: INFO: input.3.bias: shape: (128,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_6/Add:0 shape: (1, 64, 64, 128) dtype: INFO: 13 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder3/enc3relu1/Relu INFO: input_name.1: wa/encoder3/enc3conv1/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/encoder3/enc3relu1/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_6/Add:0 shape: (1, 64, 64, 128) dtype: INFO: output.1.output: name: tf.nn.relu_4/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: 14 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder3/enc3conv2/Conv INFO: input_name.1: wa/encoder3/enc3relu1/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: input_name.2: onnx::Conv_203 shape: [128, 128, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_204 shape: [128] dtype: float32 INFO: output_name.1: wa/encoder3/enc3conv2/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_4/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: input.2.weights: shape: (3, 3, 128, 128) dtype: INFO: input.3.bias: shape: (128,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_7/Add:0 shape: (1, 64, 64, 128) dtype: INFO: 15 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder3/enc3relu2/Relu INFO: input_name.1: wa/encoder3/enc3conv2/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/encoder3/enc3relu2/Relu_output_0 shape: [1, 128, 64, 64] dtype: 
float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_7/Add:0 shape: (1, 64, 64, 128) dtype: INFO: output.1.output: name: tf.nn.relu_5/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: 16 / 51 INFO: onnx_op_type: MaxPool onnx_op_name: wa/pool3/MaxPool INFO: input_name.1: wa/encoder3/enc3relu2/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/pool3/MaxPool_output_0 shape: [1, 128, 32, 32] dtype: float32 INFO: tf_op_type: max_pool_v2 INFO: input.1.input: name: tf.nn.relu_5/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: input.2.filters: INFO: input.3.kernel_shape: val: [2, 2] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: [0, 0, 0, 0] INFO: input.7.ceil_mode: val: False INFO: output.1.output0: name: tf.nn.max_pool2d_2/MaxPool2d:0 shape: (1, 32, 32, 128) dtype: INFO: output.2.output1: INFO: 17 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder4/enc4conv1/Conv INFO: input_name.1: wa/pool3/MaxPool_output_0 shape: [1, 128, 32, 32] dtype: float32 INFO: input_name.2: onnx::Conv_206 shape: [256, 128, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_207 shape: [256] dtype: float32 INFO: output_name.1: wa/encoder4/enc4conv1/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.max_pool2d_2/MaxPool2d:0 shape: (1, 32, 32, 128) dtype: INFO: input.2.weights: shape: (3, 3, 128, 256) dtype: INFO: input.3.bias: shape: (256,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_8/Add:0 shape: (1, 32, 32, 256) dtype: INFO: 18 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder4/enc4relu1/Relu INFO: input_name.1: wa/encoder4/enc4conv1/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: wa/encoder4/enc4relu1/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_8/Add:0 shape: (1, 32, 32, 256) dtype: INFO: output.1.output: name: tf.nn.relu_6/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: 19 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/encoder4/enc4conv2/Conv INFO: input_name.1: wa/encoder4/enc4relu1/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: input_name.2: onnx::Conv_209 shape: [256, 256, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_210 shape: [256] dtype: float32 INFO: output_name.1: wa/encoder4/enc4conv2/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_6/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: input.2.weights: shape: (3, 3, 256, 256) dtype: INFO: input.3.bias: shape: (256,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_9/Add:0 shape: (1, 32, 32, 256) dtype: INFO: 20 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/encoder4/enc4relu2/Relu INFO: input_name.1: wa/encoder4/enc4conv2/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: wa/encoder4/enc4relu2/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_9/Add:0 shape: (1, 32, 32, 256) dtype: INFO: output.1.output: name: tf.nn.relu_7/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: 21 / 51 INFO: onnx_op_type: MaxPool onnx_op_name: wa/pool4/MaxPool INFO: input_name.1: 
wa/encoder4/enc4relu2/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: wa/pool4/MaxPool_output_0 shape: [1, 256, 16, 16] dtype: float32 INFO: tf_op_type: max_pool_v2 INFO: input.1.input: name: tf.nn.relu_7/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: input.2.filters: INFO: input.3.kernel_shape: val: [2, 2] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: [0, 0, 0, 0] INFO: input.7.ceil_mode: val: False INFO: output.1.output0: name: tf.nn.max_pool2d_3/MaxPool2d:0 shape: (1, 16, 16, 256) dtype: INFO: output.2.output1: INFO: 22 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/bottleneck/bottleneckconv1/Conv INFO: input_name.1: wa/pool4/MaxPool_output_0 shape: [1, 256, 16, 16] dtype: float32 INFO: input_name.2: onnx::Conv_212 shape: [512, 256, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_213 shape: [512] dtype: float32 INFO: output_name.1: wa/bottleneck/bottleneckconv1/Conv_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.max_pool2d_3/MaxPool2d:0 shape: (1, 16, 16, 256) dtype: INFO: input.2.weights: shape: (3, 3, 256, 512) dtype: INFO: input.3.bias: shape: (512,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_10/Add:0 shape: (1, 16, 16, 512) dtype: INFO: 23 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/bottleneck/bottleneckrelu1/Relu INFO: input_name.1: wa/bottleneck/bottleneckconv1/Conv_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: output_name.1: wa/bottleneck/bottleneckrelu1/Relu_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_10/Add:0 shape: (1, 16, 16, 512) dtype: INFO: output.1.output: name: tf.nn.relu_8/Relu:0 shape: (1, 16, 16, 512) dtype: INFO: 24 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/bottleneck/bottleneckconv2/Conv INFO: input_name.1: wa/bottleneck/bottleneckrelu1/Relu_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: input_name.2: onnx::Conv_215 shape: [512, 512, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_216 shape: [512] dtype: float32 INFO: output_name.1: wa/bottleneck/bottleneckconv2/Conv_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_8/Relu:0 shape: (1, 16, 16, 512) dtype: INFO: input.2.weights: shape: (3, 3, 512, 512) dtype: INFO: input.3.bias: shape: (512,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_11/Add:0 shape: (1, 16, 16, 512) dtype: INFO: 25 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/bottleneck/bottleneckrelu2/Relu INFO: input_name.1: wa/bottleneck/bottleneckconv2/Conv_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: output_name.1: wa/bottleneck/bottleneckrelu2/Relu_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_11/Add:0 shape: (1, 16, 16, 512) dtype: INFO: output.1.output: name: tf.nn.relu_9/Relu:0 shape: (1, 16, 16, 512) dtype: INFO: 26 / 51 INFO: onnx_op_type: ConvTranspose onnx_op_name: wa/upconv4/ConvTranspose INFO: input_name.1: wa/bottleneck/bottleneckrelu2/Relu_output_0 shape: [1, 512, 16, 16] dtype: float32 INFO: input_name.2: upconv4.weight shape: [512, 256, 2, 2] dtype: float32 INFO: input_name.3: upconv4.bias 
shape: [256] dtype: float32 INFO: output_name.1: wa/upconv4/ConvTranspose_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: conv2d_transpose_v2 INFO: input.1.input: name: tf.nn.relu_9/Relu:0 shape: (1, 16, 16, 512) dtype: INFO: input.2.filters: shape: (2, 2, 256, 512) dtype: INFO: input.3.output_shape: val: [1, 32, 32, 256] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: VALID INFO: input.7.group: val: 1 INFO: input.8.bias: shape: (256,) dtype: float32 INFO: output.1.output: name: tf.math.add_12/Add:0 shape: (1, 32, 32, 256) dtype: INFO: 27 / 51 INFO: onnx_op_type: Concat onnx_op_name: wa/Concat INFO: input_name.1: wa/upconv4/ConvTranspose_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: input_name.2: wa/encoder4/enc4relu2/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: wa/Concat_output_0 shape: [1, 512, 32, 32] dtype: float32 INFO: tf_op_type: concat INFO: input.1.input0: name: tf.math.add_12/Add:0 shape: (1, 32, 32, 256) dtype: INFO: input.2.input1: name: tf.nn.relu_7/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: input.3.axis: val: 3 INFO: output.1.output: name: tf.concat/concat:0 shape: (1, 32, 32, 512) dtype: INFO: 28 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder4/dec4conv1/Conv INFO: input_name.1: wa/Concat_output_0 shape: [1, 512, 32, 32] dtype: float32 INFO: input_name.2: onnx::Conv_218 shape: [256, 512, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_219 shape: [256] dtype: float32 INFO: output_name.1: wa/decoder4/dec4conv1/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.concat/concat:0 shape: (1, 32, 32, 512) dtype: INFO: input.2.weights: shape: (3, 3, 512, 256) dtype: INFO: input.3.bias: shape: (256,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_13/Add:0 shape: (1, 32, 32, 256) dtype: INFO: 29 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder4/dec4relu1/Relu INFO: input_name.1: wa/decoder4/dec4conv1/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: wa/decoder4/dec4relu1/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_13/Add:0 shape: (1, 32, 32, 256) dtype: INFO: output.1.output: name: tf.nn.relu_10/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: 30 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder4/dec4conv2/Conv INFO: input_name.1: wa/decoder4/dec4relu1/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: input_name.2: onnx::Conv_221 shape: [256, 256, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_222 shape: [256] dtype: float32 INFO: output_name.1: wa/decoder4/dec4conv2/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_10/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: input.2.weights: shape: (3, 3, 256, 256) dtype: INFO: input.3.bias: shape: (256,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_14/Add:0 shape: (1, 32, 32, 256) dtype: INFO: 31 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder4/dec4relu2/Relu INFO: input_name.1: wa/decoder4/dec4conv2/Conv_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: output_name.1: 
wa/decoder4/dec4relu2/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_14/Add:0 shape: (1, 32, 32, 256) dtype: INFO: output.1.output: name: tf.nn.relu_11/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: 32 / 51 INFO: onnx_op_type: ConvTranspose onnx_op_name: wa/upconv3/ConvTranspose INFO: input_name.1: wa/decoder4/dec4relu2/Relu_output_0 shape: [1, 256, 32, 32] dtype: float32 INFO: input_name.2: upconv3.weight shape: [256, 128, 2, 2] dtype: float32 INFO: input_name.3: upconv3.bias shape: [128] dtype: float32 INFO: output_name.1: wa/upconv3/ConvTranspose_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: conv2d_transpose_v2 INFO: input.1.input: name: tf.nn.relu_11/Relu:0 shape: (1, 32, 32, 256) dtype: INFO: input.2.filters: shape: (2, 2, 128, 256) dtype: INFO: input.3.output_shape: val: [1, 64, 64, 128] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: VALID INFO: input.7.group: val: 1 INFO: input.8.bias: shape: (128,) dtype: float32 INFO: output.1.output: name: tf.math.add_15/Add:0 shape: (1, 64, 64, 128) dtype: INFO: 33 / 51 INFO: onnx_op_type: Concat onnx_op_name: wa/Concat_1 INFO: input_name.1: wa/upconv3/ConvTranspose_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: input_name.2: wa/encoder3/enc3relu2/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/Concat_1_output_0 shape: [1, 256, 64, 64] dtype: float32 INFO: tf_op_type: concat INFO: input.1.input0: name: tf.math.add_15/Add:0 shape: (1, 64, 64, 128) dtype: INFO: input.2.input1: name: tf.nn.relu_5/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: input.3.axis: val: 3 INFO: output.1.output: name: tf.concat_1/concat:0 shape: (1, 64, 64, 256) dtype: INFO: 34 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder3/dec3conv1/Conv INFO: input_name.1: wa/Concat_1_output_0 shape: [1, 256, 64, 64] dtype: float32 INFO: input_name.2: onnx::Conv_224 shape: [128, 256, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_225 shape: [128] dtype: float32 INFO: output_name.1: wa/decoder3/dec3conv1/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.concat_1/concat:0 shape: (1, 64, 64, 256) dtype: INFO: input.2.weights: shape: (3, 3, 256, 128) dtype: INFO: input.3.bias: shape: (128,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_16/Add:0 shape: (1, 64, 64, 128) dtype: INFO: 35 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder3/dec3relu1/Relu INFO: input_name.1: wa/decoder3/dec3conv1/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/decoder3/dec3relu1/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_16/Add:0 shape: (1, 64, 64, 128) dtype: INFO: output.1.output: name: tf.nn.relu_12/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: 36 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder3/dec3conv2/Conv INFO: input_name.1: wa/decoder3/dec3relu1/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: input_name.2: onnx::Conv_227 shape: [128, 128, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_228 shape: [128] dtype: float32 INFO: output_name.1: wa/decoder3/dec3conv2/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_12/Relu:0 shape: 
(1, 64, 64, 128) dtype: INFO: input.2.weights: shape: (3, 3, 128, 128) dtype: INFO: input.3.bias: shape: (128,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_17/Add:0 shape: (1, 64, 64, 128) dtype: INFO: 37 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder3/dec3relu2/Relu INFO: input_name.1: wa/decoder3/dec3conv2/Conv_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: output_name.1: wa/decoder3/dec3relu2/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_17/Add:0 shape: (1, 64, 64, 128) dtype: INFO: output.1.output: name: tf.nn.relu_13/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: 38 / 51 INFO: onnx_op_type: ConvTranspose onnx_op_name: wa/upconv2/ConvTranspose INFO: input_name.1: wa/decoder3/dec3relu2/Relu_output_0 shape: [1, 128, 64, 64] dtype: float32 INFO: input_name.2: upconv2.weight shape: [128, 64, 2, 2] dtype: float32 INFO: input_name.3: upconv2.bias shape: [64] dtype: float32 INFO: output_name.1: wa/upconv2/ConvTranspose_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: conv2d_transpose_v2 INFO: input.1.input: name: tf.nn.relu_13/Relu:0 shape: (1, 64, 64, 128) dtype: INFO: input.2.filters: shape: (2, 2, 64, 128) dtype: INFO: input.3.output_shape: val: [1, 128, 128, 64] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: VALID INFO: input.7.group: val: 1 INFO: input.8.bias: shape: (64,) dtype: float32 INFO: output.1.output: name: tf.math.add_18/Add:0 shape: (1, 128, 128, 64) dtype: INFO: 39 / 51 INFO: onnx_op_type: Concat onnx_op_name: wa/Concat_2 INFO: input_name.1: wa/upconv2/ConvTranspose_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: input_name.2: wa/encoder2/enc2relu2/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/Concat_2_output_0 shape: [1, 128, 128, 128] dtype: float32 INFO: tf_op_type: concat INFO: input.1.input0: name: tf.math.add_18/Add:0 shape: (1, 128, 128, 64) dtype: INFO: input.2.input1: name: tf.nn.relu_3/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: input.3.axis: val: 3 INFO: output.1.output: name: tf.concat_2/concat:0 shape: (1, 128, 128, 128) dtype: INFO: 40 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder2/dec2conv1/Conv INFO: input_name.1: wa/Concat_2_output_0 shape: [1, 128, 128, 128] dtype: float32 INFO: input_name.2: onnx::Conv_230 shape: [64, 128, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_231 shape: [64] dtype: float32 INFO: output_name.1: wa/decoder2/dec2conv1/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.compat.v1.transpose_1/transpose:0 shape: (1, 128, 128, 128) dtype: INFO: input.2.weights: shape: (3, 3, 128, 64) dtype: INFO: input.3.bias: shape: (64,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_21/Add:0 shape: (1, 128, 128, 64) dtype: INFO: 41 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder2/dec2relu1/Relu INFO: input_name.1: wa/decoder2/dec2conv1/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/decoder2/dec2relu1/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_21/Add:0 shape: (1, 128, 128, 64) dtype: INFO: output.1.output: 
name: tf.nn.relu_14/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: 42 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder2/dec2conv2/Conv INFO: input_name.1: wa/decoder2/dec2relu1/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: input_name.2: onnx::Conv_233 shape: [64, 64, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_234 shape: [64] dtype: float32 INFO: output_name.1: wa/decoder2/dec2conv2/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_14/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: input.2.weights: shape: (3, 3, 64, 64) dtype: INFO: input.3.bias: shape: (64,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_22/Add:0 shape: (1, 128, 128, 64) dtype: INFO: 43 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder2/dec2relu2/Relu INFO: input_name.1: wa/decoder2/dec2conv2/Conv_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: output_name.1: wa/decoder2/dec2relu2/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_22/Add:0 shape: (1, 128, 128, 64) dtype: INFO: output.1.output: name: tf.nn.relu_15/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: 44 / 51 INFO: onnx_op_type: ConvTranspose onnx_op_name: wa/upconv1/ConvTranspose INFO: input_name.1: wa/decoder2/dec2relu2/Relu_output_0 shape: [1, 64, 128, 128] dtype: float32 INFO: input_name.2: upconv1.weight shape: [64, 32, 2, 2] dtype: float32 INFO: input_name.3: upconv1.bias shape: [32] dtype: float32 INFO: output_name.1: wa/upconv1/ConvTranspose_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: conv2d_transpose_v2 INFO: input.1.input: name: tf.nn.relu_15/Relu:0 shape: (1, 128, 128, 64) dtype: INFO: input.2.filters: shape: (2, 2, 32, 64) dtype: INFO: input.3.output_shape: val: [1, 256, 256, 32] INFO: input.4.strides: val: [2, 2] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: VALID INFO: input.7.group: val: 1 INFO: input.8.bias: shape: (32,) dtype: float32 INFO: output.1.output: name: tf.math.add_23/Add:0 shape: (1, 256, 256, 32) dtype: INFO: 45 / 51 INFO: onnx_op_type: Concat onnx_op_name: wa/Concat_3 INFO: input_name.1: wa/upconv1/ConvTranspose_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: input_name.2: wa/encoder1/enc1relu2/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/Concat_3_output_0 shape: [1, 64, 256, 256] dtype: float32 INFO: tf_op_type: concat INFO: input.1.input0: name: tf.math.add_23/Add:0 shape: (1, 256, 256, 32) dtype: INFO: input.2.input1: name: tf.nn.relu_1/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: input.3.axis: val: 3 INFO: output.1.output: name: tf.concat_3/concat:0 shape: (1, 256, 256, 64) dtype: INFO: 46 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder1/dec1conv1/Conv INFO: input_name.1: wa/Concat_3_output_0 shape: [1, 64, 256, 256] dtype: float32 INFO: input_name.2: onnx::Conv_236 shape: [32, 64, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_237 shape: [32] dtype: float32 INFO: output_name.1: wa/decoder1/dec1conv1/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.concat_3/concat:0 shape: (1, 256, 256, 64) dtype: INFO: input.2.weights: shape: (3, 3, 64, 32) dtype: INFO: input.3.bias: shape: (32,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: 
input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_24/Add:0 shape: (1, 256, 256, 32) dtype: INFO: 47 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder1/dec1relu1/Relu INFO: input_name.1: wa/decoder1/dec1conv1/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/decoder1/dec1relu1/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_24/Add:0 shape: (1, 256, 256, 32) dtype: INFO: output.1.output: name: tf.nn.relu_16/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: 48 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/decoder1/dec1conv2/Conv INFO: input_name.1: wa/decoder1/dec1relu1/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: input_name.2: onnx::Conv_239 shape: [32, 32, 3, 3] dtype: float32 INFO: input_name.3: onnx::Conv_240 shape: [32] dtype: float32 INFO: output_name.1: wa/decoder1/dec1conv2/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_16/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: input.2.weights: shape: (3, 3, 32, 32) dtype: INFO: input.3.bias: shape: (32,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_25/Add:0 shape: (1, 256, 256, 32) dtype: INFO: 49 / 51 INFO: onnx_op_type: Relu onnx_op_name: wa/decoder1/dec1relu2/Relu INFO: input_name.1: wa/decoder1/dec1conv2/Conv_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: output_name.1: wa/decoder1/dec1relu2/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: tf_op_type: relu INFO: input.1.features: name: tf.math.add_25/Add:0 shape: (1, 256, 256, 32) dtype: INFO: output.1.output: name: tf.nn.relu_17/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: 50 / 51 INFO: onnx_op_type: Conv onnx_op_name: wa/conv/Conv INFO: input_name.1: wa/decoder1/dec1relu2/Relu_output_0 shape: [1, 32, 256, 256] dtype: float32 INFO: input_name.2: conv.weight shape: [3, 32, 1, 1] dtype: float32 INFO: input_name.3: conv.bias shape: [3] dtype: float32 INFO: output_name.1: wa/conv/Conv_output_0 shape: [1, 3, 256, 256] dtype: float32 INFO: tf_op_type: convolution_v2 INFO: input.1.input: name: tf.nn.relu_17/Relu:0 shape: (1, 256, 256, 32) dtype: INFO: input.2.weights: shape: (1, 1, 32, 3) dtype: INFO: input.3.bias: shape: (3,) dtype: INFO: input.4.strides: val: [1, 1] INFO: input.5.dilations: val: [1, 1] INFO: input.6.padding: val: SAME INFO: input.7.group: val: 1 INFO: output.1.output: name: tf.math.add_26/Add:0 shape: (1, 256, 256, 3) dtype: INFO: 51 / 51 INFO: onnx_op_type: Sigmoid onnx_op_name: wa/Sigmoid INFO: input_name.1: wa/conv/Conv_output_0 shape: [1, 3, 256, 256] dtype: float32 INFO: output_name.1: output shape: [1, 3, 256, 256] dtype: float32 INFO: tf_op_type: sigmoid INFO: input.1.x: name: tf.math.add_26/Add:0 shape: (1, 256, 256, 3) dtype: INFO: output.1.output: name: tf.math.sigmoid/Sigmoid:0 shape: (1, 256, 256, 3) dtype: saved_model output started ========================================================== Saved artifact at 'UNetTFL'. 
The following endpoints are available: * Endpoint 'serving_default' inputs_0 (POSITIONAL_ONLY): TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name='input') Output Type: TensorSpec(shape=(1, 256, 256, 3), dtype=tf.float32, name=None) Captures: 130825701817888: TensorSpec(shape=(3, 3, 3, 32), dtype=tf.float32, name=None) 130825700524272: TensorSpec(shape=(32,), dtype=tf.float32, name=None) 130825700538000: TensorSpec(shape=(3, 3, 32, 32), dtype=tf.float32, name=None) 130825700538528: TensorSpec(shape=(32,), dtype=tf.float32, name=None) 130825700942496: TensorSpec(shape=(3, 3, 32, 64), dtype=tf.float32, name=None) 130825700941968: TensorSpec(shape=(64,), dtype=tf.float32, name=None) 130825701002400: TensorSpec(shape=(3, 3, 64, 64), dtype=tf.float32, name=None) 130825701002928: TensorSpec(shape=(64,), dtype=tf.float32, name=None) 130825700933696: TensorSpec(shape=(3, 3, 64, 128), dtype=tf.float32, name=None) 130825853755888: TensorSpec(shape=(128,), dtype=tf.float32, name=None) 130825701015248: TensorSpec(shape=(3, 3, 128, 128), dtype=tf.float32, name=None) 130825700537472: TensorSpec(shape=(128,), dtype=tf.float32, name=None) 130825701087488: TensorSpec(shape=(3, 3, 128, 256), dtype=tf.float32, name=None) 130825701804512: TensorSpec(shape=(256,), dtype=tf.float32, name=None) 130825690708304: TensorSpec(shape=(3, 3, 256, 256), dtype=tf.float32, name=None) 130825690708656: TensorSpec(shape=(256,), dtype=tf.float32, name=None) 130825690705664: TensorSpec(shape=(3, 3, 256, 512), dtype=tf.float32, name=None) 130825690777552: TensorSpec(shape=(512,), dtype=tf.float32, name=None) 130825690792528: TensorSpec(shape=(3, 3, 512, 512), dtype=tf.float32, name=None) 130825690793056: TensorSpec(shape=(512,), dtype=tf.float32, name=None) 130825690800976: TensorSpec(shape=(2, 2, 256, 512), dtype=tf.float32, name=None) 130825690806608: TensorSpec(shape=(3, 3, 512, 256), dtype=tf.float32, name=None) 130825690807312: TensorSpec(shape=(256,), dtype=tf.float32, name=None) 130825690820528: TensorSpec(shape=(3, 3, 256, 256), dtype=tf.float32, name=None) 130825690821056: TensorSpec(shape=(256,), dtype=tf.float32, name=None) 130825690876032: TensorSpec(shape=(2, 2, 128, 256), dtype=tf.float32, name=None) 130825690809792: TensorSpec(shape=(3, 3, 256, 128), dtype=tf.float32, name=None) 130825690880960: TensorSpec(shape=(128,), dtype=tf.float32, name=None) 130825690990016: TensorSpec(shape=(3, 3, 128, 128), dtype=tf.float32, name=None) 130825690990544: TensorSpec(shape=(128,), dtype=tf.float32, name=None) 130825690996176: TensorSpec(shape=(2, 2, 64, 128), dtype=tf.float32, name=None) 130825690888176: TensorSpec(shape=(3, 3, 128, 64), dtype=tf.float32, name=None) 130825690874800: TensorSpec(shape=(64,), dtype=tf.float32, name=None) 130825700539584: TensorSpec(shape=(3, 3, 64, 64), dtype=tf.float32, name=None) 130825691056784: TensorSpec(shape=(64,), dtype=tf.float32, name=None) 130825691169712: TensorSpec(shape=(2, 2, 32, 64), dtype=tf.float32, name=None) 130825690877616: TensorSpec(shape=(3, 3, 64, 32), dtype=tf.float32, name=None) 130825691170944: TensorSpec(shape=(32,), dtype=tf.float32, name=None) 130825691181856: TensorSpec(shape=(3, 3, 32, 32), dtype=tf.float32, name=None) 130825691182208: TensorSpec(shape=(32,), dtype=tf.float32, name=None) 130825691207056: TensorSpec(shape=(1, 1, 32, 3), dtype=tf.float32, name=None) 130825691207584: TensorSpec(shape=(3,), dtype=tf.float32, name=None) saved_model output complete! 
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR I0000 00:00:1731769599.297291 121646 devices.cc:67] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 WARNING: All log messages before absl::InitializeLog() is called are written to STDERR W0000 00:00:1731769599.672875 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769599.672899 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. Float32 tflite output complete! I0000 00:00:1731769599.948622 121646 devices.cc:67] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 W0000 00:00:1731769600.230760 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769600.230781 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. Float16 tflite output complete! I0000 00:00:1731769600.469872 121646 devices.cc:67] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 W0000 00:00:1731769600.804852 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769600.804872 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. Dynamic Range Quantization tflite output complete! Signature information for quantization signature_name: serving_default input_name.0: input shape: (1, 256, 256, 3) dtype: output_name.0: output_0 shape: (1, 256, 256, 3) dtype: W0000 00:00:1731769602.338806 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769602.338825 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32 INT8 Quantization tflite output complete! W0000 00:00:1731769610.647038 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769610.647056 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. fully_quantize: 0, inference_type: 6, input_inference_type: INT8, output_inference_type: INT8 Full INT8 Quantization tflite output complete! W0000 00:00:1731769618.814819 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769618.814837 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. INT8 Quantization with int16 activations tflite output complete! W0000 00:00:1731769634.117146 121646 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format. W0000 00:00:1731769634.117166 121646 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency. Full INT8 Quantization with int16 activations tflite output complete! 
ONNX and TF output value validation started ========================================= INFO: validation_conditions: np.allclose(onnx_outputs, tf_outputs, rtol=0.0, atol=0.0001, equal_nan=True) INFO: onnx_output_name: wa/encoder1/enc1conv1/Conv_output_0 tf_output_name: tf.math.add/Add:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder1/enc1relu1/Relu_output_0 tf_output_name: tf.nn.relu/Relu:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder1/enc1conv2/Conv_output_0 tf_output_name: tf.math.add_1/Add:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder1/enc1relu2/Relu_output_0 tf_output_name: tf.nn.relu_1/Relu:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/pool1/MaxPool_output_0 tf_output_name: tf.nn.max_pool2d/MaxPool2d:0 shape: (1, 32, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder2/enc2conv1/Conv_output_0 tf_output_name: tf.math.add_2/Add:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder2/enc2relu1/Relu_output_0 tf_output_name: tf.nn.relu_2/Relu:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder2/enc2conv2/Conv_output_0 tf_output_name: tf.math.add_3/Add:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder2/enc2relu2/Relu_output_0 tf_output_name: tf.nn.relu_3/Relu:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/pool2/MaxPool_output_0 tf_output_name: tf.nn.max_pool2d_1/MaxPool2d:0 shape: (1, 64, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder3/enc3conv1/Conv_output_0 tf_output_name: tf.math.add_6/Add:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder3/enc3relu1/Relu_output_0 tf_output_name: tf.nn.relu_4/Relu:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder3/enc3conv2/Conv_output_0 tf_output_name: tf.math.add_7/Add:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder3/enc3relu2/Relu_output_0 tf_output_name: tf.nn.relu_5/Relu:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/pool3/MaxPool_output_0 tf_output_name: tf.nn.max_pool2d_2/MaxPool2d:0 shape: (1, 128, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder4/enc4conv1/Conv_output_0 tf_output_name: tf.math.add_8/Add:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder4/enc4relu1/Relu_output_0 tf_output_name: tf.nn.relu_6/Relu:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder4/enc4conv2/Conv_output_0 tf_output_name: tf.math.add_9/Add:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/encoder4/enc4relu2/Relu_output_0 tf_output_name: tf.nn.relu_7/Relu:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/pool4/MaxPool_output_0 tf_output_name: tf.nn.max_pool2d_3/MaxPool2d:0 shape: (1, 256, 16, 16) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/bottleneck/bottleneckconv1/Conv_output_0 tf_output_name: tf.math.add_10/Add:0 shape: (1, 512, 16, 16) dtype: float32 
validate_result: Matches INFO: onnx_output_name: wa/bottleneck/bottleneckrelu1/Relu_output_0 tf_output_name: tf.nn.relu_8/Relu:0 shape: (1, 512, 16, 16) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/bottleneck/bottleneckconv2/Conv_output_0 tf_output_name: tf.math.add_11/Add:0 shape: (1, 512, 16, 16) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/bottleneck/bottleneckrelu2/Relu_output_0 tf_output_name: tf.nn.relu_9/Relu:0 shape: (1, 512, 16, 16) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/upconv4/ConvTranspose_output_0 tf_output_name: tf.math.add_12/Add:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/Concat_output_0 tf_output_name: tf.concat/concat:0 shape: (1, 512, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder4/dec4conv1/Conv_output_0 tf_output_name: tf.math.add_13/Add:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder4/dec4relu1/Relu_output_0 tf_output_name: tf.nn.relu_10/Relu:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder4/dec4conv2/Conv_output_0 tf_output_name: tf.math.add_14/Add:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder4/dec4relu2/Relu_output_0 tf_output_name: tf.nn.relu_11/Relu:0 shape: (1, 256, 32, 32) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/upconv3/ConvTranspose_output_0 tf_output_name: tf.math.add_15/Add:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/Concat_1_output_0 tf_output_name: tf.concat_1/concat:0 shape: (1, 256, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder3/dec3conv1/Conv_output_0 tf_output_name: tf.math.add_16/Add:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder3/dec3relu1/Relu_output_0 tf_output_name: tf.nn.relu_12/Relu:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder3/dec3conv2/Conv_output_0 tf_output_name: tf.math.add_17/Add:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder3/dec3relu2/Relu_output_0 tf_output_name: tf.nn.relu_13/Relu:0 shape: (1, 128, 64, 64) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/upconv2/ConvTranspose_output_0 tf_output_name: tf.math.add_18/Add:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/Concat_2_output_0 tf_output_name: tf.concat_2/concat:0 shape: (1, 128, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder2/dec2conv1/Conv_output_0 tf_output_name: tf.math.add_21/Add:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder2/dec2relu1/Relu_output_0 tf_output_name: tf.nn.relu_14/Relu:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder2/dec2conv2/Conv_output_0 tf_output_name: tf.math.add_22/Add:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder2/dec2relu2/Relu_output_0 tf_output_name: tf.nn.relu_15/Relu:0 shape: (1, 64, 128, 128) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/upconv1/ConvTranspose_output_0 tf_output_name: tf.math.add_23/Add:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: 
onnx_output_name: wa/Concat_3_output_0 tf_output_name: tf.concat_3/concat:0 shape: (1, 64, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder1/dec1conv1/Conv_output_0 tf_output_name: tf.math.add_24/Add:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder1/dec1relu1/Relu_output_0 tf_output_name: tf.nn.relu_16/Relu:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Matches INFO: onnx_output_name: wa/decoder1/dec1conv2/Conv_output_0 tf_output_name: tf.math.add_25/Add:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Unmatched max_abs_error: 0.0001220703125 INFO: onnx_output_name: wa/decoder1/dec1relu2/Relu_output_0 tf_output_name: tf.nn.relu_17/Relu:0 shape: (1, 32, 256, 256) dtype: float32 validate_result: Unmatched max_abs_error: 0.0001220703125 INFO: onnx_output_name: wa/conv/Conv_output_0 tf_output_name: tf.math.add_26/Add:0 shape: (1, 3, 256, 256) dtype: float32 validate_result: Unmatched max_abs_error: 0.000396728515625 INFO: onnx_output_name: output tf_output_name: tf.math.sigmoid/Sigmoid:0 shape: (1, 3, 256, 256) dtype: float32 validate_result: Matches ```
PINTO0309 commented 1 week ago

It is safe to ignore Unmatched errors smaller than 1e-4. Since the error at the final output (output) is zero (Matches), the model conversion is completely successful.
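
For reference, the Matches/Unmatched labels in the log come from the elementwise check shown in its validation_conditions line, which is essentially:

```python
# Elementwise validation used by -cotof (per the "validation_conditions"
# line in the log): absolute tolerance 1e-4, no relative tolerance.
import numpy as np

def outputs_match(onnx_output: np.ndarray, tf_output: np.ndarray) -> bool:
    return np.allclose(onnx_output, tf_output, rtol=0.0, atol=1e-4, equal_nan=True)
```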

onnx2tf -i Unet.onnx -cotof -oiqt

The reason the inference accuracy seems to deteriorate significantly when you use the generated model is that there is a bug in the inference logic you wrote. A large number of similar issues have already been reported, so please search the issue tracker carefully.
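
A pitfall seen in many of those similar issues is driving the fully integer-quantized model as if it were a float model. A minimal sketch of quantization-aware inference, assuming the file name follows onnx2tf's usual pattern and using a random stand-in for correctly preprocessed input:

```python
# Minimal sketch of quantization-aware inference with the fully integer-
# quantized model. Assumptions: the file name follows onnx2tf's usual pattern
# and the input is already NHWC (1, 256, 256, 3); the random input is a
# stand-in for real, correctly preprocessed data.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="UNetTFL/Unet_full_integer_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 256, 256, 3).astype(np.float32)

# Quantize the float input with the model's scale/zero-point (input is int8).
in_scale, in_zp = inp["quantization"]
x_q = np.clip(np.round(x / in_scale + in_zp), -128, 127).astype(np.int8)

interpreter.set_tensor(inp["index"], x_q)
interpreter.invoke()

# Dequantize the int8 output back to float before comparing with ONNX results.
out_scale, out_zp = out["quantization"]
y = (interpreter.get_tensor(out["index"]).astype(np.float32) - out_zp) * out_scale
print(y.shape, float(y.min()), float(y.max()))
```

Note also that, per the saved_model signature in the log, the converted model expects NHWC input (1, 256, 256, 3), so data prepared for the original NCHW ONNX model must be transposed first.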