PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

[EfficientNetV2_m] Output size is constant not variable #688

Closed: Eddudos closed this issue 2 months ago

Eddudos commented 2 months ago

Issue Type

Others

OS

Linux

onnx2tf version number

1.25.9

onnx version number

1.16.1

onnxruntime version number

1.18.1

onnxsim (onnx_simplifier) version number

0.4.33

tensorflow version number

2.17.0

Download URL for ONNX

https://drive.google.com/file/d/1p1NK1Y5AZi2jpKJ8SumLZ-3WSSEGJb5u/view?usp=sharing

Parameter Replacement JSON

NA

Description

  1. Personal development
  2. I'm probably doing something wrong; could you help me convert my model properly? My ONNX model's outputs: (screenshot attached)

I've converted ONNX to TF as:

!onnx2tf -i weights/onnx/efficientnet_v2_m.onnx \
  -o weights/tf/efficientnet_v2_m \
  --non_verbose

My saved_model.pb:

saved_model_cli show --dir notebooks/weights/tf/tflite/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 480, 480, 3)
        name: serving_default_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output_0'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 19)
        name: PartitionedCall:0
  Method name is: tensorflow/serving/predict
The MetaGraph with tag set ['serve'] contains the following ops: {'Relu', 'Sigmoid', 'Identity', 'StringJoin', 'AddV2', 'DepthwiseConv2dNative', 'Conv2D', 'Mul', 'NoOp', 'StatefulPartitionedCall', 'Pack', 'RestoreV2', 'Pad', 'Const', 'PartitionedCall', 'Mean', 'SaveV2', 'MergeV2Checkpoints', 'StaticRegexFullMatch', 'Transpose', 'ShardedFilename', 'Reshape', 'Placeholder', 'Select', 'MatMul'}

I see the output batch size is a constant 1 instead of -1. I've also tried to run inference with the TF model on a batch of images with shape (3, 480, 480, 3) and got:

InvalidArgumentError: {{function_node __inference_signature_wrapper_<lambda>_16796}} Matrix size-incompatible: In[0]: [1,3840], In[1]: [1280,256]
     [[{{node PartitionedCall/model/tf.linalg.matmul/MatMul}}]] [Op:__inference_signature_wrapper_<lambda>_16796]

But it works correctly with a single image (1, 480, 480, 3).

  1. I've read the README but haven't found my model among the validated models, and I couldn't figure out whether I need to write my own replacement JSON; I'm not sure if that's related to my issue. I also tried several other ways to convert my ONNX model from ISSUE-441, via a local env, Docker, and Colab, but the resulting model seems to be the same, so no luck. For what it's worth, my TFLite model works fine with batches, so I'm not sure what the problem is here. I suspect it's the GEMM's static shape, but I'm not sure. Help needed.
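The "Matrix size-incompatible" error above is consistent with that suspicion: 3840 is exactly 3 × 1280, so a Reshape with a batch dimension frozen to 1 would flatten a batch of 3 into the feature axis before the final GEMM. A minimal NumPy sketch of that hypothesis (the 1280-wide feature vector and the (1280, 256) weight shape are taken from the error message; everything else is illustrative):

```python
import numpy as np

# Hypothetical post-pooling features for a batch of 3 images
feats = np.random.rand(3, 1280)
w = np.random.rand(1280, 256)  # weight shape from the error message

# A Reshape with batch hardcoded to 1 flattens (3, 1280) into (1, 3840) ...
flat = feats.reshape(1, -1)
assert flat.shape == (1, 3840)

# ... which no longer matches the (1280, 256) weight matrix:
try:
    flat @ w
except ValueError:
    pass  # matrix size-incompatible, mirroring the TF InvalidArgumentError

# A batch-preserving reshape keeps the GEMM valid for any batch size:
ok = feats.reshape(-1, 1280) @ w
assert ok.shape == (3, 256)
```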
PINTO0309 commented 2 months ago
MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 480, 480, 3)
        name: serving_default_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output_0'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 19)
        name: PartitionedCall:0
  Method name is: tensorflow/serving/predict
PINTO0309 commented 2 months ago

Fix: https://github.com/PINTO0309/onnx2tf/releases/tag/1.25.10

Eddudos commented 2 months ago

Thank you, it simply works now!