PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

[YOLOv7] None in graph_node_input.shape #694

Closed: MichaelMonashev closed this issue 1 month ago

MichaelMonashev commented 2 months ago

Issue Type

Others

OS

Linux

onnx2tf version number

1.25.12

onnx version number

1.16.2

onnxruntime version number

1.19.2

onnxsim (onnx_simplifier) version number

0.4.36

tensorflow version number

2.17.0

Download URL for ONNX

https://drive.google.com/file/d/1PlXNBPGgNLyy-MKJJRe-yzakSZtJCtT6/view?usp=sharing

Parameter Replacement JSON

{
  "format_version": 1,
  "operations": []
}

Description

$ onnx2tf -i yolov7x.onnx

...

ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/xxx/.local/lib/python3.12/site-packages/onnx2tf/utils/common_functions.py", line 312, in print_wrapper_func
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxx/.local/lib/python3.12/site-packages/onnx2tf/utils/common_functions.py", line 385, in inverted_operation_enable_disable_wrapper_func
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/xxx/.local/lib/python3.12/site-packages/onnx2tf/utils/common_functions.py", line 55, in get_replacement_parameter_wrapper_func
    func(*args, **kwargs)
  File "/home/xxx/.local/lib/python3.12/site-packages/onnx2tf/ops/ArgMax.py", line 69, in make_node
    tensor_rank=len(graph_node_input.shape),
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: object of type 'NoneType' has no len()

ERROR: input_onnx_file_path: yolov7x.onnx
ERROR: onnx_op_name: wa/end2end/ArgMax
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
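The TypeError above comes from calling `len()` on a shape that could not be inferred: with a dynamic batch dimension, `graph_node_input.shape` is `None`, and `len(None)` fails exactly as the traceback shows. A minimal sketch of the failure mode and a defensive guard (the function name and fallback parameter are illustrative, not onnx2tf's actual fix):

```python
# When a tensor's shape cannot be inferred (e.g. a dynamic batch dimension),
# the converter sees shape = None, and len(None) raises TypeError, as in the
# traceback above. A guard can fall back to a known rank instead.

def tensor_rank(shape, fallback_rank=None):
    """Return len(shape), falling back when the shape is unknown (None)."""
    if shape is None:
        if fallback_rank is None:
            raise ValueError("tensor shape is unknown; supply a fallback rank "
                             "(or rewrite the batch dimension to be static)")
        return fallback_rank
    return len(shape)

static_shape = [1, 3, 640, 640]   # fixed-batch export: shape is known
dynamic_shape = None              # --dynamic-batch export: inference failed

print(tensor_rank(static_shape))                     # rank from the shape itself
print(tensor_rank(dynamic_shape, fallback_rank=4))   # rank from the fallback
```

This also illustrates why the error message suggests `-b` or `-ois`: rewriting the input to a static shape makes the rank recoverable again.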
MichaelMonashev commented 2 months ago

Command to generate yolov7x.onnx: python3 export.py --weights yolov7x.pt --grid --end2end --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --max-wh 640 --dynamic-batch

The error occurs only with the --dynamic-batch argument. With a fixed batch size, onnx2tf works without errors.

export.py is from https://github.com/WongKinYiu/yolov7/ and yolov7x.pt is from https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7x.pt

PINTO0309 commented 2 months ago

The model structure is broken even before conversion with onnx2tf. This is clearly a bug in yolov7's --dynamic-batch implementation. First of all, when using your ONNX file, multiple structural errors occur during onnxruntime inference.

[screenshots: structural errors reported during onnxruntime inference]

This is the correct YOLOv7 ONNX structure. https://github.com/PINTO0309/PINTO_model_zoo/tree/main/307_YOLOv7

[screenshot: correct YOLOv7 ONNX graph structure]

There is another big problem with this post-processing: NonMaxSuppression mutable batches cannot be converted at all because there is no corresponding operation in TFLite/TensorFlow.

[screenshot: variable-batch NonMaxSuppression in the post-processing]

PINTO0309 commented 2 months ago

ArgMax, Flatten Fix: https://github.com/PINTO0309/onnx2tf/releases/tag/1.25.13

PINTO0309 commented 2 months ago

ArgMin Fix: https://github.com/PINTO0309/onnx2tf/releases/tag/1.25.14

MichaelMonashev commented 2 months ago

@PINTO0309 , can you suggest PyTorch models 1-4 years old that can be trained with my custom dataset, my custom augmentation, and my custom training loop, and then converted to TFLite and CoreML formats with dynamic batch size, with NMS and my custom preprocessing (resize() + stacking some images into one batch)?

PINTO0309 commented 2 months ago

> after converted to tflite and coreml formats with dynamic batch size, with NMS and my custom prepocessing (resize() + stack some images to one batch) ?

As mentioned earlier, only variable-batch NMS cannot be converted. So, if you are willing to implement NMS on the application-logic side rather than inside the model, as far as I know most past models can be converted. However, I do not recommend TorchVision's R-CNN because its exported structure is broken.
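Moving NMS to the logic side can be sketched as plain greedy IoU suppression run per image after inference, so the exported TFLite/CoreML graph never needs a variable-batch NonMaxSuppression op. A minimal NumPy sketch (the [x1, y1, x2, y2] box format and 0.65 threshold mirror the export command above, but this is an illustration, not yolov7's exact post-processing):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.65):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]   # indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the top-scoring box against all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                  (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]   # drop heavy overlaps
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # the heavily overlapping second box is suppressed
```

Because this runs outside the model, it works for any batch size: loop over the batch and call it once per image.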

Since you haven't shared any information about the hardware/devices you ultimately plan to deploy on, it's really hard to know what to suggest.

MichaelMonashev commented 2 months ago

I plan to use the object detection model on Android and iPhone devices that are no more than 4 years old.

PINTO0309 commented 2 months ago

Almost all models are available here; try them until you find what you want.

https://github.com/open-mmlab/mmdetection

github-actions[bot] commented 1 month ago

If there is no activity within the next two days, this issue will be closed automatically.