meituan / YOLOv6

YOLOv6: a single-stage object detection framework dedicated to industrial applications.
GNU General Public License v3.0

Convert trained model (YOLOv6s v2.1) into Tensorflow using ONNX-TF 1.12 fails during inference #594

Closed marsousi closed 2 years ago

marsousi commented 2 years ago


Question

First of all, I would like to thank you for updating YOLOv6. It is now very stable and accurate (far better than YOLOv7 and YOLOv5). We trained a new model based on the small-model config on a custom dataset with a single object class, and used deploy/ONNX/OpenCV/export_onnx.py to generate the ONNX model. Then we used ONNX-TF 1.10 to convert it into a TensorFlow saved model, and froze that into a .pb file. This process worked for the previous YOLOv6 versions (v1.0 and v2.0). However, it looks like the new version uses new ops that are not supported. For example, to make ONNX-TF work, I had to add Squeeze and Unsqueeze support for opset version 13, as instructed in the following link:

https://github.com/onnx/onnx-tensorflow/pull/1022/files

I understand you provided the framework for PyTorch, ONNX, OpenVINO, and TensorRT, but your help here would expand its usability in other platforms.
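For reference, the ONNX-to-TensorFlow step described above can be sketched with the onnx-tensorflow backend API. This is only a sketch: the paths `yolov6s.onnx` and `saved_model` are placeholders, and it assumes `onnx`, `onnx-tf` (1.10/1.12), and TensorFlow 2.x are installed.

```python
def convert_onnx_to_saved_model(onnx_path: str, export_dir: str) -> None:
    """Load an ONNX model and export it as a TensorFlow SavedModel."""
    # Heavy imports kept local so the module can be inspected
    # without onnx/onnx-tf installed.
    import onnx
    from onnx_tf.backend import prepare

    model = onnx.load(onnx_path)       # parse the .onnx protobuf
    onnx.checker.check_model(model)    # sanity-check the graph
    tf_rep = prepare(model)            # build the TF representation
    tf_rep.export_graph(export_dir)    # write the SavedModel to disk

# Usage (placeholder paths):
# convert_onnx_to_saved_model("yolov6s.onnx", "saved_model")
```

With an opset-13 model, `prepare()` is where unsupported `Squeeze`/`Unsqueeze` versions surface, which is why the patch in the linked pull request is needed.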


mtjhl commented 2 years ago

Thank you for your advice. After adding Squeeze and Unsqueeze support as in your pull request to onnx-tensorflow, does inference work normally?

marsousi commented 2 years ago

It allows converting the .onnx file into the TF saved-model format, and the saved model runs. But freezing it into a .pb file fails. I repeated everything, but changed the opset to version 12 in export_onnx.py under deploy/ONNX/OpenCV as follows:

```python
torch.onnx.export(
    model, img, f,
    verbose=False,
    opset_version=12,  # changed to 12
    training=torch.onnx.TrainingMode.EVAL,
    do_constant_folding=True,
    input_names=['images'],
    output_names=['num_dets', 'det_boxes', 'det_scores', 'det_classes']
    if args.end2end else ['outputs'],
    dynamic_axes=dynamic_axes,
)
```
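As a side note, the output-name choice in that export call can be isolated into a small pure-Python helper; `end2end` here mirrors the script's `args.end2end` flag, and nothing beyond the names in the call above is assumed.

```python
def onnx_output_names(end2end: bool) -> list:
    """Mirror the exporter's output_names expression."""
    if end2end:
        # End-to-end export: NMS is baked into the graph,
        # so the graph exposes the four post-NMS tensors.
        return ['num_dets', 'det_boxes', 'det_scores', 'det_classes']
    # Plain export: a single raw prediction tensor.
    return ['outputs']
```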

Other info: TF 2.10, ONNX 1.12, ONNX-TF 1.10

Now, everything works. I think TF (via ONNX-TF) does not fully support opset_version 13 yet.
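For completeness, the freezing step that failed with opset 13 can be sketched with TensorFlow's constant-folding helper. This is a sketch under assumptions: it presumes the SavedModel exposes a `serving_default` signature, TensorFlow 2.x is installed, and `saved_model` / `frozen_graph.pb` are placeholder paths.

```python
def freeze_saved_model(saved_model_dir: str, pb_name: str) -> None:
    """Freeze a SavedModel's serving signature into a single .pb GraphDef."""
    # Heavy imports kept local so the module loads without TF installed.
    import tensorflow as tf
    from tensorflow.python.framework.convert_to_constants import (
        convert_variables_to_constants_v2,
    )

    loaded = tf.saved_model.load(saved_model_dir)
    infer = loaded.signatures["serving_default"]    # assumed signature key
    frozen = convert_variables_to_constants_v2(infer)  # fold vars to consts
    tf.io.write_graph(
        frozen.graph.as_graph_def(), ".", pb_name, as_text=False
    )

# Usage (placeholder paths):
# freeze_saved_model("saved_model", "frozen_graph.pb")
```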