PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

Problem of yolov8 conversion using dynamic batch_size or static batch_size=2 #713

Closed WildTaras closed 1 month ago

WildTaras commented 1 month ago

Issue Type

Others

OS

Linux

onnx2tf version number

1.17.5

onnx version number

1.15.0

onnxruntime version number

1.16.3

onnxsim (onnx_simplifier) version number

1.16.3

tensorflow version number

2.15.0

Download URL for ONNX

yolov8n.zip

Parameter Replacement JSON

Sorry, I didn't figure out how to create one.

Description

  1. I would like to convert yolov8n from pytorch to tflite.
  2. I attached the ONNX models, which I exported with different options:
    • yolov8n_dynamic_false_batch_size_2.onnx means I exported the .pt model to ONNX WITHOUT dynamic axes and with batch_size set to 2. This way brings me an error:

Traceback (most recent call last):
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\onnx2tf\onnx2tf.py", line 1494, in convert
    tflite_model = converter.convert()
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1139, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1093, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1465, in convert
    return self._convert_from_saved_model(graph_def)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1332, in _convert_from_saved_model
    return self._optimize_tflite_model(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1037, in _optimize_tflite_model
    model = self._quantize(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 735, in _quantize
    calibrated = calibrate_quantize.calibrate(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 254, in calibrate
    self._feed_tensors(dataset_gen, resize_input=True)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 143, in _feed_tensors
    self._calibrator.Prepare([list(s.shape) for s in input_array])
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (57600 != 115200) Node number 198 (RESHAPE) failed to prepare.

ERROR: input_onnx_file_path: yolov8n.onnx
ERROR: onnx_op_name: wa/model.9/m/MaxPool
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
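For what it's worth, the two element counts in the RuntimeError differ by exactly the batch factor: 115200 = 2 × 57600. That pattern usually means a Reshape target shape got frozen for batch 1 while batch-2 tensors are flowing through the graph. A minimal arithmetic sketch of the failing check (only 57600 and 115200 come from the log; the 20×20×144 factorization is my guess at a plausible yolov8 head tensor shape):

```python
# Sketch of the check that fails in tensorflow/lite/kernels/reshape.cc:92.
# Only 57600 and 115200 appear in the error message; (20, 20, 144) is an
# assumed H*W*C factorization, not taken from the log.
incoming_batch = 2
per_image_elements = 20 * 20 * 144                          # 57600, hypothetical

num_input_elements = incoming_batch * per_image_elements    # 115200, batch-2 data
num_output_elements = 1 * per_image_elements                # 57600, shape frozen at batch 1

# TFLite rejects the RESHAPE node because the element counts disagree,
# and they disagree by exactly the batch factor:
assert num_input_elements != num_output_elements
assert num_input_elements == 2 * num_output_elements
print(num_input_elements, num_output_elements)  # 115200 57600
```

This is consistent with the converter's hint above: rewriting the dynamic input to a static shape with -b or -ois should make the Reshape targets agree with the batch dimension again.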

My aim is to convert the yolov8n .pt model into .tflite with batch_size=2 in order to run inference on 2 images simultaneously. I showed the errors I got just to demonstrate that I didn't find a way. Could you please show me the proper way of converting with batch_size=2? I read "9. INT8 quantization of models with multiple inputs requiring non-image data", but the -osd and -cotof flags didn't help me.
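For context on the layout change involved: onnx2tf emits NHWC TensorFlow/TFLite models from NCHW ONNX, so a batch-2 YOLOv8 input of shape (2, 3, 640, 640) should come out of the conversion as (2, 640, 640, 3). A tiny numpy sketch of that transpose (shapes follow the 640×640 export in this thread):

```python
import numpy as np

# ONNX (PyTorch) input layout: NCHW, batch 2, 3 channels, 640x640.
nchw = np.zeros((2, 3, 640, 640), dtype=np.float32)

# TFLite/Keras input layout after onnx2tf conversion: NHWC.
nhwc = nchw.transpose(0, 2, 3, 1)

print(nhwc.shape)  # (2, 640, 640, 3)
```

So inference code feeding the converted model should present its batch of 2 images channels-last, not channels-first as for the .pt model.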

PINTO0309 commented 1 month ago
onnx2tf -i yolov8n_dynamic_true_batch_size_2.onnx -ois images:2,3,640,640

(screenshot of the conversion output attached)

WildTaras commented 1 month ago

I tried it. This command doesn't produce a full-integer model. What should I add to get one?

PINTO0309 commented 1 month ago

https://github.com/PINTO0309/onnx2tf?tab=readme-ov-file#8-calibration-data-creation-for-int8-quantization
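The linked README section builds a representative calibration dataset as a numpy array and hands it to the converter. A hedged sketch of creating such a file (the sample count, file name, and whether the array should carry the batch-2 dimension are my assumptions; the exact array layout and flag syntax are documented in that README section):

```python
import numpy as np

# Hypothetical calibration data for the "images" input: a handful of
# batch-2, NCHW float samples in [0, 1]. Real calibration must use
# representative preprocessed images, not random noise.
num_samples = 4                      # arbitrary; more samples calibrate better
calib = np.random.rand(num_samples, 2, 3, 640, 640).astype(np.float32)

np.save("calibdata.npy", calib)      # file name is arbitrary
print(calib.shape, calib.dtype)      # (4, 2, 3, 640, 640) float32
```

Such a file would then be passed to onnx2tf via the calibration option that README section describes; since random data yields meaningless quantization ranges, swap in real frames before trusting the INT8 model's accuracy.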

WildTaras commented 1 month ago

I ran this command:

onnx2tf -i yolov8n_dynamic_true_batch_size_2.onnx -ois images:2,3,640,640 -oiqt -ioqd uint8

and got the same problem as I described in the beginning:

Traceback (most recent call last):
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\onnx2tf\onnx2tf.py", line 1525, in convert
    tflite_model = converter.convert()
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1139, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1093, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1465, in convert
    return self._convert_from_saved_model(graph_def)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1332, in _convert_from_saved_model
    return self._optimize_tflite_model(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1037, in _optimize_tflite_model
    model = self._quantize(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 735, in _quantize
    calibrated = calibrate_quantize.calibrate(
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 254, in calibrate
    self._feed_tensors(dataset_gen, resize_input=True)
  File "C:\Users\khale.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 143, in _feed_tensors
    self._calibrator.Prepare([list(s.shape) for s in input_array])
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (57600 != 115200) Node number 190 (RESHAPE) failed to prepare.

PINTO0309 commented 1 month ago

Seriously, read the README. It's a pain.