Closed. WildTaras closed this issue 1 month ago.
onnx2tf -i yolov8n_dynamic_true_batch_size_2.onnx -ois images:2,3,640,640
I tried it. This command doesn't produce a full-integer model. What should I add to get it?
I ran this command: onnx2tf -i yolov8n_dynamic_true_batch_size_2.onnx -ois images:2,3,640,640 -oiqt -ioqd uint8 and got the same problem as I described in the beginning:

Traceback (most recent call last):
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\onnx2tf\onnx2tf.py", line 1525, in convert
    tflite_model = converter.convert()
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1139, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1093, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1465, in convert
    return self._convert_from_saved_model(graph_def)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1332, in _convert_from_saved_model
    return self._optimize_tflite_model(
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 1037, in _optimize_tflite_model
    model = self._quantize(
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\lite.py", line 735, in _quantize
    calibrated = calibrate_quantize.calibrate(
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
    raise error from None  # Re-throws the exception.
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
    return func(*args, **kwargs)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 254, in calibrate
    self._feed_tensors(dataset_gen, resize_input=True)
  File "C:\Users\khale\.conda\envs\ml_env_test\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 143, in _feed_tensors
    self._calibrator.Prepare([list(s.shape) for s in input_array])
RuntimeError: tensorflow/lite/kernels/reshape.cc:92 num_input_elements != num_output_elements (57600 != 115200)
Node number 190 (RESHAPE) failed to prepare.
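Note for context: 115200 is exactly 2 × 57600, which suggests a Reshape node whose constant target shape was baked in for a batch size different from the tensors being fed during calibration. A minimal sketch (assuming only the `onnx` Python package and the file name used in the command above) to list every Reshape with a hard-coded target shape and spot the stale batch dimension:

```python
# Sketch: print each Reshape node whose target shape is a constant initializer,
# so a shape still carrying the wrong batch size can be identified.
import onnx
from onnx import numpy_helper

model = onnx.load("yolov8n_dynamic_true_batch_size_2.onnx")
inits = {init.name: numpy_helper.to_array(init) for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Reshape" and len(node.input) > 1 and node.input[1] in inits:
        print(node.name, inits[node.input[1]].tolist())
```

If one of the printed shapes still assumes batch 1, that is most likely the node the calibrator trips over at RESHAPE node 190.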
Seriously, read the README. It's a pain.
Issue Type
Others
OS
Linux
onnx2tf version number
1.17.5
onnx version number
1.15.0
onnxruntime version number
1.16.3
onnxsim (onnx_simplifier) version number
1.16.3
tensorflow version number
2.15.0
Download URL for ONNX
yolov8n.zip
Parameter Replacement JSON
Description
ERROR: input_onnx_file_path: yolov8n.onnx
ERROR: onnx_op_name: wa/model.9/m/MaxPool
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
My aim is to convert the yolov8n .pt model to .tflite with batch_size=2 so that I can run inference on 2 images simultaneously. I showed the errors above only to demonstrate that I haven't found a working approach. Could you please show me the proper way to convert with batch size 2? I read "9. INT8 quantization of models with multiple inputs requiring non-image data" in the README, but the -osd and -cotof flags didn't help me.
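For reference, a minimal end-to-end sketch of the route the thread points at: export a static batch-2 ONNX straight from the .pt weights, then let onnx2tf produce the integer-quantized TFLite. The `batch=2` export argument and the exact flag combination are assumptions pieced together from the commands above, not a verified recipe:

```python
# Hypothetical sketch: yolov8n.pt -> ONNX (static batch 2) -> integer-quantized TFLite.
import subprocess
from ultralytics import YOLO

# 1. Export ONNX with a fixed batch dimension of 2 (assumed ultralytics export argument).
YOLO("yolov8n.pt").export(format="onnx", imgsz=640, batch=2)

# 2. Convert with onnx2tf, forcing the static batch and requesting integer-quantized output.
subprocess.run(
    [
        "onnx2tf",
        "-i", "yolov8n.onnx",
        "-b", "2",          # or: -ois images:2,3,640,640
        "-oiqt",            # output integer-quantized .tflite variants
        "-ioqd", "uint8",   # uint8 input/output quantization dtype
    ],
    check=True,
)
```

If the exported ONNX already has batch 2 baked into every Reshape, the calibration step should no longer hit the 57600 vs 115200 mismatch; otherwise the parameter replacement JSON described in the README would be the fallback.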