hadhoryth closed this issue 6 months ago
My model has 6 input channels, and I see you already have a similar issue here: https://github.com/PINTO0309/onnx2tf/issues/411 So I think it is related.
Share the link to the onnx. It is very tedious to generate it myself.
Sure, here is the link to gdrive
Fixed a problem in the conversion process of GatherElements
and upgraded to the new version.
Update: https://github.com/PINTO0309/onnx2tf/releases/tag/1.21.2
pip install onnx2tf -U
For now, please use the following tutorial for quantization for tensors other than 3-channel (RGB). This is because it is impossible for onnx2tf to determine whether the input data type is image, voice, or other sensor data from the ONNX input shape alone. The calibration data for INT8 quantization must be some data used for training, but the tool does not know what that data is.
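Following the linked README section, the calibration data for a non-RGB input has to be supplied by the user as a NumPy file. A minimal sketch of preparing such a file (the input shape `(6, 224, 224)`, the sample count, and the use of random data are placeholders; in practice you would stack real training samples):

```python
import numpy as np

# Placeholder: stand-in for real preprocessed training samples.
# Replace the shape with the actual shape of your 6-channel ONNX input.
num_samples = 20
calib = np.stack(
    [np.random.rand(6, 224, 224).astype(np.float32) for _ in range(num_samples)]
)
print(calib.shape)  # (20, 6, 224, 224)

# onnx2tf reads calibration samples from a .npy file.
np.save("calibration_data.npy", calib)
```

The resulting `.npy` file is then passed to onnx2tf during INT8 quantization via the `-cind` (`--custom_input_op_name_np_data_path`) option described in that README section; the input name and the mean/std arguments must match your own model.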
Even if ConvNext-Det's input were 6 channels of RGBRGB, onnx2tf would have no way to know that.
@PINTO0309 Thank you for the incredibly fast update to the codebase. The conversion went through, but I still cannot see the .pb files. Does it work for you?
Name: onnx2tf
Version: 1.21.2
I want to start with this: https://github.com/PINTO0309/onnx2tf?tab=readme-ov-file#9-int8-quantization-of-models-with-multiple-inputs-requiring-non-image-data
So I run onnx2tf to generate the .pb file:
onnx2tf -i convnext_det.onnx -cotof -n -osd -rtpo Gelu
but my saved_model directory does not contain it, only convnext_det_float16.tflite and convnext_det_float32.tflite.
You missed the warning message at the beginning of the model's conversion log.
WARNING: This model contains GroupConvolution and is automatically optimized for TFLite, but is not output because saved_model does not support GroupConvolution. If saved_model is needed, specify --disable_group_convolution to retransform the model.
onnx2tf -i convnext-det.onnx -cotof -osd -dgc
ls -l saved_model
drwxr-xr-x 2 xxxxx xxxxx 1048576 May 15 18:10 assets
-rwxr-xr-x 1 xxxxx xxxxx 8094560 May 15 18:10 convnext-det_float16.tflite
-rwxr-xr-x 1 xxxxx xxxxx 16072900 May 15 18:10 convnext-det_float32.tflite
-rwxr-xr-x 1 xxxxx xxxxx 54 May 15 18:10 fingerprint.pb
-rwxr-xr-x 1 xxxxx xxxxx 16861549 May 15 18:10 saved_model.pb
drwxr-xr-x 2 xxxxx xxxxx 1048576 May 15 18:10 variables
You are right! It worked for me! Thank you!
Fix: onnx2tf>=1.24.0
# Keras 3 API; convertible without `-dgc`.
pip install -U tf-keras~=2.16
pip install -U tensorflow>=2.17.0
pip install -U onnx2tf
onnx2tf -i convnext-det.onnx -cotof -osd
Issue Type
Documentation Feature Request
OS
Linux
onnx2tf version number
1.21.1
onnx version number
1.15.0
onnxruntime version number
1.17.1
onnxsim (onnx_simplifier) version number
0.4.33
tensorflow version number
2.16.1
Download URL for ONNX
convnext-det.onnx.zip
Parameter Replacement JSON
Description
I have a ConvNeXt model and want to generate a quantized version of it.
The command I'm using:
onnx2tf -i convnext-det.onnx -cotof -rtpo Gelu -n -coion -osd
The conversion to tflite float32 and float16 worked, but the .pb files are not there, so the quantized model is not generated. Maybe you can suggest what I'm doing wrong.