PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License

[face_recognition_sface] [Query][tflite]The tool or cmdline to support NCHW between NHWC #10

Closed joyoki closed 1 year ago

joyoki commented 1 year ago

Issue Type

Others

onnx2tf version number

1.0.48

Download URL for ONNX

Model: 1.face_recognition_sface_2021dec.onnx 2.https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx

Parameter Replacement JSON

NA

Description

Hi @PINTO0309, thanks for your work. The tool works when converting ONNX to tflite without quantization, but for a quantized ONNX model (or tflite model) it fails with the error below: ERROR: QLinearMul OP is not yet implemented.

So I would like to check with you: does the tool support NCHW to NHWC conversion for quantized tflite models?

Thanks
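For context, the NCHW to NHWC conversion being asked about is just an axis permutation of the tensor layout. A minimal NumPy sketch (the 112x112x3 shape here is illustrative, not taken from the SFace model):

```python
import numpy as np

# ONNX models typically use NCHW layout: (batch, channels, height, width).
x_nchw = np.zeros((1, 3, 112, 112), dtype=np.float32)

# TFLite expects NHWC: (batch, height, width, channels).
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

print(x_nhwc.shape)  # (1, 112, 112, 3)
```

The hard part onnx2tf solves is doing this permutation consistently through every OP in the graph, not just on the input tensor.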

PINTO0309 commented 1 year ago

It is still under development. I am in the process of implementing DequantizeLinear as hard as I can.

I have something I would like you to tell me to motivate me. Why bother converting INT8 quantized models? If I may, I would like to know your motivation.

joyoki commented 1 year ago

Hi @PINTO0309, thanks for the quick confirmation. Actually, I would like to compare the results of face_recognition_sface_2021dec.onnx and face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx. Since quantization support was not working as of v1.0.48, the quantized model face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx could not be converted to .tflite.

That is why I would like to know whether there is a command line to convert NCHW to NHWC for tflite, since I hit this problem when converting the quantized ONNX to tflite.

Thanks

PINTO0309 commented 1 year ago

Thanks. I am working on an implementation of the Qxxx OPs around quantization. I just started looking into it about an hour ago, so it will still take a few more days.

If you only need a temporary comparison of accuracy rather than speed, you can use this tool for now to convert and compare. https://github.com/onnx/onnx-tensorflow
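For reference, the onnx-tensorflow workflow is roughly the following; treat the exact package name and flags as an assumption based on the onnx-tf README at the time, not something confirmed in this thread:

```shell
# Assumption: the converter is published as the onnx-tf package.
pip install onnx-tf
# Convert the float ONNX model to a TensorFlow SavedModel for accuracy comparison.
onnx-tf convert -i face_recognition_sface_2021dec.onnx -o sface_tf
```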

joyoki commented 1 year ago

Good Job, waiting for your good news!

PINTO0309 commented 1 year ago

Ref:
https://github.com/PINTO0309/onnx2tf/commit/247ed0120064b586c27e44b06284dfe067ac21dc
https://github.com/PINTO0309/onnx2tf/commit/69cd24b9a34811d9aa44e6809acd58e2301d3390
https://github.com/PINTO0309/onnx2tf/commit/efc49e1e60e4fd11b5f78bfa72f866eac24517b5
https://github.com/PINTO0309/onnx2tf/commit/8954da065c3674e137a5d5961f448a4fc86523f6
https://github.com/PINTO0309/onnx2tf/compare/1.1.6...1.1.7

Note: A large number of non-standard OP opsets for ONNX have been incorporated.

com.microsoft v1
com.microsoft.nchwc v1
ai.onnx.training v1
ai.onnx.preview.training v1
com.microsoft.experimental v1


joyoki commented 1 year ago

Thanks for your good work first. I saw you already released v1.1.6. I updated to the latest version and tried v1.1.6, but it seems I still get the error "ERROR: QLinearMul OP is not yet implemented."

Keep moving. Good news is worth waiting for.

PINTO0309 commented 1 year ago

Only a very limited number of OPs can be reverse-quantified. No check is made to ensure that the output is correct or that no accuracy degradation has occurred.
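For readers unfamiliar with the OPs being discussed: ONNX linear dequantization maps an integer value q back to a real value as scale * (q - zero_point), and QLinearMul is essentially dequantize, multiply, requantize. A minimal pure-Python sketch of those semantics (the scale and zero-point values below are illustrative, not taken from the SFace model):

```python
def dequantize_linear(q, scale, zero_point):
    # ONNX DequantizeLinear: real = scale * (q - zero_point)
    return scale * (q - zero_point)

def quantize_linear(x, scale, zero_point):
    # ONNX QuantizeLinear: q = round(x / scale) + zero_point, clamped to int8 range
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def qlinear_mul(a, a_scale, a_zp, b, b_scale, b_zp, y_scale, y_zp):
    # QLinearMul: multiply in real space, then requantize to the output scale.
    y = dequantize_linear(a, a_scale, a_zp) * dequantize_linear(b, b_scale, b_zp)
    return quantize_linear(y, y_scale, y_zp)

print(dequantize_linear(10, 0.5, 2))                 # 4.0
print(qlinear_mul(10, 0.5, 2, 6, 0.25, 0, 0.5, 0))   # 4.0 * 1.5 = 6.0 -> q = 12
```

This is why "reverse-quantifying" (dequantizing) a model is possible but accuracy checks still matter: each requantize step rounds, and those errors accumulate.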

joyoki commented 1 year ago

I have checked and verified the tflite file model_float32.tflite.zip; it works in the APK. Could you share the conversion command line for face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx?

Thanks a lot

PINTO0309 commented 1 year ago

While adding more functionality than you asked for, another bug seems to have been introduced, which I have now fixed. Simply upgrade to the latest version and convert using the following command. https://github.com/PINTO0309/onnx2tf/releases/tag/1.1.13

pip install onnx2tf -U
onnx2tf -i face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx
joyoki commented 1 year ago

onnx2tf -i face_recognition_sface_2021dec-act_int8-wt_int8-quantized.onnx

Yes, it works now after upgrading to v1.1.13. I did come across the bug you mentioned, but I was not sure about it, so I didn't post it:

ERROR: The trace log is below.
Traceback (most recent call last):
  File "/home/rsi/anaconda3/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 261, in print_wrapper_func
    result = func(*args, **kwargs)
  File "/home/rsi/anaconda3/lib/python3.9/site-packages/onnx2tf/utils/common_functions.py", line 336, in inverted_operation_enable_disable_wrapper_func
    onnx_node_output_shape = [None if s is not None and not isinstance(s, str) and s < 1 else s for s in onnx_node_output_shape]
TypeError: 'NoneType' object is not iterable
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement

Thanks again

PINTO0309 commented 1 year ago

Yes, I am aware of that. That is exactly the new bug I introduced while adding a conversion feature different from the one you requested. I have fixed that bug.

The speed at which I add features is too fast, so regression testing tends to be inadequate.
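The traceback above shows `onnx_node_output_shape` being None when the list comprehension tries to iterate over it. A guard along the following lines avoids the TypeError (the function name here is hypothetical, not the actual fix committed to onnx2tf):

```python
def sanitize_output_shape(onnx_node_output_shape):
    # The original comprehension crashed when the shape was None
    # ('NoneType' object is not iterable), so bail out early.
    if onnx_node_output_shape is None:
        return None
    # Replace invalid (< 1) numeric dims with None; keep symbolic (str) dims.
    return [
        None if s is not None and not isinstance(s, str) and s < 1 else s
        for s in onnx_node_output_shape
    ]

print(sanitize_output_shape(None))             # None
print(sanitize_output_shape([1, -1, "N", 3]))  # [1, None, 'N', 3]
```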

github-actions[bot] commented 1 year ago

If there is no activity within the next two days, this issue will be closed automatically.