PINTO0309 / onnx2tf

Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The purpose of this tool is to solve the massive Transpose extrapolation problem in onnx-tensorflow (onnx-tf). I don't need a Star, but give me a pull request.
MIT License
706 stars · 73 forks

add output_saved_model flag #717

Open kwikwag opened 1 month ago

kwikwag commented 1 month ago

1. Content and background

Added an `output_saved_model` flag that lets users optionally skip SavedModel export and save only the TFLite models. The motivation: on certain models with grouped convolution (namely YOLOv6-Lite), SavedModel export fails in a way that prevents conversion from proceeding to the TFLite models. The flag is a simple way to circumvent this.

2. Summary of corrections

A simple if/else.
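Roughly, the gating looks like this. This is a hypothetical sketch, not the actual onnx2tf internals; the function names and the list of outputs are assumptions for illustration:

```python
# Hypothetical sketch of the if/else this PR adds (names are assumed,
# not the real onnx2tf code): SavedModel export is skipped unless the
# new flag is set, so TFLite conversion can proceed even when
# SavedModel generation would fail.
def convert(model, output_saved_model=False):
    outputs = []
    if output_saved_model:
        # The step that fails on grouped convolution (e.g. YOLOv6-Lite).
        outputs.append('saved_model')
    outputs.append('float32_tflite')
    outputs.append('float16_tflite')
    return outputs

print(convert('yolov6_lite'))                           # TFLite only
print(convert('yolov6_lite', output_saved_model=True))  # SavedModel + TFLite
```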

3. Before/After (If there is an operating log that can be used as a reference)

TBD

kwikwag commented 1 month ago

Done. It was a good call, too: I noticed that all flags default to False, so I adjusted mine to match.

BTW, interestingly, when running via the CLI the SavedModel is saved successfully. I suspect this is because in the CLI `--output_signaturedefs` has a default value of True (which actually kind of defeats the purpose of `action='store_true'`) -- but the default for the `convert()` function is `output_signaturedefs=False`...
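The argparse quirk described above can be demonstrated in isolation. A minimal sketch, assuming the CLI uses `argparse` with this flag shape: with `action='store_true'` the implicit default is False, but an explicit `default=True` overrides it, so the flag ends up True whether or not it is passed.

```python
import argparse

# Sketch of the described CLI behaviour: default=True overrides the
# implicit False of action='store_true', so the flag can never be
# switched off from the command line.
parser = argparse.ArgumentParser()
parser.add_argument('--output_signaturedefs', action='store_true', default=True)

print(parser.parse_args([]).output_signaturedefs)                          # True
print(parser.parse_args(['--output_signaturedefs']).output_signaturedefs)  # True
```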

Also, I'm adding the diff of the export log, with and without the flag:

> WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
> W0000 00:00:1729826158.909244   10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826158.909296   10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc".  Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float32 tflite output complete!
> W0000 00:00:1729826161.558175   10695 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
> W0000 00:00:1729826161.558215   10695 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
> WARNING: If you want tflite input OP name and output OP name to match ONNX input and output names, convert them after installing "flatc".  Also, do not use symbols such as slashes in input/output OP names. To install flatc, run the following command:
> wget https://github.com/PINTO0309/onnx2tf/releases/download/1.16.31/flatc.tar.gz && tar -zxvf flatc.tar.gz && sudo chmod +x flatc && sudo mv flatc /usr/bin/
> Float16 tflite output complete!
---
< saved_model output started ==========================================================
< ERROR: Generation of saved_model failed because the OP name does not match the following pattern. ^[A-Za-z0-9.][A-Za-z0-9_.\\/>-]*$       
< ERROR: /backbone/lite_effiblock_4/lite_effiblock_4.0/conv_dw_1/block/conv/Conv/kernel
< ERROR: Please convert again with the `-osd` or `--output_signaturedefs` option.
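The failure above can be reproduced against the pattern from the error message. A small sketch (the pattern is copied from the log; the fix-up via `lstrip` is just for illustration): the OP name fails because the first character class does not allow a leading `/`.

```python
import re

# The OP-name pattern from the error message: the first character must
# be alphanumeric or '.', so names with a leading '/' are rejected.
OP_NAME_PATTERN = re.compile(r'^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$')

bad = '/backbone/lite_effiblock_4/lite_effiblock_4.0/conv_dw_1/block/conv/Conv/kernel'
print(bool(OP_NAME_PATTERN.match(bad)))              # False: leading '/'
print(bool(OP_NAME_PATTERN.match(bad.lstrip('/'))))  # True once the leading '/' is gone
```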
PINTO0309 commented 4 weeks ago

There doesn't seem to be a problem with the fix itself.

Since grouped convolution can be exported to saved_model as of TensorFlow v2.17.0, I had not added such a flag.

First off, we need to be clear: what version of TensorFlow are you using? The TensorFlow API has undergone significant changes since v2.15.0 to support Keras V3, so I don't want to support older versions.

kwikwag commented 4 weeks ago

I was using v2.16.2. When I tried exporting with v2.17 I had problems using the result in the TFLite Flutter plugin, which is based on v2.11. The main idea behind this flag is not to fix TF version support, but rather to work around it in cases where TFLite output is possible. Since the SavedModel and TFLite outputs are constructed independently, it seems reasonable to have a flag controlling this behaviour.