PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow's saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. Pipeline: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also converts between formats: .pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to ONNX. Docker-based build environments are supported, with direct access to the host PC's GUI and camera for verifying operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License
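As a concrete illustration of the conversion pipeline described above, here is a minimal sketch. The model, paths, and the Model Optimizer/openvino2tensorflow flags are assumptions based on typical usage, not taken from this thread:

import torch
import torchvision

# Step 1: export a PyTorch model (NCHW) to ONNX.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)  # NCHW dummy input
torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)

# Steps 2 and 3 run outside Python (flags assumed from the OpenVINO
# and openvino2tensorflow documentation):
#   mo --input_model resnet18.onnx --output_dir openvino/FP32
#   openvino2tensorflow --model_path openvino/FP32/resnet18.xml \
#       --output_saved_model --output_no_quant_float32_tflite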

Refactoring for use as importable package #130

Closed lillekemiker closed 1 year ago

lillekemiker commented 1 year ago

Issue Type

Feature Request

OS

Other

OS architecture

Other

Programming Language

Python

Framework

PyTorch

Download URL for ONNX / OpenVINO IR

N/A

Convert Script

openvino2tensorflow

Description

First of all, thank you for your fantastic work on this. It blows my mind that yours is the only solution out there for converting NCHW to NHWC during conversion to TensorFlow. Or at least, I have not been able to find any other solutions.

Considering that your code is regular Python, a point of frustration for me, though, is that I am not able to simply import it into my own code, run the parts I need, and swap out the parts that I need to work differently.

Would you be interested in refactoring your code and making it an importable package? I would happily volunteer my time to help out with this if you want. An added bonus is that unit tests and the like become much easier to add.
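For illustration, the kind of interface being requested might look like this. This is a purely hypothetical sketch: the convert function and its parameters do not exist in the current codebase and are named here only to make the idea concrete:

import openvino2tensorflow as o2t

# Hypothetical API: function name and parameters are illustrative only.
o2t.convert(
    model_path="openvino/FP32/resnet18.xml",  # OpenVINO IR (.xml)
    output_saved_model=True,
    output_no_quant_float32_tflite=True,
)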

Relevant Log Output

ModuleNotFoundError: No module named 'openvino2tensorflow'

Source code for simple inference testing code

import openvino2tensorflow
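# -> ModuleNotFoundError: No module named 'openvino2tensorflow'
# The code is not importable as a package today; this issue requests
# refactoring it so that the line above works.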

PINTO0309 commented 1 year ago

Thanks for the suggestion.

I would like to refactor the code. For example, as you suggest, I would like to add an interface that can be called from a script, and clean up the redundant and buggy code. However, I maintain a lot of other OSS and cannot allocate enough time to work on replacing this tool right now.

My main focus for development and maintenance right now is this tool, which processes ONNX directly: https://github.com/PINTO0309/simple-onnx-processing-tools

I understand that using onnx-tf significantly degrades the performance of the resulting TensorFlow model due to the large number of useless Transpose OPs it inserts. However, since I see no real advantage other than running TensorFlow Lite on Android phones, I am considering onnxruntime as my mainstay runtime, noting that it offers significantly greater operational flexibility by switching between various backends.

e.g.
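As a minimal sketch of the backend switching mentioned above (assuming onnxruntime is installed with the relevant execution providers; the model path is illustrative):

import onnxruntime as ort

# onnxruntime uses the first available provider from the preference list,
# e.g. CUDA on an NVIDIA dGPU, falling back to CPU otherwise.
session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # the providers actually in use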

Therefore, it is very hard for me to rework this tool by myself; it has become bloated after two years of continually adding functions little by little.

Pull requests that improve the usefulness of the tool are very welcome, although I think they will take a very long time to review. And now I feel it would be much more efficient to create my own onnx-tf, different from the existing onnx-tf maintained by Microsoft.

lillekemiker commented 1 year ago

My use case is actually running TFLite on an Android (and iOS) phone. Is there currently a different workflow you would recommend for this?

PINTO0309 commented 1 year ago

https://github.com/PINTO0309/onnx2tf
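For the TFLite-on-Android/iOS use case asked about above, a minimal sketch of the onnx2tf workflow, assuming the Python API described in that project's README (the model path is illustrative):

import onnx2tf

# Converts an NCHW ONNX model to an NHWC TensorFlow saved_model;
# float32/float16 .tflite files are written to the output folder as well.
onnx2tf.convert(
    input_onnx_file_path="model.onnx",
    output_folder_path="saved_model",
)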