PINTO0309 / openvino2tensorflow

This script converts ONNX/OpenVINO IR models to TensorFlow saved_model, tflite, h5, tfjs, tftrt (TensorRT), CoreML, EdgeTPU, ONNX, and pb formats. The typical pipeline is: PyTorch (NCHW) -> ONNX (NCHW) -> OpenVINO (NCHW) -> openvino2tensorflow -> TensorFlow/Keras (NHWC/NCHW) -> TFLite (NHWC/NCHW). It also supports conversions between .pb, saved_model, .tflite, and ONNX (.pb to saved_model, saved_model to .pb, .pb to .tflite, saved_model to .tflite, and saved_model to ONNX). Docker-based build environments are supported, with direct access to the host PC's GUI and camera for verifying operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) are supported.
MIT License

Add disable_per_channel flag to the tool #114

Closed — zye1996 closed this issue 2 years ago

zye1996 commented 2 years ago

Per-tensor quantization is required for tflite models to speed up on certain devices, such as the A311D. Starting with TensorFlow 2.8, a converter flag is available to turn off the default per-channel quantization and use per-tensor quantization instead.
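A minimal sketch of what the requested behavior looks like on the TFLite converter side, assuming TensorFlow >= 2.8; the tiny Keras model and representative dataset below are placeholders standing in for the saved_model that openvino2tensorflow emits:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; in practice this would be the saved_model
# produced by openvino2tensorflow.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

def representative_dataset():
    # Calibration samples for full-integer quantization.
    for _ in range(10):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Converter attribute introduced around TF 2.8 that switches weight
# quantization from per-channel to per-tensor (experimental API,
# subject to change between releases).
converter._experimental_disable_per_channel = True

tflite_model = converter.convert()  # serialized flatbuffer bytes
```

Exposing this as a `disable_per_channel` command-line flag in the tool would amount to setting that converter attribute when the flag is passed.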

PINTO0309 commented 2 years ago

LGTM