PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob, and .pb from .tflite. Supports building environments with Docker, with direct access to the host PC's GUI and camera to verify operation. NVIDIA GPU (dGPU) and Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8-quantized models.
https://qiita.com/PINTO
MIT License

Order of input channels switched on ONNX #26

Closed hovnatan closed 2 years ago

hovnatan commented 2 years ago

Issue Type

Others

OS

Mac OS

OS architecture

aarch64

Programming Language

C++

Framework

TensorFlowLite

Download URL for tflite file

https://github.com/google/mediapipe/blob/master/mediapipe/modules/face_detection/face_detection_short_range.tflite

Convert Script

tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_pb
tflite2tensorflow --model_path face_detection_short_range.tflite --flatc_path ../flatc --schema_path ../schema.fbs --output_onnx --onnx_opset 9

Description

The input to the TFLite model is 1x128x128x3 (NHWC), but it is switched to 1x3x128x128 (NCHW) in the ONNX output.
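For context, the shape change reflects the two common tensor layouts: TFLite uses NHWC (batch, height, width, channels) while ONNX conventionally uses NCHW (batch, channels, height, width). A minimal NumPy sketch (using a dummy zero-filled batch, not the actual model input) of the transpose a caller would need to feed the converted ONNX model:

```python
import numpy as np

# Dummy input batch in the TFLite layout: NHWC = (batch, height, width, channels).
nhwc = np.zeros((1, 128, 128, 3), dtype=np.float32)

# The exported ONNX model expects NCHW = (batch, channels, height, width),
# so the axes must be permuted before inference.
nchw = nhwc.transpose(0, 3, 1, 2)

print(nhwc.shape)  # (1, 128, 128, 3)
print(nchw.shape)  # (1, 3, 128, 128)
```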

Relevant Log Output

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
inputs:
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'input',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 128, 128,   3], dtype=int32),
 'shape_signature': array([  1, 128, 128,   3], dtype=int32),
 'sparsity_parameters': {}}
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 175,
 'name': 'regressors',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,  16], dtype=int32),
 'shape_signature': array([  1, 896,  16], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 174,
 'name': 'classificators',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 896,   1], dtype=int32),
 'shape_signature': array([  1, 896,   1], dtype=int32),
 'sparsity_parameters': {}}
ONNX convertion started =============================================================

ONNX convertion complete! - saved_model/model_float32.onnx

Source code for simple inference testing code

No response

PINTO0309 commented 2 years ago

@hovnatan Is your desired configuration the one in the image below?

PINTO0309 commented 2 years ago

A --disable_onnx_nchw_conversion option has been added.
commits:
https://github.com/PINTO0309/tflite2tensorflow/commit/ad2146f50266b3097266cd94161bd68d891575d7
https://github.com/PINTO0309/tflite2tensorflow/commit/00d714d92cf94dc4f005005f0425edd8a03f103d
release: https://github.com/PINTO0309/tflite2tensorflow/releases/tag/v1.18.4