PINTO0309 / tflite2tensorflow

Generate saved_model, tfjs, tf-trt, EdgeTPU, CoreML, quantized tflite, ONNX, OpenVINO, Myriad Inference Engine blob and .pb from .tflite. Supports building environments with Docker, with direct access to the host PC GUI and camera to verify operation. NVIDIA GPU (dGPU) support. Intel iHD GPU (iGPU) support. Supports inverse quantization of INT8-quantized models.
https://qiita.com/PINTO
MIT License

TypeError: Interpreter._get_tensor_details() missing 1 required positional argument: 'subgraph_index' #42

Closed · Sukeysun closed this 4 months ago

Sukeysun commented 4 months ago

Issue Type

Bug

OS

Ubuntu

OS architecture

x86_64

Programming Language

Python

Framework

ONNX

Download URL for tflite file

https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_full/float16/latest/pose_landmarker_full.task

(unzip pose_landmarker_full.task to extract the .tflite files)

Convert Script

 tflite2tensorflow \
  --model_path ./pose_detector.tflite \
  --flatc_path ../flatbuffers/build/flatc \
  --schema_path ../schema.fbs \
  --model_output_path pose_detection \
  --output_pb

Description

I want to convert the pose_detector.tflite file to an ONNX file. I followed the steps in the README: Step 1 : Generating saved_model and FreezeGraph (.pb)

 tflite2tensorflow \
  --model_path ./pose_detector.tflite \
  --flatc_path ../flatbuffers/build/flatc \
  --schema_path ../schema.fbs \
  --model_output_path pose_detection \
  --output_pb

Relevant Log Output

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: Placeholder
{'dtype': <class 'numpy.float32'>,
 'index': 0,
 'name': 'input_1',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([  1, 224, 224,   3], dtype=int32),
 'shape_signature': array([  1, 224, 224,   3], dtype=int32),
 'sparsity_parameters': {}}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: PAD
{'builtin_options': {},
 'builtin_options_type': 'PadOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [0, 1],
 'opcode_index': 0,
 'outputs': [2]}
------------ 1
Traceback (most recent call last):
  File "/home/ai-server/.local/bin/tflite2tensorflow", line 819, in make_graph
    paddings_array = tensors[op['inputs'][1]]
                     ~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ai-server/.local/bin/tflite2tensorflow", line 6615, in <module>
    main()
  File "/home/ai-server/.local/bin/tflite2tensorflow", line 5882, in main
    TFLite_Detection_PostProcess_flg = make_graph(
                                       ^^^^^^^^^^^
  File "/home/ai-server/.local/bin/tflite2tensorflow", line 822, in make_graph
    paddings_detail = interpreter._get_tensor_details(op['inputs'][1])
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Interpreter._get_tensor_details() missing 1 required positional argument: 'subgraph_index'
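
For context on the error above: newer TensorFlow builds changed the signature of the private `Interpreter._get_tensor_details()` method to require a second `subgraph_index` argument, so a call site written against the old one-argument signature raises this `TypeError`. A minimal compatibility shim (my own sketch, not part of tflite2tensorflow; the dummy interpreter classes are hypothetical stand-ins) could detect which signature is present and adapt:

```python
# Sketch of a version-compatibility shim for the private
# Interpreter._get_tensor_details() API, whose signature gained a
# `subgraph_index` parameter in newer TensorFlow releases.
import inspect

def get_tensor_details_compat(interpreter, tensor_index, subgraph_index=0):
    """Call _get_tensor_details with whichever signature this TF build has."""
    fn = interpreter._get_tensor_details
    params = inspect.signature(fn).parameters
    if 'subgraph_index' in params:
        return fn(tensor_index, subgraph_index)  # newer TF signature
    return fn(tensor_index)                      # older TF signature

# Hypothetical stand-ins for illustration only, mimicking both signatures:
class OldInterpreter:
    def _get_tensor_details(self, tensor_index):
        return {'index': tensor_index}

class NewInterpreter:
    def _get_tensor_details(self, tensor_index, subgraph_index):
        return {'index': tensor_index, 'subgraph': subgraph_index}

print(get_tensor_details_compat(OldInterpreter(), 5))   # {'index': 5}
print(get_tensor_details_compat(NewInterpreter(), 5))   # {'index': 5, 'subgraph': 0}
```

In practice the simpler route is the one taken in the reply below: run the converter inside the project's Docker image, which pins a TensorFlow version whose API matches the script.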


Source code for simple inference testing code

No response
PINTO0309 commented 4 months ago
docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  ghcr.io/pinto0309/tflite2tensorflow:latest

tflite2tensorflow \
  --model_path ./pose_detector.tflite \
  --flatc_path ../flatc \
  --schema_path ../schema.fbs \
  --output_pb
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: RESHAPE
{'builtin_options_type': 'NONE',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [439, 392],
 'opcode_index': 8,
 'outputs': [440]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ op: CONCATENATION
{'builtin_options': {'axis': 1, 'fused_activation_function': 'NONE'},
 'builtin_options_type': 'ConcatenationOptions',
 'custom_options_format': 'FLEXBUFFERS',
 'inputs': [393, 416, 440],
 'opcode_index': 9,
 'outputs': [441]}
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
outputs:
{'dtype': <class 'numpy.float32'>,
 'index': 441,
 'name': 'Identity',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([   1, 2254,   12], dtype=int32),
 'shape_signature': array([   1, 2254,   12], dtype=int32),
 'sparsity_parameters': {}}
{'dtype': <class 'numpy.float32'>,
 'index': 429,
 'name': 'Identity_1',
 'quantization': (0.0, 0),
 'quantization_parameters': {'quantized_dimension': 0,
                             'scales': array([], dtype=float32),
                             'zero_points': array([], dtype=int32)},
 'shape': array([   1, 2254,    1], dtype=int32),
 'shape_signature': array([   1, 2254,    1], dtype=int32),
 'sparsity_parameters': {}}
TensorFlow/Keras model building process complete!
saved_model / .pb output started ====================================================
saved_model / .pb output complete!
saved_model_cli show --dir saved_model/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 224, 224, 3)
        name: input_1:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['Identity'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 2254, 12)
        name: Identity:0
    outputs['Identity_1'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 2254, 1)
        name: Identity_1:0
  Method name is: tensorflow/serving/predict
zhenhao-huang commented 4 months ago

It's hard to pull down the Docker image.

PINTO0309 commented 4 months ago
python -m tf2onnx.convert \
--opset 11 \
--tflite pose_landmarks_detector.tflite \
--output pose_landmarks_detector.onnx \
--inputs-as-nchw input_1 \
--dequantize

onnxsim pose_landmarks_detector.onnx pose_landmarks_detector.onnx
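
For anyone unsure what `--inputs-as-nchw input_1` does in the tf2onnx command above: the TFLite model expects NHWC input (the (1, 224, 224, 3) shape shown in the saved_model signature), while most ONNX runtimes conventionally use NCHW. The flag makes the exported ONNX graph accept NCHW input. A minimal numpy sketch of the equivalent layout change on the caller's side:

```python
# Demonstrates the NHWC -> NCHW layout change implied by --inputs-as-nchw.
import numpy as np

# Input in TFLite's NHWC layout: (batch, height, width, channels)
nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)

# Reorder axes to ONNX's conventional NCHW: (batch, channels, height, width)
nchw = nhwc.transpose(0, 3, 1, 2)
print(nchw.shape)  # (1, 3, 224, 224)
```

With the flag, this transpose is baked into the exported graph, so callers feed NCHW tensors directly.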