google-ai-edge / LiteRT

LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
https://ai.google.dev/edge/litert
Apache License 2.0

The _get_tensor_details() method crashes while trying to read a tflite model. #131

Open pkgoogle opened 3 days ago

pkgoogle commented 3 days ago

Original Issue: https://github.com/tensorflow/tensorflow/issues/56283

Opening on behalf of @fatcat-z

Click to expand!

### Issue Type

Bug

### Source

binary

### Tensorflow Version

2.9.1 (v2.9.0-18-gd8ce9f9c301)

### Custom Code

No

### OS Platform and Distribution

Ubuntu 18.04.5 LTS

### Mobile device

Ubuntu 18.04.5 LTS

### Python version

3.8.5

### Bazel version

_No response_

### GCC/Compiler version

_No response_

### CUDA/cuDNN version

_No response_

### GPU model and memory

_No response_

### Current Behaviour?

```shell
After constructing a tf.lite.Interpreter() with the specified tflite model, we call the
_get_tensor_details() method to get some tensor details. With certain tflite models this
crashes with a segmentation fault. Please use the code below to reproduce the issue.
```

### Standalone code to reproduce the issue

```python
import os

import tensorflow as tf
from tf2onnx.tflite.Model import Model

# face_detection_full_range_sparse.tflite can be downloaded from:
# https://github.com/google/mediapipe/tree/master/mediapipe/modules/face_detection/face_detection_full_range_sparse.tflite
# tflite_path = os.path.join(os.path.dirname(__file__), "face_detection_full_range_sparse.tflite")

# pose_detection.tflite can be downloaded from:
# https://github.com/google/mediapipe/blob/master/mediapipe/modules/pose_detection/pose_detection.tflite
tflite_path = os.path.join(os.path.dirname(__file__), "pose_detection.tflite")

with open(tflite_path, 'rb') as f:
    buf = f.read()

buf = bytearray(buf)
model = Model.GetRootAsModel(buf, 0)

tensor_shapes = {}
interpreter = tf.lite.Interpreter(tflite_path)
interpreter.allocate_tensors()

tensor_cnt = model.Subgraphs(0).TensorsLength()
try:
    for i in range(tensor_cnt):
        name = model.Subgraphs(0).Tensors(i).Name().decode()
        details = interpreter._get_tensor_details(i)
        print("==== name: ", name)
        if "shape_signature" in details:
            tensor_shapes[name] = details["shape_signature"].tolist()
        elif "shape" in details:
            tensor_shapes[name] = details["shape"].tolist()
except Exception as e:
    print("Error loading model into tflite interpreter: %s" % e)

print("End")
```

### Relevant log output

```shell
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
==== name: input_1
==== name: model_1/model/zero_padding2d/Pad/paddings
==== name: model_1/model/zero_padding2d/Pad
==== name: model_1/model/batch_normalization/FusedBatchNormV3
==== name: model_1/model/batch_normalization/FusedBatchNormV3_dequantize
==== name: model_1/model/conv2d/Conv2D
==== name: model_1/model/conv2d/Conv2D_dequantize
==== name: model_1/model/re_lu/Relu6;model_1/model/batch_normalization/FusedBatchNormV3;model_1/model/batch_normalization_1/FusedBatchNormV3;model_1/model/depthwise_conv2d/depthwise;model_1/model/regressor_person_16_NO_PRUNING/Conv2D;model_1/model/conv2d/Conv2D
==== name: model_1/model/batch_normalization_1/FusedBatchNormV3
==== name: model_1/model/batch_normalization_1/FusedBatchNormV3_dequantize
==== name: model_1/model/batch_normalization_1/FusedBatchNormV3;model_1/model/depthwise_conv2d/depthwise;model_1/model/regressor_person_16_NO_PRUNING/Conv2D
==== name: model_1/model/batch_normalization_1/FusedBatchNormV3;model_1/model/depthwise_conv2d/depthwise;model_1/model/regressor_person_16_NO_PRUNING/Conv2D_dequantize
==== name: model_1/model/re_lu_1/Relu6;model_1/model/batch_normalization_1/FusedBatchNormV3;model_1/model/depthwise_conv2d/depthwise;model_1/model/regressor_person_16_NO_PRUNING/Conv2D
==== name: model_1/model/batch_normalization_2/FusedBatchNormV3;model_1/model/conv2d_1/Conv2D
==== name: model_1/model/batch_normalization_2/FusedBatchNormV3;model_1/model/conv2d_1/Conv2D_dequantize
Segmentation fault (core dumped)
```
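As a side note on the repro's shape-handling branch: the `shape_signature` entry carries dynamic dimensions (encoded as -1), while `shape` carries the static shape, so preferring `shape_signature` when present is the right fallback order. A minimal sketch of that logic, with a hypothetical `extract_shape` helper (not part of the TF Lite API) operating on a plain dict standing in for one entry of the interpreter's tensor details:

```python
# Hypothetical helper mirroring the repro script's fallback: prefer the
# dynamic "shape_signature" and fall back to the static "shape".
def extract_shape(details):
    """Return the tensor shape as a plain list, or None if unavailable."""
    if "shape_signature" in details:
        return list(details["shape_signature"])
    if "shape" in details:
        return list(details["shape"])
    return None

# A dict standing in for one tensor-details entry; -1 marks a dynamic dim.
print(extract_shape({"shape_signature": [1, -1, 3], "shape": [1, 224, 3]}))
```

Depending on what the crash turns out to be, iterating over the public `interpreter.get_tensor_details()` list instead of calling the private `_get_tensor_details(i)` per flatbuffer index may also sidestep the issue, since the private method does no bounds validation on the index it is given.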
gaikwadrahul8 commented 3 days ago

This issue, originally reported by @fatcat-z, has been moved to this dedicated repository for LiteRT to enhance issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.

We appreciate your understanding and look forward to your continued involvement.