tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

TFLite ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors. #48199

Closed yann-pourcenoux closed 3 years ago

yann-pourcenoux commented 3 years ago

System information

Describe the current behavior When I run the TFLite benchmark on my phone with my model and try to run it on the GPU, I get the following error, even though every tensor has a static size: ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.

Describe the expected behavior I took care to give every tensor a static size, so I expect the model to run fully on the GPU.

Standalone code to reproduce the issue The network can be downloaded [removed].

I run the benchmark tool, which can be downloaded from TensorFlow here, with the following commands:

adb push android_arm_benchmark_model /data/local/tmp/benchmark
adb shell chmod +x /data/local/tmp/benchmark
adb push model.tflite /data/local/tmp/model.tflite
adb shell "/data/local/tmp/benchmark"  --graph="/data/local/tmp/model.tflite" --input_layer=input --input_layer_shape=1,360,640,3 --use_gpu=true

Regarding conversion, I am converting my model using this script:

import tensorflow as tf
import tensorflow_datasets as tfds

model = load_model()  # I would rather not share this

# Create dataset
def get_func():
    return lambda obj: func(obj)

def func(obj, y=640, x=360):
    image = obj["image"]
    shape = tf.shape(image)
    height, width = shape[0], shape[1]
    ratio_y = y / height
    ratio_x = x / width
    image = tf.image.resize(image, (y, x))
    scale = tf.convert_to_tensor([ratio_y, ratio_x, ratio_y, ratio_x], dtype=tf.float32)
    return {"Input": image, "Scale_Input": scale}

dataset = tfds.load(
    name="coco/2017",
    split="train",
)
dataset = dataset.map(get_func())

# Transform the dataset into a representative dataset as in the TF guide
def representative_dataset_generator():
    for input_value in dataset.batch(1).take(10):
        yield [input_value["Input"], input_value["Scale_Input"]]

# Converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Set the representative dataset in order to quantize the activations
converter.representative_dataset = representative_dataset_generator

# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]

# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

# Additional tricks
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops (note: this overwrites the INT8-only setting above)
]

tf_lite_quant_model = converter.convert()

# saving converted model in TFLite file
with open("model.tflite", "wb") as tf_file:
    tf_file.write(tf_lite_quant_model)
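
A quick way to sanity-check that a conversion like the one above actually produced uint8 inputs and outputs is to load the result back into the TFLite Interpreter. This is a hedged sketch using a toy stand-in model and random representative data, since the real model is not shared:

```python
import numpy as np
import tensorflow as tf

# Toy model with a fully static input shape, standing in for the real one.
inp = tf.keras.Input(shape=(4,), batch_size=1)
out = tf.keras.layers.Dense(2)(inp)
tiny = tf.keras.Model(inp, out)

# Random calibration data standing in for the COCO-based representative dataset.
def rep_data():
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(tiny)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
quant_model = converter.convert()

# Inspect the converted model's I/O dtypes.
interpreter = tf.lite.Interpreter(model_content=quant_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])
```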

impjdi commented 3 years ago

Tensor index can be easily verified if you add a print statement in the HasDynamicTensorImpl function...
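
HasDynamicTensorImpl lives in the TFLite C++ sources, so adding a print there requires rebuilding the runtime. An alternative that needs no rebuild, sketched here with a toy model and a made-up helper name (`find_dynamic_tensors` is not a TensorFlow API), is to scan each tensor's shape_signature from the Python Interpreter, where a -1 marks a dynamic dimension:

```python
import tensorflow as tf

def find_dynamic_tensors(model_content):
    """Return names of tensors whose shape_signature has a -1 (dynamic) dim."""
    interpreter = tf.lite.Interpreter(model_content=model_content)
    interpreter.allocate_tensors()
    return [
        d["name"]
        for d in interpreter.get_tensor_details()
        if -1 in d["shape_signature"]
    ]

# Toy model with a fully static input shape; no dynamic tensors expected.
inp = tf.keras.Input(shape=(4,), batch_size=1)
out = tf.keras.layers.Dense(2)(inp)
model_bytes = tf.lite.TFLiteConverter.from_keras_model(
    tf.keras.Model(inp, out)
).convert()
print(find_dynamic_tensors(model_bytes))  # [] -> all tensors static
```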

yann-pourcenoux commented 3 years ago

I am sorry, but I cannot find any documentation about this function. Could you please tell me where it is defined and how I can use it?

yann-pourcenoux commented 3 years ago

The problem was that I indeed had a "hidden" dynamic-sized tensor. Changing from:

indices = tf.where(cond)
count = tf.shape(indices)[0]

to:

count = tf.math.reduce_sum(tf.where(cond, 1, 0))

fixed the problem.
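
The difference can be seen by tracing both versions as concrete functions (a sketch, not from the thread): tf.where(cond) with a single argument returns the indices of the True entries, so the number of rows depends on the data and the traced shape is (None, 1), while the elementwise select followed by reduce_sum always yields a scalar with a fully static shape.

```python
import tensorflow as tf

spec = tf.TensorSpec([4], tf.bool)

# Single-argument tf.where: returns indices of True entries, so the row
# count is data-dependent and the traced shape is dynamic.
where_fn = tf.function(lambda c: tf.where(c)).get_concrete_function(spec)

# Three-argument tf.where + reduce_sum: elementwise select, then a
# reduction to a scalar with a fully static shape.
count_fn = tf.function(
    lambda c: tf.math.reduce_sum(tf.where(c, 1, 0))
).get_concrete_function(spec)

print(where_fn.output_shapes)  # (None, 1) -> dynamic
print(count_fn.output_shapes)  # ()        -> static

cond = tf.constant([True, False, True, True])
print(int(count_fn(cond)))     # 3
```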

google-ml-butler[bot] commented 3 years ago

Are you satisfied with the resolution of your issue? Yes No