google-coral / edgetpu

Coral issue tracker (and legacy Edge TPU API source)
https://coral.ai
Apache License 2.0

ERROR: Didn't find op for builtin opcode 'FILL' version '3' when compile integer quantized mask_rcnn_inception model #434

Closed: JiashuGuo closed this issue 3 years ago

JiashuGuo commented 3 years ago

Does anyone know whether the mask_rcnn model can run on the Coral Edge TPU? I got an error when running the compiler on the quantized mask_rcnn TFLite model:

Edge TPU Compiler version 15.0.340273435
ERROR: Didn't find op for builtin opcode 'FILL' version '3'

ERROR: Registration failed.

Invalid model: model_int_quantized.tflite
Model could not be parsed
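
For context, an integer-quantized model like this is typically produced with full-integer post-training quantization along these lines (a minimal sketch, not the reporter's actual conversion script; the SavedModel path, input shape, and representative dataset are hypothetical):

import numpy as np
import tensorflow as tf

# Hypothetical SavedModel path for illustration.
converter = tf.lite.TFLiteConverter.from_saved_model("mask_rcnn_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Yield a few samples matching the model's (assumed) input shape.
    for _ in range(10):
        yield [np.random.random((1, 1024, 1024, 3)).astype(np.float32)]

converter.representative_dataset = representative_dataset
# Integer-only ops, as the Edge TPU compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_int_quantized.tflite", "wb") as f:
    f.write(converter.convert())
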
hjonnala commented 3 years ago

Please try with the new Edge TPU Compiler version 16.0.384591198.
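
Recompiling with the updated compiler looks like the following (a sketch that assumes edgetpu_compiler 16.x is installed and on PATH; the model filename is taken from the log above):

import subprocess

# Run the Edge TPU compiler CLI on the quantized model and print its output.
result = subprocess.run(
    ["edgetpu_compiler", "model_int_quantized.tflite"],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr)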

JiashuGuo commented 3 years ago

> Please try with the new Edge TPU Compiler version 16.0.384591198.

Got another error with version 16:

Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
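
The delegate error means the graph still contains dynamic-sized tensors, which the Edge TPU does not support. One way to inspect the model's input shapes for dynamic dimensions (a sketch using the standard TFLite Python API; the filename is assumed):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int_quantized.tflite")
for detail in interpreter.get_input_details():
    # A -1 in shape_signature indicates a dynamic dimension.
    print(detail["name"], detail["shape"], detail["shape_signature"])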

hjonnala commented 3 years ago

Have you tried running inference using the TFLite Interpreter?

Can you please share your tflite model?

JiashuGuo commented 3 years ago

Here is the link to the tflite model

hjonnala commented 3 years ago

I am not able to run inference with the TFLite Interpreter. Please make sure you can run inference with the TFLite Interpreter before compiling for the Edge TPU.

Please check this link for how to run inference with the TFLite Interpreter: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python


JiashuGuo commented 3 years ago

> I am not able to run inference with the TFLite Interpreter. Please make sure you can run inference with the TFLite Interpreter before compiling for the Edge TPU.
>
> Please check this link for how to run inference with the TFLite Interpreter: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python

May I know what version of TensorFlow you are using?

I ran the model with the interpreter and got a different error:

Traceback (most recent call last):
  File "converter.py", line 49, in <module>
    interpreter.invoke()
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 858, in invoke
    self._interpreter.Invoke()
RuntimeError: Input tensor 1279 lacks data

And the way I start the interpreter is:

# tflite_model is the flatbuffer produced earlier in converter.py
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
interpreter.invoke()  # note: no input tensors are set before invoke()

hjonnala commented 3 years ago

I am using tflite_runtime 2.5.0.post1 from here: https://github.com/google-coral/pycoral/releases

import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)

interpreter.invoke()

# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
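
With the standalone tflite_runtime wheel mentioned above, only the import and constructor change; a minimal sketch of the same smoke test:

import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
input_shape = input_details[0]['shape']
# Cast the random sample to whatever dtype the model expects (e.g. uint8).
input_data = np.random.random_sample(input_shape).astype(input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
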
hjonnala commented 3 years ago

Feel free to reopen the issue if you are able to run inference using the TFLite Interpreter but are still unable to compile with the Edge TPU compiler.