Closed: JiashuGuo closed this issue 3 years ago.
Please try with the new Edge TPU Compiler version 16.0.384591198.
Got another error with version 16:
Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
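For anyone hitting the same dynamic-tensor error: the Edge TPU delegate only supports static-sized tensors, so the usual workaround is to bake a fixed input shape into the model before conversion. A minimal sketch, not from this thread: the model and the 512x512x3 shape are placeholders.

import tensorflow as tf

# Placeholder Keras model; substitute your own. weights=None avoids a download.
model = tf.keras.applications.MobileNetV2(input_shape=(512, 512, 3), weights=None)

# Wrap the model in a tf.function with a fully static input signature so the
# converted TFLite graph contains no dynamic-sized tensors.
run_model = tf.function(lambda x: model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 512, 512, 3], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()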
Have you tried to run inference using the TFLite Interpreter?
Can you please share your TFLite model?
I am not able to run inference on the TFLite Interpreter. Please make sure you are able to run inference with the TFLite Interpreter before compiling for the Edge TPU.
Please check this link for how to run inference with the TFLite Interpreter: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python
May I know what version of TensorFlow you are using?
I ran the model with the interpreter and got a different error:
Traceback (most recent call last):
  File "converter.py", line 49, in <module>
    interpreter.invoke()
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/interpreter.py", line 858, in invoke
    self._interpreter.Invoke()
RuntimeError: Input tensor 1279 lacks data
And the way I start the interpreter is:
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
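# note: no input is set via set_tensor() before invoke(), which is why
# the interpreter raises the "lacks data" RuntimeError above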
interpreter.invoke()
I am using tflite_runtime 2.5.0.post1 from here: https://github.com/google-coral/pycoral/releases
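As a side note, a hedged sketch: with the standalone runtime wheel, the interpreter is imported from tflite_runtime rather than tensorflow, and the same set-input-then-invoke flow applies (the model path is a placeholder).

from tflite_runtime.interpreter import Interpreter

# Standalone runtime equivalent of tf.lite.Interpreter
interpreter = Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()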
import numpy as np
import tensorflow as tf
# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Test the model on random input data.
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# The function `get_tensor()` returns a copy of the tensor data.
# Use `tensor()` in order to get a pointer to the tensor.
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)
Feel free to reopen the issue if you are able to run inference using the TFLite Interpreter but are not able to compile with the Edge TPU compiler.
Does anyone know if the mask_rcnn model can run on the Coral Edge TPU? I got an error when running the compiler on the quantized mask_rcnn TFLite model:
TF version: 2.7.0-dev20210804
The pretrained mask_rcnn model was downloaded from: link
Here is the code to quantize and convert the model to TFLite, in Colab: here
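For context, a minimal sketch of the full-integer post-training quantization recipe the Edge TPU compiler expects; the saved-model path and the random calibration data are placeholders, not the exact setup from the Colab above.

import numpy as np
import tensorflow as tf

# Calibration data for quantization; replace the random samples with real images.
def representative_dataset():
    for _ in range(100):
        yield [np.random.random_sample((1, 512, 512, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer ops, as required for ops to map onto the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("mask_rcnn_quant.tflite", "wb") as f:
    f.write(tflite_model)

The resulting file would then be passed to the compiler with: edgetpu_compiler mask_rcnn_quant.tflite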