Closed: SergioPN closed this issue 3 years ago
Yes, most of the model is mapped to the CPU. As of now we don't have any workaround for this, and we will let you know if the AutoML team is able to fix it.
Thanks for the quick response @hjonnala.
Unfortunately, the Coral team is no longer supporting AutoML models.
Hi, I have trained several models in the past using the GCP Object Detection AutoML for Edge (now called Vertex AI). I usually compiled those models with edgetpu_compiler version 14.1, since version 15 won't work on them. Since the beginning of August I can't compile the models I train on Vertex AI the way I normally do. I have tried the latest version of the compiler, version 16, which lets me compile the model, but with poor results:
Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 12
Number of operations that will run on CPU: 287
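The compiler summary above can be turned into a concrete number with a small stdlib-only helper (offload_ratio is a hypothetical name, not part of any Coral tooling), which makes clear how little of the model is actually offloaded:

```python
import re

def offload_ratio(compiler_log: str) -> float:
    """Parse edgetpu_compiler's op-count summary and return the
    fraction of operations mapped to the Edge TPU."""
    tpu = int(re.search(r"Edge TPU: (\d+)", compiler_log).group(1))
    cpu = int(re.search(r"CPU: (\d+)", compiler_log).group(1))
    return tpu / (tpu + cpu)

log = """Number of operations that will run on Edge TPU: 12
Number of operations that will run on CPU: 287"""
print(f"{offload_ratio(log):.1%}")  # -> 4.0%
```

So only about 4% of the graph runs on the accelerator, which explains why inference is no faster than plain CPU.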
As you can imagine, this is far from optimal. But even if I try to run this model on the USB Coral TPU, I get the following error.
PyCoral 1.0.1 Python 3.8 Windows
ValueError: Op builtin_code out of range: 130. Are you using old TFLite binary with newer model? Registration failed.
PyCoral 2.0.0 with tflite_runtime 2.5.0.post1 Python 3.8 Windows
ValueError: Didn't find op for builtin opcode 'BROADCAST_TO' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model? Registration failed.
The biggest difference I see in the models that Vertex AI outputs is that in the past the runtime version was 1.14.0 and now it's 2.5.0. I am assuming this is the problem.
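One quick way to check which runtime a given .tflite export targets: the converter usually embeds a metadata entry named "min_runtime_version" as raw ASCII in the file, so a heuristic byte scan can surface it without a full FlatBuffer parser. This is a diagnostic sketch only (guess_min_runtime_version is a made-up helper, and the regex may pick up an unrelated version string in unusual files):

```python
import re

def guess_min_runtime_version(model_bytes):
    """Heuristically extract the min_runtime_version string embedded
    in a .tflite file's metadata. Scans raw bytes rather than parsing
    the FlatBuffer schema, so treat the result as a hint only."""
    if b"min_runtime_version" not in model_bytes:
        return None  # metadata entry absent (common in very old exports)
    match = re.search(rb"(\d+\.\d+\.\d+)", model_bytes)
    return match.group(1).decode() if match else None

# Usage with a real export would be:
#   with open("model.tflite", "rb") as f:  # placeholder filename
#       print(guess_min_runtime_version(f.read()))
```

If the old exports report 1.14.0 and the new ones 2.5.0, that matches the symptom: ops like BROADCAST_TO (builtin code 130) only exist in newer TFLite runtimes, so older interpreter binaries fail registration.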
Is there any fix for this issue?
Thanks