Closed: lsq314 closed this issue 4 years ago
Also, I can run inference using the Edge TPU API without problems.
@lsq314 have you upgraded your libedgetpu runtime to version 13?
Ahh, can you also upgrade libedgetpu1-std?
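For reference, a minimal sketch of checking and upgrading the runtime on the Dev Board, assuming the standard Coral apt repository is already configured (as it is on stock Mendel images); the exact version string printed will depend on your setup:

# List the installed Edge TPU packages and their versions,
# so you can see whether the runtime is older than version 13.
dpkg -l | grep edgetpu

# Refresh the package index and pull in the latest runtime library.
sudo apt-get update
sudo apt-get install --only-upgrade libedgetpu1-std

After upgrading, rerun the classify_image.py example; the runtime and the compiled *_edgetpu.tflite model need matching versions for allocate_tensors() to succeed.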
@Namburger I checked, and it has not been upgraded. I will try this.
@Namburger Thank you very much, that fixed it.
Hi all,
I am using the Coral Dev Board with Mendel OS 4.0 (Day) and tflite_runtime version 2.1.0. However, when I run the following example (officially provided):
python3 classify_image.py --model models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --labels models/inat_bird_labels.txt --input images/parrot.jpg
I get the following error message:

Traceback (most recent call last):
  File "classify_image.py", line 122, in <module>
    main()
  File "classify_image.py", line 100, in main
    interpreter.allocate_tensors()
  File "/home/mendel/.local/lib/python3.7/site-packages/tflite_runtime/interpreter.py", line 242, in allocate_tensors
    return self._interpreter.AllocateTensors()
  File "/home/mendel/.local/lib/python3.7/site-packages/tflite_runtime/interpreter_wrapper.py", line 115, in AllocateTensors
    return _interpreter_wrapper.InterpreterWrapper_AllocateTensors(self)
RuntimeError: Internal: Unsupported data type: 0
Node number 1 (EdgeTpuDelegateForCustomOp) failed to prepare.
The error is similar when I run the detection example. Could someone please help me figure this out?