Closed busyyang closed 3 years ago
@busyyang The very first operator, 'Conv2D', itself operates on float32 and cannot be mapped to the TPU. Please perform quantization on this model and then try to recompile. For more details on quantization, please see: https://coral.ai/docs/edgetpu/models-intro/#quantization
@busyyang are you able to quantize the model and compile it with edgetpu compiler?
@hjonnala Thanks, I quantized the model with tf.lite.TFLiteConverter and compiled it with the web-based Edge TPU compiler. It works well now.
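For reference, the conversion step described above can be sketched with post-training full-integer quantization, which is what the Edge TPU requires. This is a minimal, self-contained sketch: the tiny Keras model and the random representative dataset are placeholders standing in for the real trained network and real calibration images.

```python
# Post-training full-integer quantization sketch for the Edge TPU.
# The tiny model below is a stand-in for a real trained network.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # In practice, yield a few hundred real training images;
    # random data here is only a placeholder for calibration.
    for _ in range(8):
        yield [np.random.rand(1, 48, 48, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so every op can map to the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_quant = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_quant)
```

The resulting model_quant.tflite can then be passed to the Edge TPU compiler (CLI or web-based).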
Well, may I ask another question? I found that there is a postprocessing step if I download a trained model from https://coral.ai/models/. In my custom project, how can I bake the postprocessing into the _edgetpu.tflite instead of doing it in post-processing code?
Hi, there are two ways to create models that are compatible with the Edge TPU.
You can use quantization-aware models to avoid post processing.
Please refer to these tutorials for examples.
Thanks, it helps me a lot.
I am trying to run an MTCNN model on the Edge TPU. I have .h5 models and converted them to CPU .tflite models with the Python API, which gave me the .tflite models. Furthermore, I used the Edge TPU compiler from Google Colab to convert the CPU .tflite models to TPU .tflite models.
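The original post's conversion snippet is not shown; a minimal stand-in for the .h5-to-.tflite step might look like the following. The tiny model and filenames are placeholders, not the real MTCNN networks.

```python
# Stand-in for the elided conversion snippet: build and save a tiny
# Keras model as .h5, then convert it to a float CPU .tflite model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12, 12, 3)),
    tf.keras.layers.Conv2D(10, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
model.save("pnet_standin.h5")  # placeholder filename

loaded = tf.keras.models.load_model("pnet_standin.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(loaded)
tflite_model = converter.convert()
with open("pnet_standin.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that a float model produced this way is exactly what the Edge TPU compiler cannot map on-chip; the quantization step discussed in the replies is still required.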
The conversion succeeded and a _edgetpu.tflite model was generated, but the compiler log reports no on-chip parameters, and all operations are mapped to the CPU, not the Edge TPU. They are just ordinary operations like Conv2D, MaxPool2D and so on, which I thought should be processed on the Edge TPU. I have uploaded all the model files to Drive.