ubsuny / g2-coral

MIT License

quantization for compiling #32

Open CarlatBuffalo opened 3 years ago

CarlatBuffalo commented 3 years ago

The developed model needs to be quantized before it can be compiled for the Edge TPU on the Coral board.

CarlatBuffalo commented 3 years ago

This is an important step.

To run a model on a Google ML device, it (a Keras model in my case) first needs to be converted into a TF Lite model so that it can be compiled into a hardware-compatible model. However, if the model contains operations or floating-point data types that the compiler does not support, quantization becomes necessary before compiling. There are two quantization methods: quantization-aware training and post-training quantization. The first inserts pseudo integer nodes before the model is trained and gives better accuracy, though it is recommended only for TF v1. The latter is applied after training and needs an additional representative dataset for quantization so that it does not impair the model's accuracy. Be aware that some TF Lite operations cannot be quantized by the post-training method. See https://coral.ai/docs/edgetpu/models-intro/#quantization for details.
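For reference, the post-training route described above can be sketched with the standard `tf.lite.TFLiteConverter` API. This is a minimal example, not the project's actual model: the tiny Keras network, the input shape, and the random representative dataset are all placeholders, and a real conversion would feed ~100 samples of genuine training data instead.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for the trained Keras model from this issue.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Hypothetical calibration data: in practice, yield real samples that
    # match the model's input shape so activation ranges are estimated well.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to int8 ops; conversion fails if an op cannot be
# quantized, which surfaces the unsupported-op problem mentioned above.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting fully integer-quantized `.tflite` file is what gets passed to the Edge TPU compiler (`edgetpu_compiler model_quant.tflite` on a machine with the Coral tools installed).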