alibaba / lightweight-neural-architecture-search

This is a collection of our zero-cost NAS and efficient vision applications.
Apache License 2.0

Converting a mixed-precision quantization model for deployment on MCU #21

Open erectbranch opened 1 year ago

erectbranch commented 1 year ago

Thanks for this amazing repo. I'm currently working on training an efficient low-precision backbone and deploying it on an ARM Cortex-M7 MCU with limited resources (512 kB RAM, 2 MB Flash). I believe I need to convert the mixed-precision quantized model to a TFLite model to achieve this.

Could you please advise on how to perform this conversion and deployment? Thanks.
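
For reference, the flow I have been considering is PyTorch → ONNX → TensorFlow SavedModel → TFLite with full-integer post-training quantization. Below is a rough sketch of that generic pipeline (not something from this repo): the backbone, file names, and calibration data are all placeholders, and as far as I can tell TFLite's full-integer path quantizes uniformly to int8, so the per-layer bitwidths from a mixed-precision search would not be preserved as-is.

```python
# Minimal sketch of a PyTorch -> ONNX -> TensorFlow -> TFLite (full-int8) flow.
# All model/file names here are placeholders, not part of this repo.
import numpy as np
import torch
import onnx
from onnx_tf.backend import prepare  # pip install onnx-tf
import tensorflow as tf

# Placeholder stand-in for the searched backbone; substitute the actual model.
backbone = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 10),
).eval()

# 1) Export the PyTorch model to ONNX.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(backbone, dummy, "backbone.onnx",
                  opset_version=13,
                  input_names=["input"], output_names=["output"])

# 2) Convert the ONNX graph to a TensorFlow SavedModel.
tf_rep = prepare(onnx.load("backbone.onnx"))
tf_rep.export_graph("backbone_saved_model")

# 3) Post-training full-integer quantization to a .tflite flatbuffer.
def representative_data_gen():
    # Placeholder calibration data; use real preprocessed images in practice.
    for _ in range(100):
        yield [np.random.rand(1, 3, 224, 224).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("backbone_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("backbone_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

On the MCU side my plan would be to run the resulting `.tflite` file with TensorFlow Lite for Microcontrollers, which should only fit the 512 kB RAM / 2 MB Flash budget if the int8 model and its tensor arena are small enough. I would appreciate any correction if the repo supports a more direct export path for its mixed-precision models.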