espressif / esp-tflite-micro

TensorFlow Lite Micro for Espressif Chipsets

Why is the micro_speech model so small? My model is more than 10MB 😭 (TFMIC-12) #43

Open xxlxmd opened 1 year ago

xxlxmd commented 1 year ago

Hi, hello everyone. I would like to know why the micro_speech model (model.h / model.cc) is so small, only about 120 KB. After training with TensorFlow, my PB model is about 250 KB; converted to TensorFlow Lite it becomes 5 MB, and after running it through xxd it exceeds 20 MB, which cannot be used on the ESP32. This is very frustrating. I just want to replace the original model. Can you disclose how the model was trained and its parameters? I tried both a TensorFlow Lite Model Maker speech model and a simple 20-word model, but neither produced a final model.h / model.cc of only a few hundred KB. Thank you; this has also troubled me on this project.
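(Aside: below is a minimal Python sketch, not the official tooling, of what `xxd -i model.tflite > model.cc` effectively produces. The file names and the `tflite_to_c_array` helper are hypothetical. It illustrates why the .cc text file looks roughly 4x larger than the .tflite binary: each byte becomes about six characters of text, but the compiled array in flash is still exactly the original number of bytes.)

```python
def tflite_to_c_array(tflite_path: str, cc_path: str, var_name: str = "g_model") -> None:
    """Write a .tflite flatbuffer out as a C byte array, similar to `xxd -i`."""
    with open(tflite_path, "rb") as f:
        data = f.read()

    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(data)};")

    with open(cc_path, "w") as f:
        f.write("\n".join(lines) + "\n")

    # The binary length is what actually ends up in flash, not the .cc text size.
    print(f".tflite binary size: {len(data)} bytes")


tflite_to_c_array("model.tflite", "model.cc")
```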

vikramdattu commented 1 year ago

Hi @xxlxmd, do you mean the model size rises after converting for tflite-micro? Please check that you have selected the proper quantization setup (int8 quantization). A file size of more than 4x after conversion to .cc is misleading; you should check the array size instead. You may refer to this for the quantization setup: https://github.com/tensorflow/tflite-micro/blob/main/tensorflow/lite/micro/examples/hello_world/quantization/ptq.py#L96
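(For context, here is a minimal sketch of full-integer post-training quantization along the lines of the linked ptq.py. The tiny `trained_model` and the random `representative_samples` are placeholders; replace them with your own speech model and a small set of real inputs.)

```python
import numpy as np
import tensorflow as tf

# Placeholder model and data for illustration only; swap in your trained model
# and ~100 real input samples for calibration.
trained_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4),
])
representative_samples = np.random.rand(100, 49, 40, 1).astype(np.float32)


def representative_dataset():
    # Yield one sample at a time so the converter can calibrate int8 ranges.
    for sample in representative_samples:
        yield [np.expand_dims(sample, axis=0)]


converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization so weights and activations are stored as int8.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"int8 model size: {len(tflite_model)} bytes")
```

Compared with a float32 model, this typically cuts the flatbuffer size by roughly 4x, which is the number that matters for the array you embed on the ESP32.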

SaketNer commented 10 months ago

Hi @xxlxmd, you can also try pruning the model to reduce the number of parameters. In addition, quantize the model to int8; I am assuming your model is currently in float32 format, so quantization alone will reduce the model size by almost 4x.
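(Here is a minimal pruning sketch using the TensorFlow Model Optimization Toolkit; the placeholder `model`, `x_train`, and `y_train` stand in for your own speech model and data, and the sparsity/step values are illustrative. Note that zeroed weights mainly help size after compression or when combined with the int8 quantization shown above.)

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot  # pip install tensorflow-model-optimization

# Placeholder model and data for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4),
])
x_train = np.random.rand(64, 49, 40, 1).astype(np.float32)
y_train = np.random.randint(0, 4, size=(64,))

# Gradually prune 50% of the weights during fine-tuning.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.5, begin_step=0, end_step=1000)
pruned_model = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned_model.compile(optimizer="adam",
                     loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])
pruned_model.fit(x_train, y_train, epochs=2,
                 callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before converting to .tflite (and then quantize to int8).
final_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
```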