espressif / esp-dl

Espressif deep-learning library for AIoT applications
MIT License

Model accuracy drops from 90% to 51% when quantizing from float32 to int8 (AIV-559) #105

Open cab1211 opened 1 year ago

cab1211 commented 1 year ago

The model's accuracy drops from 90% to 51% when quantized from float32 to int8, but it does not drop with int16.
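For context, here is a minimal, self-contained sketch (plain numpy, not the esp-dl quantization toolkit) of symmetric per-tensor quantization. It illustrates one common reason int8 hurts accuracy while int16 does not: a few large outlier values inflate the quantization scale, so the many small values lose most of their resolution at 8 bits but survive at 16 bits. The data, function names, and values are hypothetical and only for illustration.

```python
# Hypothetical illustration (not the esp-dl API): simulate symmetric
# per-tensor quantization and compare 8-bit vs 16-bit rounding error.
import numpy as np

def fake_quantize(x, n_bits):
    # Symmetric per-tensor quantization: scale chosen from the max |value|.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale  # dequantize back to float to measure the error

rng = np.random.default_rng(0)
# Mostly small weights plus a few large outliers -- the outliers set the
# scale, crushing the resolution available to the small values at int8.
w = np.concatenate([rng.normal(0, 0.02, 10_000), [1.5, -1.8]])

for bits in (8, 16):
    err = np.abs(fake_quantize(w, bits) - w).mean()
    print(f"int{bits}: mean abs quantization error = {err:.6f}")
```

If the model's weights or activations show this kind of distribution, the int8 error can be large enough to flip predictions, which would match the observed drop from 90% to 51% while int16 stays unaffected.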