espressif / esp-dl

Espressif deep-learning library for AIoT applications

After running tools/quantization_tool/examples/example.py, the generated mnist_model_example_optimized.onnx runs inference more slowly and takes up more disk space. Why is that? (AIV-608) #123

Open 1Yanxiaolin1 opened 1 year ago

1Yanxiaolin1 commented 1 year ago

The printed results are as follows:

accuracy of int8 model is: 0.977000
accuracy of fp32 model is: 0.977000
int8-model test time is 1.8347067832946777
float-model test time is 0.36646580696105957
Size of mnist_model_example_optimized.onnx: 439206 bytes
Size of mnist_model_example.onnx: 439119 bytes

Could someone please explain why this happens? Thanks!
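For anyone who wants to reproduce this comparison independently of example.py, here is a minimal sketch that times both ONNX files and reports their on-disk size. It is an illustration under assumptions, not the script from the repo: the file names are taken from the output above, while the use of onnxruntime, the run count, and the randomly generated float32 input are my own choices.

```python
import os
import time

import numpy as np
import onnxruntime as ort

# File names taken from the output above; both are expected in the working directory.
FP32_MODEL = "mnist_model_example.onnx"
OPTIMIZED_MODEL = "mnist_model_example_optimized.onnx"


def benchmark(model_path: str, runs: int = 1000) -> float:
    """Run `runs` single-sample inferences and return the elapsed wall-clock seconds."""
    session = ort.InferenceSession(model_path)
    inp = session.get_inputs()[0]
    # Replace dynamic dimensions (reported as strings/None) with 1, then feed random data.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    dummy = np.random.rand(*shape).astype(np.float32)  # assumes a float32 input tensor
    start = time.time()
    for _ in range(runs):
        session.run(None, {inp.name: dummy})
    return time.time() - start


for path in (FP32_MODEL, OPTIMIZED_MODEL):
    print(f"{path}: {os.path.getsize(path)} bytes, "
          f"{benchmark(path):.3f} s for 1000 runs")
```

Timing both models over the same number of runs with identical input data, and reading size with os.path.getsize, keeps the comparison on equal footing; it does not, of course, reflect performance on the ESP32 target itself.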