XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

Which model is used in the benchmark: fixed-point, quantized, or original? #15

Closed ysh329 closed 6 years ago

ysh329 commented 6 years ago

Thanks in advance. 🙇

I only found three quantized models in the output directory, as shown below:

root@cross-compile:/opt/mobile-ai-bench/output# ls
chairs_224x224.raw          inception_v3.tflite            mobilenet-v2.dlc          threads2
dog.npy                     keyboard_299x299.dat           mobilenet_v2.pb           threads4
inception_v3.data           libmace.zip                    prepare_report.csv        vgg16_caffe_gpu.data
inception_v3.dlc            mobilenet_quant_v1_224.tflite  run_report.csv            vgg16_caffe_gpu.pb
inception_v3_dsp.data       mobilenet_v1_1.0_224.tflite    squeezenet_v11.data       vgg16.data
inception_v3_dsp.pb         mobilenet_v1.data              squeezenet_v11.dlc        vgg16.dlc
inception_v3.param          mobilenet-v1.dlc               squeezenet_v11.pb         vgg16.pb
inception_v3.pb             mobilenet_v1.pb                tensorflow-1.9.0-rc1.zip  vgg16_quantized.dlc
inception_v3_quantized.dlc  mobilenet_v2.data              threads1
root@cross-compile:/opt/mobile-ai-bench/output# ls *quant*
inception_v3_quantized.dlc  mobilenet_quant_v1_224.tflite  vgg16_quantized.dlc
root@cross-compile:/opt/mobile-ai-bench/output#
lee-bin commented 6 years ago

inception_v3_quantized.dlc and vgg16_quantized.dlc are for SNPE to run on DSP. mobilenet_quant_v1_224.tflite is for TFLITE to run on CPU, but it's not used right now.
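
One way to check which .tflite file is quantized is to inspect its tensor dtypes with the standard TensorFlow Lite Python interpreter: a quantized model typically reports uint8 tensors, while a float model reports float32. A minimal sketch, assuming TensorFlow is installed and using the file names from the listing above:

import tensorflow as tf

def tflite_dtypes(model_path):
    # Load the model and report its input/output tensor dtypes.
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    in_dtypes = [d["dtype"] for d in interpreter.get_input_details()]
    out_dtypes = [d["dtype"] for d in interpreter.get_output_details()]
    return in_dtypes, out_dtypes

# uint8 tensors indicate a quantized model, float32 a float (fp32) one.
print(tflite_dtypes("mobilenet_quant_v1_224.tflite"))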

ysh329 commented 6 years ago

Does that mean the TFLITE benchmark results below are from a quantized or fixed-point model?

4 threads

model_name   device_name  soc     abi          runtime  MACE     SNPE     NCNN      TF
InceptionV3  Mi Note 3    sdm660  arm64-v8a    CPU      582.939           1625.255  820.591
InceptionV3  Mi Note 3    sdm660  armeabi-v7a  CPU      776.011  639.447  1842.057  912.057
MobileNetV1  Mi Note 3    sdm660  arm64-v8a    CPU      49.202            63.042    92.208
MobileNetV1  Mi Note 3    sdm660  armeabi-v7a  CPU      62.663   416.158  71.467    103.255

If so, how can I turn off TFLITE's quantization / fixed-point mode? I want to benchmark all frameworks with fp32, with no fixed-point or quantization enabled.

lee-bin commented 6 years ago

Quantization for TFLITE is not used, so the benchmark results are all fp32.
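
The same dtype check as above can be applied to the float models from the listing to confirm they are fp32 (again a sketch, assuming TensorFlow is installed and the files are in the current directory):

import tensorflow as tf

# float32 input tensors here would be consistent with fp32 TFLITE results.
for path in ("mobilenet_v1_1.0_224.tflite", "inception_v3.tflite"):
    interpreter = tf.lite.Interpreter(model_path=path)
    interpreter.allocate_tensors()
    print(path, [d["dtype"] for d in interpreter.get_input_details()])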

ysh329 commented 6 years ago

@lee-bin Thanks