Closed — ysh329 closed this issue 6 years ago
inception_v3_quantized.dlc and vgg16_quantized.dlc are for SNPE to run on the DSP. mobilenet_quant_v1_224.tflite is for TFLite to run on the CPU, but it is not used right now.
Does this mean the TFLite benchmark results are quantized (fixed-point)?
4 threads
model_name | device_name | soc | abi | runtime | MACE | SNPE | NCNN | TF
---|---|---|---|---|---|---|---|---
InceptionV3 | Mi Note 3 | sdm660 | arm64-v8a | CPU | 582.939 | N/A | 1625.255 | 820.591
InceptionV3 | Mi Note 3 | sdm660 | armeabi-v7a | CPU | 776.011 | 639.447 | 1842.057 | 912.057
MobileNetV1 | Mi Note 3 | sdm660 | arm64-v8a | CPU | 49.202 | N/A | 63.042 | 92.208
MobileNetV1 | Mi Note 3 | sdm660 | armeabi-v7a | CPU | 62.663 | 416.158 | 71.467 | 103.255
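For easier comparison, the latencies above can be normalized against the fastest framework in each row. This is just a quick sketch: the numbers are copied from the table, assumed to be per-inference latencies in milliseconds, and `None` stands in for the missing SNPE arm64 results.

```python
# Latencies copied from the table above (assumed milliseconds).
# None marks the cells where no SNPE arm64 result was reported.
rows = {
    ("InceptionV3", "arm64-v8a"):   {"MACE": 582.939, "SNPE": None,    "NCNN": 1625.255, "TF": 820.591},
    ("InceptionV3", "armeabi-v7a"): {"MACE": 776.011, "SNPE": 639.447, "NCNN": 1842.057, "TF": 912.057},
    ("MobileNetV1", "arm64-v8a"):   {"MACE": 49.202,  "SNPE": None,    "NCNN": 63.042,   "TF": 92.208},
    ("MobileNetV1", "armeabi-v7a"): {"MACE": 62.663,  "SNPE": 416.158, "NCNN": 71.467,   "TF": 103.255},
}

def relative_slowdown(latencies):
    """Divide each framework's latency by the fastest latency in the row."""
    valid = {name: ms for name, ms in latencies.items() if ms is not None}
    best = min(valid.values())
    return {name: round(ms / best, 2) for name, ms in valid.items()}

for (model, abi), latencies in rows.items():
    print(model, abi, relative_slowdown(latencies))
```

For example, on InceptionV3 / armeabi-v7a the fastest framework is SNPE, so its ratio is 1.0 and the others are reported relative to it.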
If so, how can I turn off TFLite's quantization (fixed-point) mode? I want to benchmark all frameworks in fp32, with no fixed-point or quantization enabled.
Quantization is not used for TFLite here, so the benchmark results are all fp32.
@lee-bin Thanks
Thanks in advance. 🙇
I only found three quantized models under the output directory, as listed below: