XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

TF.LITE GPU runtime error (Android Debug Bridge version 1.0.40) #16

Closed. PennySHE closed this issue 6 years ago.

PennySHE commented 6 years ago

```
adbd is already running as root
remount succeeded
710000000
Push third_party/tflite/tensorflow/contrib/lite/lib/arm64-v8a/libtensorflowLite.so to /data/local/tmp/aibench
Equal checksum with output/inception_v3.tflite and /data/local/tmp/aibench/inception_v3.tflite
Equal checksum with output/mobilenet_quant_v1_224.tflite and /data/local/tmp/aibench/mobilenet_quant_v1_224.tflite
Equal checksum with output/mobilenet_v1_1.0_224.tflite and /data/local/tmp/aibench/mobilenet_v1_1.0_224.tflite
Push bazel-bin/aibench/benchmark/model_benchmark to /data/local/tmp/aibench
Run /data/local/tmp/aibench/model_benchmark
('TFLITE', 'CPU', 'MobileNetV1')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
sleep 10
benchmarking: MobileNetV1,3,0
benchmark: MobileNetV1,3,0,8.184,52.478
('TFLITE', 'CPU', 'MobileNetV2')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'CPU', 'SqueezeNetV11')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'CPU', 'InceptionV3')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
sleep 10
benchmarking: InceptionV3,3,0
benchmark: InceptionV3,3,0,9.005,491.866
('TFLITE', 'CPU', 'VGG16')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'GPU', 'MobileNetV1')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'GPU', 'MobileNetV2')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'GPU', 'SqueezeNetV11')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'GPU', 'InceptionV3')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
('TFLITE', 'GPU', 'VGG16')
WARNING: linker: Warning: "/data/local/tmp/aibench/model_benchmark" unused DT entry: DT_RPATH (type 0xf arg 0xea3) (ignoring)
```

lee-bin commented 6 years ago

TFLITE does not support GPU.

PennySHE commented 6 years ago

@lee-bin, thanks so much for the comment. I have some questions about TFLITE.

  1. For TFLITE non-quantized and quantized models, how does the benchmark determine which runtime (CPU/GPU/DSP/NPU) each kind of model runs on?
  2. If TFLITE doesn't support GPU, do you know when, or in which version, it will support GPU?
lee-bin commented 6 years ago
  1. TFLITE only supports CPU, so there is nothing to determine; it always runs on CPU.
  2. You can consult the TensorFlow team about that.
PennySHE commented 6 years ago

@lee-bin, does mobile-ai-bench support the Android NN API?

lee-bin commented 6 years ago

No, and that is not something for mobile-ai-bench itself to support.

PennySHE commented 6 years ago

Thanks, and sorry for the confusion; let me rephrase my question. My Android device has NNAPI implemented, with an ML HAL that enables hardware acceleration. Does the current version of mobile-ai-bench support using Android NN (NNAPI) when benchmarking?

lydoc commented 6 years ago

TFLITE has an interface to enable NNAPI: https://github.com/XiaoMi/mobile-ai-bench/blob/master/aibench/executors/tflite/tflite_executor.cc#L41
MACE/NCNN/SNPE do not support Android NNAPI.
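
For reference, here is a minimal sketch of how the TF Lite C++ API of that era (tensorflow/contrib/lite) lets a caller toggle NNAPI on an interpreter. This is not the exact aibench executor code, just the general pattern; the model path is a placeholder and error handling is omitted.

```cpp
#include <memory>

#include "tensorflow/contrib/lite/interpreter.h"
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"

int main() {
  // Load a TFLite flatbuffer model (placeholder path).
  auto model = tflite::FlatBufferModel::BuildFromFile(
      "/data/local/tmp/aibench/mobilenet_v1_1.0_224.tflite");

  // Build an interpreter with the builtin op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Ask TFLite to delegate supported ops to Android NNAPI;
  // ops NNAPI cannot handle fall back to the CPU kernels.
  interpreter->UseNNAPI(true);

  interpreter->AllocateTensors();
  interpreter->Invoke();  // fill input tensors before invoking in real code
  return 0;
}
```

Whether NNAPI actually accelerates anything then depends on the device's NNAPI driver (the ML HAL you mentioned); without a vendor driver, execution falls back to CPU.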

PennySHE commented 6 years ago

@lydoc @lee-bin, thanks so much for all your answers.