XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

Bench doesn't support SNPE's armv8? #14

Closed: ysh329 closed this issue 5 years ago

ysh329 commented 6 years ago

I found only armv7 benchmark results for SNPE, but no armv8a results.

benchmark (#94358260) · Jobs · Liangliang He / mobile-ai-bench · GitLab

However, I found armv8a settings in SNPE's docs, as below:

Snapdragon Neural Processing Engine SDK: SNPE Setup
https://developer.qualcomm.com/docs/snpe/setup.html

lee-bin commented 6 years ago

You can pull the latest code; arm64-v8a for SNPE is supported now.

ysh329 commented 6 years ago

@lee-bin Thanks, bin. I have a new problem: missing input file '@snpe//:lib/aarch64-android-gcc4.9/libgnustl_shared.so'

The error occurs when executing the command below:

 python tools/benchmark.py \
   --output_dir=output \
   --frameworks=MACE,SNPE,TFLITE,NCNN \
   --runtimes=CPU \
   --target_abis=armeabi-v7a,arm64-v8a \
   --num_threads=4

The error log is as below:

benchmarking: VGG16,2,0
benchmark: VGG16,2,0,915.926,761.494
Prepare to run models on arm64-v8a
* Build //aibench/benchmark:model_benchmark with ABI arm64-v8a
INFO: Analysed target //aibench/benchmark:model_benchmark (21 packages loaded).
INFO: Found 1 target...
ERROR: missing input file '@snpe//:lib/aarch64-android-gcc4.9/libgnustl_shared.so'
ERROR: /root/.cache/bazel/_bazel_root/cbdcfbd4f6dd765900be10977b9f0f82/external/snpe/BUILD.bazel:29:1: @snpe//:snpe_arm64-v8a: missing input file '@snpe//:lib/aarch64-android-gcc4.9/libgnustl_shared.so'
Target //aibench/benchmark:model_benchmark failed to build
ERROR: /root/.cache/bazel/_bazel_root/cbdcfbd4f6dd765900be10977b9f0f82/external/snpe/BUILD.bazel:29:1 1 input file(s) do not exist
INFO: Elapsed time: 61.799s, Critical Path: 1.66s
INFO: 5 processes, local.
FAILED: Build did NOT complete successfully
Traceback (most recent call last):
  File "tools/benchmark.py", line 225, in <module>
    main(unused_args=[sys.argv[0]] + unparsed)
  File "tools/benchmark.py", line 208, in main
    runtimes)
  File "/opt/mobile-ai-bench/tools/sh_commands.py", line 210, in bazel_build
    *bazel_args)
  File "/usr/local/lib/python2.7/dist-packages/sh.py", line 1413, in __call__
    raise exc
sh.ErrorReturnCode_1:

  RAN: /usr/local/bin/bazel build //aibench/benchmark:model_benchmark --config android --cpu=arm64-v8a --action_env=ANDROID_NDK_HOME=/opt/android-ndk-r15c --define mace=true --define snpe=true --define tflite=true --define ncnn=true

  STDOUT:

  STDERR:

lee-bin commented 6 years ago

You should read the README.md and copy the corresponding libgnustl_shared.so to your SNPE path.
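
In practice that means copying the library out of the NDK into the SNPE directory named in the error. A sketch, assuming the NDK r15c gnu-libstdc++ layout (ANDROID_NDK_HOME is /opt/android-ndk-r15c in the log above) and that ${SNPE_ROOT} points at the unpacked SDK:

 cp ${ANDROID_NDK_HOME}/sources/cxx-stl/gnu-libstdc++/4.9/libs/arm64-v8a/libgnustl_shared.so \
    ${SNPE_ROOT}/lib/aarch64-android-gcc4.9/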

ysh329 commented 6 years ago

@lee-bin Thanks. I found that SNPE doesn't support multi-threading, according to the "Snapdragon Neural Processing Engine SDK: Benchmarking" docs.

Besides, I want to ask: does the bench set the benchmark operating mode to sustained_high_performance or high_performance?

lee-bin commented 6 years ago

From what we have tried, SNPE does use multiple threads; it's just that we can't set how many. We do not set sustained_high_performance or high_performance. You can set it around here if you want: https://github.com/XiaoMi/mobile-ai-bench/blob/4a486e67f1a1a2847e84795d441ddbd766f8f83e/aibench/executors/snpe/snpe_executor.cc#L55
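
For anyone looking at that spot, a minimal sketch of what pinning the profile could look like, assuming the SNPE SDK's zdl::SNPE::SNPEBuilder API (the actual code in snpe_executor.cc may differ):

  // Sketch only: assumes the SNPE SDK's SNPEBuilder API as documented by
  // Qualcomm; not the exact code in snpe_executor.cc.
  #include <memory>

  #include "DlContainer/IDlContainer.hpp"
  #include "DlSystem/DlEnums.hpp"
  #include "SNPE/SNPE.hpp"
  #include "SNPE/SNPEBuilder.hpp"

  std::unique_ptr<zdl::SNPE::SNPE> BuildSnpe(
      zdl::DlContainer::IDlContainer *container,
      zdl::DlSystem::Runtime_t runtime) {
    zdl::SNPE::SNPEBuilder builder(container);
    return builder
        .setRuntimeProcessor(runtime)
        // Pin the operating mode; SUSTAINED_HIGH_PERFORMANCE trades peak
        // clocks for thermally stable numbers over long benchmark runs.
        .setPerformanceProfile(
            zdl::DlSystem::PerformanceProfile_t::SUSTAINED_HIGH_PERFORMANCE)
        .build();
  }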

ysh329 commented 6 years ago

@lee-bin Thanks! 🙇

ysh329 commented 5 years ago

@lee-bin hi, bin!

I found some performance-related parameters in the SNPE docs, as below:

   -s SLEEP, --sleep SLEEP
                        Set number of seconds to sleep between runs e.g. 20
                        seconds
  -b USERBUFFER_MODE, --userbuffer_mode USERBUFFER_MODE
                        [EXPERIMENTAL] Enable user buffer mode, default to
                        float, can be tf8exact0
  -p PERFPROFILE,     --perfprofile PERFPROFILE
                        Set the benchmark operating mode (system_settings, power_saver, balanced,
                        default, high_performance, sustained_high_performance, burst)
  -l PROFILINGLEVEL,  --profilinglevel PROFILINGLEVEL
                        Set the profiling level mode (off, basic, detailed). Default is basic.
                        Basic profiling only applies to DSP runtime.
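
(These flags belong to SNPE's snpe_bench.py benchmarking script, which this repo does not use; a hypothetical invocation, with config.json as a placeholder, could be:)

 python snpe_bench.py -c config.json \
   -p sustained_high_performance \
   -l detailed \
   -s 20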

I want to ask:

  1. What is userbuffer_mode? Do you have any idea?
  2. Why is there a parameter named sleep? Does a sleep break between runs give a better benchmark result?

lee-bin commented 5 years ago

  1. If you want to ask a different question, maybe you should open a new issue instead of reopening a closed one.

  2. You are asking a question about the SNPE benchmarking tool, which is not used in this repo; maybe you can find more help at https://developer.qualcomm.com/forums/software/qualcomm-neural-processing-sdk or https://stackoverflow.com/questions/tagged/snpe.

  3. I think userbuffer_mode means using a user-supplied buffer, which reduces copy overhead, and sleep is for cooling down the device so you get a stable benchmark result.
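
To make point 3 concrete, a rough sketch of feeding SNPE a user-backed input buffer, assuming the SDK's IUserBuffer/UserBufferMap API; the tensor name and strides are illustrative placeholders:

  // Sketch only: assumes SNPE's user-buffer API; the tensor name "input"
  // and the strides argument are illustrative placeholders.
  #include <memory>
  #include <vector>

  #include "DlSystem/IUserBuffer.hpp"
  #include "DlSystem/TensorShape.hpp"
  #include "DlSystem/UserBufferMap.hpp"
  #include "SNPE/SNPE.hpp"
  #include "SNPE/SNPEFactory.hpp"

  void RunWithUserBuffer(zdl::SNPE::SNPE *snpe,
                         std::vector<float> &appBuffer,
                         const zdl::DlSystem::TensorShape &strides) {
    // SNPE reads straight from application-owned memory instead of
    // copying the data into an internal ITensor first.
    zdl::DlSystem::UserBufferEncodingFloat encoding;
    std::unique_ptr<zdl::DlSystem::IUserBuffer> input =
        zdl::SNPE::SNPEFactory::getUserBufferFactory().createUserBuffer(
            appBuffer.data(), appBuffer.size() * sizeof(float),
            strides, &encoding);

    zdl::DlSystem::UserBufferMap inputs, outputs;
    inputs.add("input", input.get());
    // Output user buffers would be created and added the same way.
    snpe->execute(inputs, outputs);
  }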

ysh329 commented 5 years ago

@lee-bin Thanks, bin!