XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

ImportError: No module named jinja2 #38

Closed WangFengtu1996 closed 4 years ago

WangFengtu1996 commented 4 years ago

Hi all,

I want to run the benchmarking tool on my device over SSH. When I run

bash tools/benchmark.sh --benchmark_option=Performance \
                        --target_abis=aarch64

Then I need to enter my password again and again.

But in the end I get the error shown in the attached screenshot (Selection_005).

I use Anaconda to create a Python virtual environment; my installed Python packages are shown in the attached screenshot (Selection_006). And I am sure that I have the latest version of Jinja2.

Thanks for your help.

WangFengtu1996 commented 4 years ago

Is there a requirement on the version of Jinja2?

lee-bin commented 4 years ago

You can try bazel clean --expunge and run again.

WangFengtu1996 commented 4 years ago

Thanks for your time. But I get another error when I run the command

bash tools/benchmark.sh --benchmark_option=Performance --executors=MACE  --target_abis=aarch64

After entering the password in the terminal again and again, I got the following:

bash: /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor: No such file or directory
bash: /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor: No such file or directory
bash: /sys/class/devfreq/1d84000.ufshc/governor: No such file or directory
bash: /sys/class/devfreq/5000000.qcom,kgsl-3d0/governor: No such file or directory
bash: /sys/class/devfreq/aa00000.qcom,vidc:arm9_bus_ddr/governor: No such file or directory
bash: /sys/class/devfreq/aa00000.qcom,vidc:bus_cnoc/governor: No such file or directory
bash: /sys/class/devfreq/aa00000.qcom,vidc:venus_bus_ddr/governor: No such file or directory
bash: /sys/class/devfreq/aa00000.qcom,vidc:venus_bus_llcc/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,cpubw/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,gpubw/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,kgsl-busmon/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,l3-cdsp/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,l3-cpu0/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,l3-cpu4/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,llccbw/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,memlat-cpu0/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,memlat-cpu4/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,mincpubw/governor: No such file or directory
bash: /sys/class/devfreq/soc:qcom,snoc_cnoc_keepalive/governor: No such file or directory
bash: /sys/class/kgsl/kgsl-3d0/min_pwrlevel: No such file or directory
bash: /sys/class/kgsl/kgsl-3d0/max_pwrlevel: No such file or directory
bash: /sys/class/kgsl/kgsl-3d0/devfreq/governor: No such file or directory
cat: /sys/class/kgsl/kgsl-3d0/gpuclk: No such file or directory
bash: /sys/class/kgsl/kgsl-3d0/idle_timer: No such file or directory
bash: /d/dri/0/debug/core_perf/perf_mode: No such file or directory
bash: /sys/devices/system/cpu/cpu0/core_ctl/min_cpus: No such file or directory
bash: /sys/devices/system/cpu/cpu4/core_ctl/min_cpus: No such file or directory
bash: /proc/sys/kernel/sched_downmigrate: No such file or directory
bash: /sys/block/sda/queue/nr_requests: No such file or directory
bash: /dev/stune/top-app/schedtune.boost: No such file or directory
bash: /dev/stune/top-app/schedtune.prefer_idle: No such file or directory
bash: /sys/module/lpm_levels/parameters/sleep_disabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu0/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu0/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu1/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu1/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu2/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu2/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu3/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu3/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu4/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu4/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu5/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu5/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu6/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu6/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu7/pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/cpu7/rail-pc/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/l3-wfi/idle_enabled: No such file or directory
bash: /sys/module/lpm_levels/L3/llcc-off/idle_enabled: No such file or directory

And in the end, the error is shown in the attached screenshot (Selection_007).

Thanks for your time.

lee-bin commented 4 years ago
  1. If you want to log in without a password, you can copy your public key to the embedded device with the command below.
    cat ~/.ssh/id_rsa.pub | ssh -q {user}@{ip} "cat >> ~/.ssh/authorized_keys"
  2. As for the last error, you can check whether you can run this command on your device.
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

    It's for TF Lite to run on the big cores; if you don't need that, just delete those lines.
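The per-core check in item 2 can be made robust with a small guard loop that skips cores whose cpufreq node is missing, which is exactly the situation on the device in this thread. This is only a sketch of the pattern (which sysfs nodes exist is device-dependent; the `SYS` variable is an assumption added here so the loop can be dry-run against a fake tree):

```shell
# Read the max frequency of every core, silently skipping cores whose
# cpufreq node does not exist (as on the Atlas 200 DK in this thread).
SYS=${SYS:-/sys}
for f in "$SYS"/devices/system/cpu/cpu*/cpufreq/cpuinfo_max_freq; do
    [ -r "$f" ] && echo "$f: $(cat "$f")"
done
```

The same `[ -r "$f" ] &&` (or `[ -w "$f" ] &&` for writes) guard would also silence the long list of "No such file or directory" messages from the tuning script above.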

WangFengtu1996 commented 4 years ago

First, thanks for your time.

When I run the command below on my embedded device

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

I get the message

cat: /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq: No such file or directory

So, which file should I edit to delete those lines and avoid that error?

lee-bin commented 4 years ago

Just the file that raises the exception, which is aibench/python/bench_engine.py.

WangFengtu1996 commented 4 years ago

Hi, I just changed the file aibench/python/bench_engine.py:

def bench_run(abi,
              device,
              host_bin_path,
              bin_name,
              benchmark_option,
              input_dir,
              run_interval,
              num_threads,
              max_time_per_lock,
              benchmark_list,
              executor,
              device_types,
              device_bin_path,
              output_dir,
              dest_path,
              product_model
              ):
    i = 0
    while i < len(benchmark_list):
        print(
            "============================================================="
        )
        print("Trying to lock device %s" % device.address)
        with device_lock(device.address):
            start_time = time.time()
            print("Run on device: %s, %s, %s" %
                  (device.address, product_model, device.target_soc))
            try:
                sh.bash("tools/power.sh", device.address,
                        device.get_shell_prefix(),
                        device.target_soc, _fg=True)
            except Exception as e:
                print("Config power exception %s" % str(e))

            device.exec_command("mkdir -p %s" % device_bin_path)
            device.exec_command("rm -rf %s" %
                                os.path.join(device_bin_path,
                                             "interior"))
            device.exec_command("mkdir %s" %
                                os.path.join(device_bin_path,
                                             "interior"))
            device.exec_command("rm -rf %s" %
                                os.path.join(device_bin_path,
                                             "result.txt"))

            prepare_device_env(device, abi, device_bin_path, executor)

            if benchmark_option == base_pb2.Precision:
                push_precision_files(device, device_bin_path, input_dir)

            host_bin_full_path = "%s/%s" % (host_bin_path, bin_name)
            device_bin_full_path = "%s/%s" % (device_bin_path, bin_name)
            device.exec_command("rm -rf %s" % device_bin_full_path)
            device.push(host_bin_full_path, device_bin_path)
            print("Run %s" % device_bin_full_path)

##            cpu_mask, big_core_num = get_cpu_mask(device)
##            num_threads = min(big_core_num, num_threads)

            # TODO(luxuhui@xiaomi.com): opt the LIBRARY_PATH.
            cmd = "cd %s; ADSP_LIBRARY_PATH='.;/system/lib/rfsa/adsp;" \
                  "/system/vendor/lib/rfsa/adsp;/dsp';" \
                  " LD_LIBRARY_PATH=." % device_bin_path
##            cmd_tflite = cmd + " taskset " + cpu_mask + " ./model_benchmark"
            cmd = cmd + " ./model_benchmark"

            elapse_minutes = 0  # run at least one model
            while elapse_minutes < max_time_per_lock \
                    and i < len(benchmark_list):
                item = benchmark_list[i]
                i += 1
                if item[AIBenchKeyword.executor] != executor or \
                        item[AIBenchKeyword.device_type] not in device_types:
                    continue
                print(
                    base_pb2.ExecutorType.Name(
                        item[AIBenchKeyword.executor]),
                    base_pb2.ModelName.Name(
                        item[AIBenchKeyword.model_name]),
                    base_pb2.DeviceType.Name(
                        item[AIBenchKeyword.device_type]),
                    "Quantized" if item[AIBenchKeyword.quantize]
                    else "Float")
                args = [
                    "--run_interval=%d" % run_interval,
                    "--num_threads=%d " % num_threads,
                    "--benchmark_option=%s" % benchmark_option,
                    "--executor=%d" % item[AIBenchKeyword.executor],
                    "--device_type=%d" % item[AIBenchKeyword.device_type],
                    "--model_name=%d" % item[AIBenchKeyword.model_name],
                    "--quantize=%s" % item[AIBenchKeyword.quantize],
                    ]
                args = ' '.join(args)
##              cmd_run = cmd_tflite if item[AIBenchKeyword.executor] \
##                  == base_pb2.TFLITE else cmd
                cmd_run = cmd
                device.exec_command("%s %s" % (cmd_run, args), _fg=True)
                elapse_minutes = (time.time() - start_time) / 60
            print("Elapse time: %f minutes." % elapse_minutes)
            src_path = os.path.join(device_bin_path, "result.txt")
            tmp_path = os.path.join(output_dir, device.address + "_result.txt")
            device.pull(src_path, tmp_path)
            with open(tmp_path, "r") as tmp, open(dest_path, "a") as dest:
                dest.write(tmp.read())
        # Sleep awhile so that other pipelines can get the device lock.
        time.sleep(run_interval)

Then I run the command bash tools/benchmark.sh --benchmark_option=Performance --executors=MACE --target_abis=aarch64 and I can get the result, but I have some questions about it: run_report.html (Selection_003) and prepare_report.html (Selection_004).

  1. I think the time in run_report should be less than the time in prepare_report?
  2. The effect of the above file is not found?
  3. When I run bash tools/benchmark.sh --benchmark_option=Performance --executors=MACE --device_types=GPU --target_abis=aarch64, I get the message

    
    Elapse time: 0.000000 minutes.
    Error msg 
    
    RAN: /usr/bin/scp -r HwHiAiUser@10.0.20.122:/home/HwHiAiUser/tmp/aibench/result.txt output/10.0.20.122_result.txt
    STDOUT:
    STDERR:
    scp: /home/HwHiAiUser/tmp/aibench/result.txt: No such file or directory

Traceback (most recent call last):
  File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 354, in <module>
    main(unused_args=[sys.argv[0]] + unparsed)
  File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 343, in main
    benchmark_option, benchmark_list, result_files,)
  File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 267, in run_on_device
    FLAGS.output_dir, result_path, product_model)
  File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/aibench/python/bench_engine.py", line 435, in bench_run
    with open(tmp_path, "r") as tmp, open(dest_path, "a") as dest:
IOError: [Errno 2] No such file or directory: 'output/10.0.20.122_result.txt'



Thanks for your help.
lee-bin commented 4 years ago
  1. It does not make sense to compare the times of Prepare and Run. You can refer to the README to learn the meaning of each time.
  2. It is only used to adjust some parameters that some devices support, to accelerate inference. It is not universal, so if your device does not support it, that's fine.
  3. Is this the full log? It means the benchmark does not generate any results.
WangFengtu1996 commented 4 years ago

First, thanks.

  1. When I run the command bash tools/benchmark.sh --benchmark_option=Performance --target_abis=aarch64 --executors=TFLITE I get the error below. What should I do to fix it?

    
    INFO: Found 1 target...
    Target //aibench/python:benchmark up-to-date:
    bazel-bin/aibench/python/benchmark
    INFO: Elapsed time: 0.624s, Critical Path: 0.01s
    Find ssh device:atlas200
    Prepare to run models on aarch64
    Equal checksum with output/benchmark.pb and /home/HwHiAiUser/tmp/aibench/benchmark.pb
    Equal checksum with output/model.pb and /home/HwHiAiUser/tmp/aibench/model.pb
    Equal checksum with output/mobilenet_v1_1.0_224.tflite and /home/HwHiAiUser/tmp/aibench/mobilenet_v1_1.0_224.tflite
    Equal checksum with output/mobilenet_v2_1.0_224.tflite and /home/HwHiAiUser/tmp/aibench/mobilenet_v2_1.0_224.tflite
    Equal checksum with output/inception_v3.tflite and /home/HwHiAiUser/tmp/aibench/inception_v3.tflite
    Equal checksum with output/mobilenet_quant_v1_224.tflite and /home/HwHiAiUser/tmp/aibench/mobilenet_quant_v1_224.tflite
    Equal checksum with output/mobilenet_v2_1.0_224_quant.tflite and /home/HwHiAiUser/tmp/aibench/mobilenet_v2_1.0_224_quant.tflite
    Equal checksum with output/inception_v3_quant.tflite and /home/HwHiAiUser/tmp/aibench/inception_v3_quant.tflite
    * Build //aibench/benchmark:model_benchmark for TFLITE with ABI aarch64
    INFO: Found 1 target...
    ERROR: /home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/aibench/benchmark/BUILD:52:1: C++ compilation of rule '//aibench/benchmark:model_benchmark' failed (Exit 1): aarch64-linux-gnu-gcc failed: error executing command 
    (cd /home/ubuntu/.cache/bazel/_bazel_wang/486fc9acf5761970e8f8cbc79af240fd/execroot/aibench && \
    exec env - \
    ANDROID_NDK_HOME=/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c \
    LD_LIBRARY_PATH=/home/ubuntu/workspace/test_ws/devel/lib:/home/ubuntu/workspace/ros/devel/lib:/opt/ros/kinetic/lib:/home/ubuntu/workspace/awaken_project/demo/libs/x64/ \
    PATH=/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c:/home/ubuntu/.cargo/bin:/opt/ros/kinetic/bin:/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c:/home/ubuntu/.cargo/bin:/home/ubuntu/anaconda3/envs/mobile-ai-bench_env/bin:/home/ubuntu/.cargo/bin:/home/ubuntu/anaconda3/condabin:/home/ubuntu/.cargo/bin:/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/gcc/gcc-arm-none-eabi-7-2017-q4-major/bin:/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c:/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c:/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c \
    PWD=/proc/self/cwd \
    tools/aarch64_compiler/linaro_linux_gcc/aarch64-linux-gnu-gcc '--sysroot=external/gcc_linaro_7_3_1_aarch64_linux_gnu/aarch64-linux-gnu/libc' -U_FORTIFY_SOURCE -fstack-protector -fPIE '-fdiagnostics-color=always' -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 -DNDEBUG -ffunction-sections -fdata-sections '-std=c++11' -fPIC -D_GLIBCXX_USE_C99_MATH_TR1 -DMACE_OBFUSCATE_LITERALS -DGEMMLOWP_USE_OPENMP -DMACE_USE_NNLIB_CAF -ffast-math -Ofast -O3 '-fvisibility=hidden' -ffunction-sections -fdata-sections '-std=c++11' -fPIC -D_GLIBCXX_USE_C99_MATH_TR1 -DMACE_OBFUSCATE_LITERALS -DGEMMLOWP_USE_OPENMP -DMACE_USE_NNLIB_CAF -ffast-math -Ofast -O3 '-fvisibility=hidden' -ffunction-sections -fdata-sections -Wno-ignored-attributes -Wno-unused-function -Wno-sequence-point -Wno-implicit-fallthrough -Wno-ignored-attributes -Wno-unused-function -Wno-sequence-point -Wno-implicit-fallthrough -isystem external/gcc_linaro_7_3_1_aarch64_linux_gnu/aarch64-linux-gnu/include/c++/7.3.1/aarch64-linux-gnu -isystem external/gcc_linaro_7_3_1_aarch64_linux_gnu/aarch64-linux-gnu/include/c++/7.3.1 -isystem external/gcc_linaro_7_3_1_aarch64_linux_gnu/include/c++/7.3.1/aarch64-linux-gnu -isystem external/gcc_linaro_7_3_1_aarch64_linux_gnu/include/c++/7.3.1 -MD -MF bazel-out/aarch64-linux-gnu-opt/bin/aibench/benchmark/_objs/model_benchmark/aibench/benchmark/benchmark_main.d '-frandom-seed=bazel-out/aarch64-linux-gnu-opt/bin/aibench/benchmark/_objs/model_benchmark/aibench/benchmark/benchmark_main.o' -iquote . 
-iquote bazel-out/aarch64-linux-gnu-opt/genfiles -iquote external/com_google_protobuf -iquote bazel-out/aarch64-linux-gnu-opt/genfiles/external/com_google_protobuf -iquote external/bazel_tools -iquote bazel-out/aarch64-linux-gnu-opt/genfiles/external/bazel_tools -iquote external/mace -iquote bazel-out/aarch64-linux-gnu-opt/genfiles/external/mace -iquote external/opencv -iquote bazel-out/aarch64-linux-gnu-opt/genfiles/external/opencv -iquote external/com_github_gflags_gflags -iquote bazel-out/aarch64-linux-gnu-opt/genfiles/external/com_github_gflags_gflags -isystem external/com_google_protobuf/src -isystem bazel-out/aarch64-linux-gnu-opt/genfiles/external/com_google_protobuf/src -isystem external/bazel_tools/tools/cpp/gcc3 -isystem external/opencv/include -isystem bazel-out/aarch64-linux-gnu-opt/genfiles/external/opencv/include -isystem external/com_github_gflags_gflags/include -isystem bazel-out/aarch64-linux-gnu-opt/genfiles/external/com_github_gflags_gflags/include -DAIBENCH_ENABLE_TFLITE -no-canonical-prefixes -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c aibench/benchmark/benchmark_main.cc -o bazel-out/aarch64-linux-gnu-opt/bin/aibench/benchmark/_objs/model_benchmark/aibench/benchmark/benchmark_main.o).
    In file included from aibench/benchmark/benchmark_main.cc:41:0:
    ./aibench/executors/tflite/tflite_executor.h:24:10: fatal error: tensorflow/lite/interpreter.h: No such file or directory
    #include "tensorflow/lite/interpreter.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    compilation terminated.
    Target //aibench/benchmark:model_benchmark failed to build
    INFO: Elapsed time: 0.917s, Critical Path: 0.20s
    Traceback (most recent call last):
    File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 354, in <module>
    main(unused_args=[sys.argv[0]] + unparsed)
    File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 343, in main
    benchmark_option, benchmark_list, result_files,)
    File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/bazel-bin/aibench/python/benchmark.runfiles/aibench/aibench/python/benchmark.py", line 261, in run_on_device
    avail_device_types)
    File "/home/ubuntu/workspace/benchmark_tool/source_code/mobile-ai-bench/aibench/python/bench_engine.py", line 117, in bazel_build
    *bazel_args)
    File "/home/ubuntu/anaconda3/envs/mobile-ai-bench_env/lib/python2.7/site-packages/sh.py", line 1413, in __call__
    raise exc
    sh.ErrorReturnCode_1: 
    
    RAN: /home/ubuntu/bin/bazel build //aibench/benchmark:model_benchmark --config aarch64_linux_gnu --cpu=aarch64 --action_env=ANDROID_NDK_HOME=/home/ubuntu/Downloads/DeepLearning/android-ndk-r17c --define tflite=true
    
    STDOUT:
    
    STDERR:

2. If I want to run the mobile-ai-bench benchmark on the Huawei Ascend 310, could you give some guidance on the code?
* Can mobile-ai-bench support the Ascend 310 with the current code?
* If it can, what work should I do?

Thanks for your time.
lee-bin commented 4 years ago
  1. We only built the arm64-v8a and armeabi-v7a versions of TF Lite. If you need the aarch64 version, you can try building it first. Contributions are welcome!
  2. @lu229 Can you comment on this one?
lu229 commented 4 years ago

@TUT-jiayou As @lee-bin said, we don't build the aarch64 version of TF Lite. Perhaps you can use the old version 494e2ad761446984065696d8d82a889ab696620d to build for aarch64; in the future we will support aarch64 again.
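A commit id names an exact snapshot of a repository, so you can check it out without having authored it yourself; for mobile-ai-bench that would be `git checkout 494e2ad761446984065696d8d82a889ab696620d` inside your clone. A minimal sketch of the mechanism in a throwaway repository:

```shell
# Create a throwaway repo with two commits, then check out the older
# one by its hash -- the same operation works for any published commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "old version"
old=$(git rev-parse HEAD)                 # the "commit id"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "new version"
git checkout -q "$old"                    # detached HEAD at the old snapshot
git log -1 --format=%s                    # prints: old version
```

After the checkout the working tree matches the old snapshot, so a subsequent build uses the old sources.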

WangFengtu1996 commented 4 years ago

@lu229 Is that a commit id? I didn't commit the code. How can I get the TFLITE aarch64 software package?

lu229 commented 4 years ago

@TUT-jiayou Sorry, please ignore the comment that I have deleted.

WangFengtu1996 commented 4 years ago

@lu229 @lee-bin Hi all, if I want to run the mobile-ai-bench benchmark on the Huawei Ascend 310, could you give some guidance on the code?

lu229 commented 4 years ago

@TUT-jiayou Sorry, I missed this problem. Perhaps you can reference the code in /mobile-ai-bench/aibench/executors/hiai, which supports Huawei's HiAI. I find that Huawei has developed the Ascend Developer Kit for the Ascend 310; do you use this Ascend Developer Kit?

WangFengtu1996 commented 4 years ago

@lu229 Thanks for your time. The Atlas 200 DK uses a different deep learning inference engine; its DDK is different from HiAI. So I want to know what other information I should be careful about, beyond what is in the README.

lu229 commented 4 years ago

@TUT-jiayou Sorry, there is no other information I can supply. Perhaps you could reference the code in /mobile-ai-bench/aibench/executors/hiai and implement an Atlas executor, which is the encapsulation of an AI engine in mobile-ai-bench.

WangFengtu1996 commented 4 years ago

@lu229 Hi, about building TFLITE for aarch64: I just found the doc "Build TensorFlow Lite for ARM64 boards", which describes how to build the TensorFlow Lite static library for ARM64-based computers. Is there a difference between a static library and a dynamic library? Could you give me some guidance? Thanks for your time.

lu229 commented 4 years ago

@TUT-jiayou arm64 is aarch64. Static libraries and dynamic libraries are different; you can reference: https://medium.com/@StueyGK/static-libraries-vs-dynamic-libraries-af78f0b5f1e4