
[Competitive Analysis] MindSpore Lite: From Building the Libraries and Converting Models to Benchmarking over ADB #131

Open ysh329 opened 3 years ago

ysh329 commented 3 years ago

MindSpore Lite: From Build and Install to Benchmarking over ADB

MindSpore's GitHub Releases page does not provide prebuilt binaries, so I set out to build from source; I later noticed that the official docs do offer prebuilt libraries and the model conversion tool. The dependencies are described in detail, with links, at https://www.mindspore.cn/tutorial/lite/zh-CN/master/use/build.html. The mindspore lite runtime library itself is straightforward, but the converter pulls in quite a few dependencies; the build doc above lists them all.
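
For reference, a hedged sketch of installing typical converter build dependencies on Ubuntu. The package set below is my assumption from memory of the 1.1-era build doc (which pins exact versions, e.g. Autoconf 2.69 and flex 2.5.35); treat the linked doc as authoritative:

# assumed package set; verify against the build doc linked above
sudo apt-get update
sudo apt-get install -y git gcc g++ cmake autoconf libtool automake flex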

Building and Installing the Model Conversion Tool

With the above dependencies installed, you can build the converter. The model conversion tool comes with the default x86 build only (the on-device builds do not include it). The following command produces an x86_64 Release build containing the model converter, the benchmark tool, and the library cropping tool:

bash build.sh -I x86_64

A successful build ends with a log like this:
[100%] Built target lite-test
Run CPack packaging tool...
CPack: Create package using TGZ
CPack: Install projects
CPack: - Run preinstall target for: Lite
CPack: - Install project: Lite []
CPack: -   Install component: inference-linux-x64
CPack: -   Install component: converter-linux-x64
CPack: Create package
CPack: - package: /home/xxxx/code/mindspore/output/tmp/mindspore-lite-1.1.0-converter-linux-x64.tar.gz generated.
CPack: - checksum file: /home/xxxx/code/mindspore/output/tmp/mindspore-lite-1.1.0-converter-linux-x64.tar.gz.sha256 generated.
CPack: - package: /home/xxxx/code/mindspore/output/tmp/mindspore-lite-1.1.0-inference-linux-x64.tar.gz generated.
CPack: - checksum file: /home/xxxx/code/mindspore/output/tmp/mindspore-lite-1.1.0-inference-linux-x64.tar.gz.sha256 generated.
---------------- mindspore lite: build success ----------------

But running the converter then fails with:

./converter: error while loading shared libraries: libglog.so.0: cannot open shared object file: No such file or directory

Extract mindspore-lite-1.1.0-converter-linux-x64.tar.gz from the output directory and enter mindspore-lite-1.1.0-converter-linux-x64/converter. Running ./converter directly complains about missing shared libraries; following the error messages, copy the corresponding .so files from the third-party and lib directories one level up into the converter directory, run export LD_LIBRARY_PATH=., and ./converter then prints its usage help.
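
Putting those steps together, a minimal sketch. The exact third-party subdirectory layout is an assumption (copy whichever .so files the loader actually complains about), and the binary is invoked as converter_lite, matching the conversion commands below:

cd output
tar xzf mindspore-lite-1.1.0-converter-linux-x64.tar.gz
cd mindspore-lite-1.1.0-converter-linux-x64/converter
cp ../lib/*.so* .                             # runtime libs shipped in the package
cp ../third-party/*/lib/*.so* . 2>/dev/null   # e.g. libglog.so.0; glob path is an assumption
export LD_LIBRARY_PATH=.
./converter_lite                              # should now print the usage help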

Next, convert the Caffe and TFLite versions of MobileNetV1 and V2 into the MindSpore Lite model format:

# note: https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/converter_tool.html

prefix_path="/home/yuanshuai/code/tmp/embedded-ai.bench/"
# caffe
./converter_lite --fmk=CAFFE --modelFile=${prefix_path}/models/caffe_mobilenetv1.prototxt --weightFile=${prefix_path}/models/caffe_mobilenetv1.caffemodel --outputFile=caffe_mobilenetv1
./converter_lite --fmk=CAFFE --modelFile=${prefix_path}/models/caffe_mobilenetv2.prototxt --weightFile=${prefix_path}/models/caffe_mobilenetv2.caffemodel --outputFile=caffe_mobilenetv2

# tflite
./converter_lite --fmk=TFLITE --modelFile=${prefix_path}/models/tf_mobilenetv1/tf_mobilenetv1.tflite --outputFile=tf_mobilenetv1
./converter_lite --fmk=TFLITE --modelFile=${prefix_path}/models/tf_mobilenetv2/tf_mobilenetv2.tflite --outputFile=tf_mobilenetv2

A successful conversion yields the following four models:

  1. caffe_mobilenetv1.ms
  2. caffe_mobilenetv2.ms
  3. tf_mobilenetv1.ms
  4. tf_mobilenetv2.ms
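
A quick sanity check that the converter actually wrote the outputs (a trivial sketch):

ls -lh caffe_mobilenetv1.ms caffe_mobilenetv2.ms tf_mobilenetv1.ms tf_mobilenetv2.ms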

Building the Android CPU+GPU Libraries

The GPU build includes the CPU operators by default. Run the following commands to build the GPU mindspore lite libraries (the Android NDK setup they require is sketched right after):

bash build.sh -I arm64 -e gpu
bash build.sh -I arm32 -e gpu
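
The build script expects the Android NDK location in the environment before the arm targets will build; a hedged sketch (the r20b version is my assumption from the 1.1-era docs):

# set before invoking build.sh for the arm targets; NDK version is an assumption
export ANDROID_NDK=/path/to/android-ndk-r20b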

When the builds finish, two packages appear in the output directory:

  1. mindspore-lite-1.1.0-inference-android-aarch32.tar.gz
  2. mindspore-lite-1.1.0-inference-android-aarch64.tar.gz
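
Extracting one of them reveals the layout that the push commands below rely on (a sketch; the benchmark/ and lib/ subdirectories match the paths used next):

tar xzf mindspore-lite-1.1.0-inference-android-aarch64.tar.gz
ls mindspore-lite-1.1.0-inference-android-aarch64/
# expect benchmark/benchmark, lib/libmindspore-lite.so, lib/liboptimize.so, lib/libc++_shared.so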

Running the Benchmark

After extracting the two archives, each one contains the benchmark executable and a libmindspore-lite.so. adb push them to the phone, along with the models converted earlier, and run the benchmark:

# $ADB is assumed to expand to the adb binary, e.g. ADB=adb or ADB="adb -s <serial>"
$ADB shell mkdir -p /data/local/tmp/mindspore

# prefix_path here is assumed to point at the extracted inference package
$ADB push ${prefix_path}/benchmark/benchmark /data/local/tmp/mindspore/benchmark
$ADB push ${prefix_path}/lib/libmindspore-lite.so /data/local/tmp/mindspore/libmindspore-lite.so
$ADB push ${prefix_path}/lib/liboptimize.so /data/local/tmp/mindspore/liboptimize.so
$ADB push ${prefix_path}/lib/libc++_shared.so /data/local/tmp/mindspore/libc++_shared.so

$ADB push ./tf_mobilenetv1.ms /data/local/tmp/mindspore/tf_mobilenetv1.ms
$ADB push ./tf_mobilenetv2.ms /data/local/tmp/mindspore/tf_mobilenetv2.ms
$ADB push ./caffe_mobilenetv1.ms /data/local/tmp/mindspore/caffe_mobilenetv1.ms
$ADB push ./caffe_mobilenetv2.ms /data/local/tmp/mindspore/caffe_mobilenetv2.ms

$ADB shell chmod +x /data/local/tmp/mindspore/benchmark
$ADB shell /data/local/tmp/mindspore/benchmark

# https://www.mindspore.cn/lite/tutorial/zh-CN/master/use/evaluating_the_model.html
#$ADB shell "export LD_LIBRARY_PATH=/data/local/tmp/mindspore/; \
#  /data/local/tmp/mindspore/benchmark \
#  --modelPath=<xxx> \
#  --device=<CPU|GPU> \
#  --cpuBindMode=<-1:midcore, 1: bigcore, 0:nobind> \
#  --numThreads=<2> \
#  --loopCount=10 \
#  --warmUpLoopCount=3 \
#  --enableFp16=<false|true>"

$ADB shell "export LD_LIBRARY_PATH=/data/local/tmp/mindspore/; \
  /data/local/tmp/mindspore/benchmark \
  --modelFile=/data/local/tmp/mindspore/tf_mobilenetv1.ms \
  --device=GPU \
  --cpuBindMode=1 \
  --numThreads=1 \
  --loopCount=1000 \
  --warmUpLoopCount=20 \
  --enableFp16=true"
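
For the CPU fp16 row in the results below, the run presumably looked like this (a sketch that mirrors the GPU command above; only --device changes, with the same big-core bind and single thread):

$ADB shell "export LD_LIBRARY_PATH=/data/local/tmp/mindspore/; \
  /data/local/tmp/mindspore/benchmark \
  --modelFile=/data/local/tmp/mindspore/tf_mobilenetv1.ms \
  --device=CPU \
  --cpuBindMode=1 \
  --numThreads=1 \
  --loopCount=1000 \
  --warmUpLoopCount=20 \
  --enableFp16=true"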

Results on a Kirin 820 phone (single thread on a big core; times in ms):

| benchmark / kirin820 (ms) | tf_mobilenetv1 | tf_mobilenetv2 | tf_mobilenetv3_small | tf_mobilenetv3_large | caffe_mobilenetv1 | caffe_mobilenetv2 |
|---|---|---|---|---|---|---|
| armv8-st-bigcore-cpu-fp16 | 21.8 | 16.0 | 6.0 | 15.3 | Inference error -1 | Inference error -1 |
| armv8-st-bigcore-gpu | 11.2 | 10.4 | 7.2 | 12.3 | 13.4 | 15.3 |