./main [input]
- input = blank
    - use the default image file set in source code (main.cpp)
    - e.g. ./main
- input = *.mp4, *.avi, *.webm
    - use video file
    - e.g. ./main test.mp4
- input = *.jpg, *.png, *.bmp
    - use image file
    - e.g. ./main test.jpg
- input = number (e.g. 0, 1, 2, ...)
    - use camera
    - e.g. ./main 0
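The dispatch described above might look roughly like the sketch below. This is not the project's actual main.cpp: OpenCV is assumed for capture, and "default.jpg" is only a placeholder for the image path hard-coded in the real source.

```cpp
// Sketch only: assumes OpenCV; "default.jpg" stands in for the image path
// that is actually hard-coded in main.cpp.
#include <cctype>
#include <string>
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[]) {
    const std::string input = (argc > 1) ? argv[1] : "";
    cv::VideoCapture cap;
    if (input.empty()) {
        cap.open("default.jpg");                              // blank -> default image (placeholder name)
    } else if (std::isdigit(static_cast<unsigned char>(input[0]))) {
        cap.open(std::stoi(input));                           // number -> camera index
    } else {
        cap.open(input);                                      // video or image file path
    }
    for (cv::Mat frame; cap.read(frame);) {
        cv::imshow("main", frame);                            // inference and drawing would happen here
        if (cv::waitKey(1) == 27) break;                      // ESC to quit
    }
    return 0;
}
```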
git clone https://github.com/iwatake2222/play_with_tflite.git
cd play_with_tflite
git submodule update --init
sh InferenceHelper/third_party/download_prebuilt_libraries.sh
sh ./download_resource.sh
cd pj_tflite_cls_mobilenet_v2 # for example
mkdir -p build && cd build
cmake ..
make
./main
- Configure and generate the Visual Studio project with cmake-gui
    - Where is the source code : path-to-play_with_tflite/pj_tflite_cls_mobilenet_v2 (for example)
    - Where to build the binaries : path-to-build (any)
- Open main.sln
- Set the main project as the startup project, then build and run!
- On Android, copy the resource directory to /storage/emulated/0/Android/data/com.iwatake.viewandroidtflite/files/Documents/resource
- Modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select an image processor you want to use
    - set(ImageProcessor_DIR "${CMAKE_CURRENT_LIST_DIR}/../../../../../pj_tflite_cls_mobilenet_v2/image_processor")
    - Change pj_tflite_cls_mobilenet_v2 to another project name
- By default, InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK is used. You can modify ViewAndroid\app\src\main\cpp\CMakeLists.txt to select which delegate to use. It's better to use InferenceHelper::TENSORFLOW_LITE_GPU to get high performance.
    - You also need to select the framework when calling InferenceHelper::create.

The delegate is selected at build time with the following cmake options:

# Edge TPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=on -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
cp libedgetpu.so.1.0 libedgetpu.so.1
#export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`pwd`
sudo LD_LIBRARY_PATH=./ ./main
# you may get "Segmentation fault (core dumped)" without sudo
# GPU
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=on -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off
# you may need `sudo apt install ocl-icd-opencl-dev` or `sudo apt install libgles2-mesa-dev`
# XNNPACK
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=on
# NNAPI (Note: NNAPI is used on Android, so modify CMakeLists.txt in the Android Studio project rather than running the following command)
cmake .. -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_EDGETPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_GPU=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_XNNPACK=off -DINFERENCE_HELPER_ENABLE_TFLITE_DELEGATE_NNAPI=on
You also need to select the framework when calling InferenceHelper::create.
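For reference, the selection in code might look like the sketch below. Only the two enum constants named in this README are used; the header name and the exact factory signature are assumptions, so check the InferenceHelper module for the real definitions.

```cpp
// Sketch only: header path and factory signature are assumptions;
// the enum constants are the ones named in this README.
#include <memory>
#include "inference_helper.h"   // assumed header name from the InferenceHelper module

std::unique_ptr<InferenceHelper> MakeHelper() {
    // Pick the constant that matches the delegate enabled in cmake
    return std::unique_ptr<InferenceHelper>(
        InferenceHelper::create(InferenceHelper::TENSORFLOW_LITE_DELEGATE_XNNPACK));
    // e.g. InferenceHelper::TENSORFLOW_LITE_GPU when the GPU delegate is enabled
}
```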
If you have already installed edgetpu_runtime_20210726.zip on Windows, uninstall it. Also uninstall UsbDk Runtime Libraries from Windows. Then install edgetpu_runtime_20210119.zip. It's better to delete C:\Windows\System32\edgetpu.dll to ensure the program uses our pre-built library.
You may need something like the following commands to run the app:
cp libedgetpu.so.1.0 libedgetpu.so.1
sudo LD_LIBRARY_PATH=./ ./main
# You may also need the following commands beforehand so that you can run an X app with sudo via SSH
touch ~/.Xauthority
xauth generate :0 . trusted
xauth nlist $DISPLAY | sudo xauth nmerge -
By default, NNAPI will select the most appropriate accelerator for the model. You can specify which accelerator to use yourself by modifying the following code in InferenceHelperTensorflowLite.cpp:
// options.accelerator_name = "qti-default";
// options.accelerator_name = "qti-dsp";
// options.accelerator_name = "qti-gpu";
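For context, these options correspond to TensorFlow Lite's NNAPI delegate, and the "qti-*" names target Qualcomm SoCs (availability depends on the device). A standalone sketch of forcing a specific accelerator with the standard tflite::StatefulNnApiDelegate API (not the project's wrapper code) could look like this:

```cpp
// Sketch using TensorFlow Lite's standard NNAPI delegate API; the project
// applies the same options inside InferenceHelperTensorflowLite.cpp.
#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"

void ApplyNnapiDelegate(tflite::Interpreter* interpreter) {
    tflite::StatefulNnApiDelegate::Options options;
    options.accelerator_name = "qti-dsp";   // force the DSP instead of automatic selection
    // The delegate must stay alive while the interpreter is used; static for brevity.
    static tflite::StatefulNnApiDelegate delegate(options);
    interpreter->ModifyGraphWithDelegate(&delegate);
}
```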
Please find model_information.md in resource.zip