An implementation of YOLOv5s on Android for the yolov5s export contest.
Download the latest Android APK from the release page and install it on your device.
UPDATE :rocket: 2022/06/25: Added a tutorial on how to integrate models trained with custom data: Custom Model Integration Tutorial
We use a Docker container for host evaluation and model conversion.
```sh
git clone --recursive https://github.com/lp6m/yolov5s_android
cd yolov5s_android
docker build ./ -f ./docker/Dockerfile -t yolov5s_android
docker run -it --gpus all -v `pwd`:/workspace yolov5s_android bash
```
- `./app`: Android application. To build the app yourself, copy `./tflite_model/*.tflite` to `app/tflite_yolov5_test/app/src/main/assets/` and build on Android Studio.
- `./benchmark`
- `./convert_model`
- `./docker`
- `./host`
  - `detect.py`: Runs detection on an image with a TfLite model in the host environment.
  - `evaluate.py`: Runs evaluation with the COCO validation dataset and inference results.
- `./tflite_model`
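As background for what a detection script like `detect.py` does after inference, YOLO-style model outputs in `(cx, cy, w, h, conf)` form are typically converted to corner-coordinate boxes and filtered by confidence. The sketch below is a generic illustration, not the repository's actual post-processing code:

```python
def decode_predictions(preds, conf_thresh=0.25):
    """Convert YOLO-style (cx, cy, w, h, conf) rows to (x1, y1, x2, y2, conf),
    keeping only rows at or above the confidence threshold.
    Generic sketch -- not this repository's exact post-processing code."""
    boxes = []
    for cx, cy, w, h, conf in preds:
        if conf < conf_thresh:
            continue
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, conf))
    return boxes

# Example: one confident box, one below the threshold.
preds = [(100, 100, 40, 40, 0.9), (10, 10, 4, 4, 0.1)]
print(decode_predictions(preds))  # only the first box survives
```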
These results were measured on a Xiaomi Mi11.
Please refer to benchmark/README.md for details of the benchmark commands.
The latency does not include pre/post-processing time or data transfer time.
| delegate | 640x640 [ms] | 320x320 [ms] |
| --- | --- | --- |
| None (CPU) | 249 | 61 |
| NNAPI (qti-gpu, fp32) | 156 | 112 |
| NNAPI (qti-gpu, fp16) | 92 | 79 |
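The pre-processing excluded from these latencies typically includes letterbox resizing of the camera frame to the 640x640 or 320x320 model input. A minimal sketch of that scale/pad arithmetic (an illustration, not the app's actual pre-processing code):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Compute the scale and symmetric padding that fit a src_w x src_h image
    into a dst x dst model input while preserving aspect ratio.
    Illustration only -- not the app's exact pre-processing code."""
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2
    pad_y = (dst - new_h) // 2
    return scale, new_w, new_h, pad_x, pad_y

# e.g. a 1920x1080 frame resized into a 640x640 input
print(letterbox_params(1920, 1080, 640))
```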
We tried to accelerate inference by using NNAPI (qti-dsp)
to offload computation to the Hexagon DSP, but it does not work for now. Please see here for details.
| delegate | 640x640 [ms] | 320x320 [ms] |
| --- | --- | --- |
| None (CPU) | 95 | 23 |
| NNAPI (qti-default) | Not working | Not working |
| NNAPI (qti-dsp) | Not working | Not working |
Please refer to host/README.md for the evaluation method. We set conf_thresh=0.25 and iou_thresh=0.45 as the NMS parameters.
| device, model, delegate | 640x640 mAP | 320x320 mAP |
| --- | --- | --- |
| host GPU (TfLite + PyTorch, fp32) | 27.8 | 26.6 |
| host CPU (TfLite + PyTorch, int8) | 26.6 | 25.5 |
| NNAPI (qti-gpu, fp16) | 28.5 | 26.8 |
| CPU (int8) | 27.2 | 25.8 |
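The NMS settings used in this evaluation (conf_thresh=0.25, iou_thresh=0.45) can be illustrated with a minimal greedy NMS sketch in pure Python. This is a generic illustration of the technique, not the app's actual implementation:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(dets, conf_thresh=0.25, iou_thresh=0.45):
    """Greedy NMS over (x1, y1, x2, y2, score) detections, using the
    thresholds quoted above. Generic sketch, not the app's exact code."""
    # Drop low-confidence boxes, then visit the rest in descending score order.
    dets = sorted((d for d in dets if d[4] >= conf_thresh),
                  key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        # Keep a box only if it does not overlap a kept box too strongly.
        if all(iou(d, k) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```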
This project focuses on obtaining a TfLite model by converting the original PyTorch implementation, rather than re-implementing the model in TfLite ourselves.
We convert models in this way: PyTorch -> ONNX -> OpenVINO -> TfLite.
To convert the model from OpenVINO to TfLite, we use openvino2tensorflow.
Please refer to convert_model/README.md for details of the model conversion.
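As a rough sketch of that pipeline, the steps might look like the commands below. The flag names, script names, and paths are illustrative assumptions and vary by tool version; follow convert_model/README.md for the authoritative commands:

```shell
# Sketch of the PyTorch -> ONNX -> OpenVINO -> TfLite pipeline.
# All flags and paths here are illustrative, not the repo's exact commands.

# 1. PyTorch -> ONNX (yolov5's own exporter)
python export.py --weights yolov5s.pt --include onnx

# 2. ONNX -> OpenVINO IR (OpenVINO Model Optimizer)
mo --input_model yolov5s.onnx --output_dir openvino_model

# 3. OpenVINO IR -> TfLite (openvino2tensorflow)
openvino2tensorflow --model_path openvino_model/yolov5s.xml \
  --output_saved_model --output_no_quant_float32_tflite
```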