yolov5s_android:rocket:

An implementation of yolov5s on Android for the yolov5s export contest.
Download the latest Android APK from the release page and install it on your device.

UPDATE:rocket: 2022/06/25 Added a tutorial on how to integrate models trained with custom data. Custom Model Integration Tutorial

Environment

We use a Docker container for host evaluation and model conversion.

git clone --recursive https://github.com/lp6m/yolov5s_android
cd yolov5s_android
docker build ./ -f ./docker/Dockerfile  -t yolov5s_android
docker run -it --gpus all -v `pwd`:/workspace yolov5s_android bash

Files

Performance

Latency

These results were measured on a Xiaomi Mi 11.
Please refer to benchmark/README.md for details of the benchmark command.
The latency does not include pre/post-processing time or data transfer time.

float32 model

| delegate | 640x640 [ms] | 320x320 [ms] |
|---|---|---|
| None (CPU) | 249 | 61 |
| NNAPI (qti-gpu, fp32) | 156 | 112 |
| NNAPI (qti-gpu, fp16) | 92 | 79 |

int8 model

We tried to accelerate inference by using NNAPI (qti-dsp) to offload computation to the Hexagon DSP, but it does not work for now. Please see here for details.

| delegate | 640x640 [ms] | 320x320 [ms] |
|---|---|---|
| None (CPU) | 95 | 23 |
| NNAPI (qti-default) | Not working | Not working |
| NNAPI (qti-dsp) | Not working | Not working |

Accuracy

Please refer to host/README.md for the evaluation method.
We set conf_thresh=0.25 and iou_thresh=0.45 as the NMS parameters (see the sketch after the table below for how these thresholds are applied).
| device, model, delegate | 640x640 mAP | 320x320 mAP |
|---|---|---|
| host GPU (TFLite + PyTorch, fp32) | 27.8 | 26.6 |
| host CPU (TFLite + PyTorch, int8) | 26.6 | 25.5 |
| NNAPI (qti-gpu, fp16) | 28.5 | 26.8 |
| CPU (int8) | 27.2 | 25.8 |
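
As a rough illustration of how the converted model and these thresholds fit together, the sketch below runs a TFLite model on the host with the TensorFlow Lite Python interpreter, filters predictions by confidence, and applies greedy NMS. The model file name, input resolution, and output layout (N x 85 raw predictions) are assumptions for illustration only; the actual evaluation scripts are described in host/README.md.

```python
# Minimal host-side inference sketch (not the project's evaluation script).
import numpy as np
import tensorflow as tf

CONF_THRESH = 0.25
IOU_THRESH = 0.45

def nms(boxes, scores, iou_thresh):
    """Greedy non-maximum suppression on xyxy boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-9)
        order = order[1:][iou <= iou_thresh]
    return keep

# Load the converted model (file name is an assumption).
interpreter = tf.lite.Interpreter(model_path="yolov5s_fp32_320.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy 320x320 RGB input in [0, 1]; replace with a real preprocessed image.
image = np.random.rand(1, 320, 320, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

# Assume a single merged output of shape (1, N, 85): xywh + objectness + 80 class scores.
pred = interpreter.get_tensor(out["index"])[0]
scores = pred[:, 4] * pred[:, 5:].max(axis=1)
mask = scores > CONF_THRESH
pred, scores = pred[mask], scores[mask]

# Convert xywh (center format) to xyxy before NMS.
boxes = np.stack([pred[:, 0] - pred[:, 2] / 2, pred[:, 1] - pred[:, 3] / 2,
                  pred[:, 0] + pred[:, 2] / 2, pred[:, 1] + pred[:, 3] / 2], axis=1)
keep = nms(boxes, scores, IOU_THRESH)
print(f"{len(keep)} detections after NMS")
```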

Model conversion

This project focuses on obtaining a TFLite model by converting the original PyTorch implementation, rather than re-implementing the model in TFLite.
We convert models in this order: PyTorch -> ONNX -> OpenVINO -> TFLite.
To convert the model from OpenVINO to TFLite, we use openvino2tensorflow. Please refer to convert_model/README.md for details of the model conversion.
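
For orientation, the sketch below shows the first step of such a pipeline (PyTorch -> ONNX) and notes the follow-on command-line tools as comments. The torch.hub loading path, input size, and opset are assumptions for illustration; the exact export settings and tool invocations used by this project are documented in convert_model/README.md.

```python
# Sketch of the PyTorch -> ONNX step (illustrative, not the project's exact script).
import torch

# Load yolov5s without the AutoShape wrapper so it accepts a raw tensor
# (loading via torch.hub is an assumption; the project may load weights differently).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", autoshape=False)
model.eval()

dummy = torch.zeros(1, 3, 320, 320)  # 320x320 variant; use 640x640 for the larger model
torch.onnx.export(
    model,
    dummy,
    "yolov5s.onnx",
    opset_version=12,
    input_names=["images"],
    output_names=["output"],
)

# The remaining steps are command-line tools (flags shown roughly; see
# convert_model/README.md for the exact invocations):
#   ONNX -> OpenVINO IR:  mo --input_model yolov5s.onnx
#   OpenVINO -> TFLite:   openvino2tensorflow --model_path yolov5s.xml \
#                             --output_saved_model --output_no_quant_float32_tflite
```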