
πŸ› Lite.Ai.ToolKit: A lite C++ toolkit of awesome AI models, such as Object Detection, Face Detection, Face Recognition, Segmentation, Matting, etc. See Model Zoo and ONNX Hub, MNN Hub, TNN Hub, NCNN Hub.

News πŸ‘‡πŸ‘‡

Most of my time now is focused on LLM/VLM Inference. Please check πŸ“–Awesome-LLM-Inference , πŸ“–Awesome-SD-Distributed-Inference and πŸ“–CUDA-Learn-Notes for more details.

Features πŸ‘πŸ‘‹

Build πŸ‘‡πŸ‘‡

Download the prebuilt lite.ai.toolkit library from tag/v0.2.0, or just build it from source:

```bash
git clone --depth=1 https://github.com/DefTruth/lite.ai.toolkit.git  # latest
cd lite.ai.toolkit && sh ./build.sh  # >= 0.2.0, supports Linux only, tested on Ubuntu 20.04.6 LTS
```

Quick Start 🌟🌟

Example0: Object Detection using YOLOv5. Download the model from the Model Zoo below.

```c++
#include "lite/lite.h"

int main(int argc, char *argv[]) {
  std::string onnx_path = "yolov5s.onnx";
  std::string test_img_path = "test_yolov5.jpg";
  std::string save_img_path = "test_results.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);
  delete yolov5;
  return 0;
}
```
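
If you prefer RAII over manual `new`/`delete`, the same flow can be wrapped in a smart pointer; a minimal sketch, assuming only the API shown above:

```c++
#include <memory>
#include "lite/lite.h"

int main() {
  // same pipeline as above, but the detector is released automatically.
  auto yolov5 = std::make_unique<lite::cv::detection::YoloV5>("yolov5s.onnx");
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread("test_yolov5.jpg");
  yolov5->detect(img_bgr, detected_boxes);
  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite("test_results.jpg", img_bgr);
  return 0;
}
```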

You can download the prebuilt lite.ai.toolkit library and test resources from tag/v0.2.0.

```bash
export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0
wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg
tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz
```

Quick Setup πŸ‘€

To quickly set up lite.ai.toolkit, you can follow the CMakeLists.txt listed below. πŸ‘‡πŸ‘€

```cmake
set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL)
find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR})
add_executable(lite_yolov5 test_lite_yolov5.cpp)
target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS})
```

Mixed with MNN or ONNXRuntime πŸ‘‡πŸ‘‡

The goal of lite.ai.toolkit is not to build another abstraction layer on top of MNN and ONNXRuntime, so you can use lite.ai.toolkit mixed with MNN (-DENABLE_MNN=ON, default OFF) or ONNXRuntime (-DENABLE_ONNXRUNTIME=ON, default ON). The lite.ai.toolkit installation package contains complete MNN and ONNXRuntime distributions. The workflow may look like:

```c++
#include "lite/lite.h"

// 0. use yolov5 from lite.ai.toolkit to detect objects.
auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
// 1. use ONNXRuntime or MNN to implement your own classifier.
auto interpreter = std::shared_ptr<MNN::Interpreter>(MNN::Interpreter::createFromFile(mnn_path));
// or: auto *session = new Ort::Session(ort_env, onnx_path, session_options);
auto *classifier = interpreter->createSession(schedule_config);
// 2. then, classify the detected objects with your own classifier ...
```

The included headers of MNN and ONNXRuntime can be found at mnn_config.h and ort_config.h.
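
For instance, a detect-then-classify pipeline could be wired up as in the following sketch. It uses only the lite.ai.toolkit calls shown above plus the public MNN C++ API; the model file `your_classifier.mnn`, the crop preprocessing, and the `Boxf` coordinate fields (`x1/y1/x2/y2`, see `lite/types.h`) are illustrative assumptions rather than a shipped example:

```c++
#include <memory>
#include <vector>
#include "lite/lite.h"
#include <MNN/Interpreter.hpp>

int main() {
  // 0. detect objects with lite.ai.toolkit.
  auto *yolov5 = new lite::cv::detection::YoloV5("yolov5s.onnx");
  std::vector<lite::types::Boxf> boxes;
  cv::Mat img_bgr = cv::imread("test_yolov5.jpg");
  yolov5->detect(img_bgr, boxes);

  // 1. build your own classifier with the bundled MNN.
  //    "your_classifier.mnn" is a hypothetical model file.
  auto interpreter = std::shared_ptr<MNN::Interpreter>(
      MNN::Interpreter::createFromFile("your_classifier.mnn"));
  MNN::ScheduleConfig schedule_config;
  MNN::Session *session = interpreter->createSession(schedule_config);

  // 2. classify each detected crop (preprocessing omitted).
  for (const auto &box : boxes) {
    // Boxf is assumed to expose x1/y1/x2/y2 here; check lite/types.h.
    cv::Rect roi(cv::Point((int)box.x1, (int)box.y1),
                 cv::Point((int)box.x2, (int)box.y2));
    roi &= cv::Rect(0, 0, img_bgr.cols, img_bgr.rows);  // clamp to image bounds.
    cv::Mat crop = img_bgr(roi).clone();
    // ... fill interpreter->getSessionInput(session, nullptr) from `crop`,
    // then run interpreter->runSession(session) and read the output tensor.
  }
  delete yolov5;
  return 0;
}
```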

πŸ”‘οΈ Check the detailed Quick Start!Click here! ### Download resources You can download the prebuilt lite.ai.tooklit library and test resources from [tag/v0.2.0](https://github.com/DefTruth/lite.ai.toolkit/releases/tag/v0.2.0). ```bash export LITE_AI_TAG_URL=https://github.com/DefTruth/lite.ai.toolkit/releases/download/v0.2.0 wget ${LITE_AI_TAG_URL}/lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz wget ${LITE_AI_TAG_URL}/yolov5s.onnx && wget ${LITE_AI_TAG_URL}/test_yolov5.jpg tar -zxvf lite-ort1.17.1+ocv4.9.0+ffmpeg4.2.2-linux-x86_64.tgz ``` ### Write test code write YOLOv5 example codes and name it `test_lite_yolov5.cpp`: ```c++ #include "lite/lite.h" int main(int argc, char *argv[]) { std::string onnx_path = "yolov5s.onnx"; std::string test_img_path = "test_yolov5.jpg"; std::string save_img_path = "test_results.jpg"; auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path); std::vector detected_boxes; cv::Mat img_bgr = cv::imread(test_img_path); yolov5->detect(img_bgr, detected_boxes); lite::utils::draw_boxes_inplace(img_bgr, detected_boxes); cv::imwrite(save_img_path, img_bgr); delete yolov5; return 0; } ``` ### Setup CMakeLists.txt ```cmake cmake_minimum_required(VERSION 3.10) project(lite_yolov5) set(CMAKE_CXX_STANDARD 17) set(lite.ai.toolkit_DIR YOUR-PATH-TO-LITE-INSTALL) find_package(lite.ai.toolkit REQUIRED PATHS ${lite.ai.toolkit_DIR}) if (lite.ai.toolkit_Found) message(STATUS "lite.ai.toolkit_INCLUDE_DIRS: ${lite.ai.toolkit_INCLUDE_DIRS}") message(STATUS " lite.ai.toolkit_LIBS: ${lite.ai.toolkit_LIBS}") message(STATUS " lite.ai.toolkit_LIBS_DIRS: ${lite.ai.toolkit_LIBS_DIRS}") endif() add_executable(lite_yolov5 test_lite_yolov5.cpp) target_link_libraries(lite_yolov5 ${lite.ai.toolkit_LIBS}) ``` ### Build example ```bash mkdir build && cd build && cmake .. && make -j1 ``` Then, export the lib paths to `LD_LIBRARY_PATH` which listed by `lite.ai.toolkit_LIBS_DIRS`. ```bash export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/lib:$LD_LIBRARY_PATH export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/opencv/lib:$LD_LIBRARY_PATH export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/onnxruntime/lib:$LD_LIBRARY_PATH export LD_LIBRARY_PATH=YOUR-PATH-TO-LITE-INSTALL/third_party/MNN/lib:$LD_LIBRARY_PATH # if -DENABLE_MNN=ON ``` ### Run binary: ```bash cp ../yolov5s.onnx ../test_yolov.jpg . ./lite_yolov5 ``` The output logs: ```bash LITEORT_DEBUG LogId: ../examples/hub/onnx/cv/yolov5s.onnx =============== Input-Dims ============== Name: images Dims: 1 Dims: 3 Dims: 640 Dims: 640 =============== Output-Dims ============== Output: 0 Name: pred Dim: 0 :1 Output: 0 Name: pred Dim: 1 :25200 Output: 0 Name: pred Dim: 2 :85 Output: 1 Name: output2 Dim: 0 :1 ...... Output: 3 Name: output4 Dim: 1 :3 Output: 3 Name: output4 Dim: 2 :20 Output: 3 Name: output4 Dim: 3 :20 Output: 3 Name: output4 Dim: 4 :85 ======================================== detected num_anchors: 25200 generate_bboxes num: 48 ```

Supported Models Matrix

| Class | Size | Type | Demo | ONNXRuntime | MNN | NCNN | TNN | Linux | MacOS | Windows | Android |
|:-----:|:----:|:----:|:----:|:-----------:|:---:|:----:|:---:|:-----:|:-----:|:-------:|:-------:|
| YoloV5 | 28M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV3 | 236M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV3 | 33M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloV4 | 176M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSD | 76M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSDMobileNetV1 | 27M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloX | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| TinyYoloV4VOC | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| TinyYoloV4COCO | 22M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YoloR | 39M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ScaledYoloV4 | 270M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDet | 15M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD7 | 220M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientDetD8 | 322M | detection | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| YOLOP | 30M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDet | 1.1M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDetPlus | 4.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| NanoDetEffi... | 12M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloX_V_0_1_1 | 3.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV5_V_6_0 | 7.5M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintArcFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GlintCosFace | 92M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GlintPartialFC | 170M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FaceNet | 89M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FocalAsiaArcFace | 166M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCurricularFace | 249M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| TencentCifpFace | 130M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| CenterLossFace | 280M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SphereFace | 80M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| PoseRobustFace | 92M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| NaivePoseRobustFace | 43M | faceid | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileFaceNet | 3.8M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaGhostArcFace | 15M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| CavaCombinedFace | 250M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileSEFocalFace | 4.5M | faceid | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RobustVideoMatting | 14M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MGMatting | 113M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MODNet | 24M | matting | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MODNetDyn | 24M | matting | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| BackgroundMattingV2 | 20M | matting | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| BackgroundMattingV2Dyn | 20M | matting | demo | βœ… | / | / | / | βœ… | βœ”οΈ | βœ”οΈ | / |
| UltraFace | 1.1M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| RetinaFace | 1.6M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceBoxes | 3.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceBoxesV2 | 3.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| SCRFD | 2.5M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YOLO5Face | 4.8M | face::detect | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD | 1.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD98 | 4.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV268 | 9.4M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileNetV2SE68 | 11M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PFLD68 | 2.8M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceLandmark1000 | 2.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet98 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet68 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet29 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PIPNet19 | 44.0M | face::align | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FSANet | 1.2M | face::pose | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| AgeGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| GenderGoogleNet | 23M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EmotionFerPlus | 33M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| VGG16Age | 514M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| VGG16Gender | 512M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SSRNet | 190K | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion7 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| EfficientEmotion8 | 15M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileEmotion7 | 13M | face::attr | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ReXNetEmotion7 | 30M | face::attr | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| EfficientNetLite4 | 49M | classification | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ShuffleNetV2 | 8.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| DenseNet121 | 30.7M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| GhostNet | 20M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HdrDNet | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| IBNNet | 97M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| MobileNetV2 | 13M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| ResNet | 44M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| ResNeXt | 95M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| DeepLabV3ResNet101 | 232M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FCNResNet101 | 207M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| FastStyleTransfer | 6.4M | style | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| Colorizer | 123M | colorization | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | / |
| SubPixelCNN | 234K | resolution | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| InsectDet | 27M | detection | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| InsectID | 22M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PlantID | 30M | classification | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YOLOv5BlazeFace | 3.4M | face::detect | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YoloV5_V_6_1 | 7.5M | detection | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HeadSeg | 31M | segmentation | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FemalePhoto2Cartoon | 15M | style | demo | βœ… | βœ… | / | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FastPortraitSeg | 400k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PortraitSegSINet | 380k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| PortraitSegExtremeC3Net | 180k | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceHairSeg | 18M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| HairSeg | 18M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileHumanMatting | 3M | matting | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| MobileHairSeg | 14M | segmentation | demo | βœ… | βœ… | / | / | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| YOLOv6 | 17M | detection | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceParsingBiSeNet | 50M | segmentation | demo | βœ… | βœ… | βœ… | βœ… | βœ… | βœ”οΈ | βœ”οΈ | ❔ |
| FaceParsingBiSeNetDyn | 50M | segmentation | demo | βœ… | / | / | / | / | βœ”οΈ | βœ”οΈ | ❔ |
πŸ”‘οΈ Model Zoo!Click here! ## Model Zoo.
**Lite.Ai.ToolKit** contains almost **[100+](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md)** AI models with **[500+](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md)** frozen pretrained files now. Most of the files are converted by myself. You can use it through **lite::cv::Type::Class** syntax, such as **[lite::cv::detection::YoloV5](#lite.ai.toolkit-object-detection)**. More details can be found at [Examples for Lite.Ai.ToolKit](#lite.ai.toolkit-Examples-for-Lite.AI.ToolKit). Note, for Google Drive, I can not upload all the *.onnx files because of the storage limitation (15G). | File | Baidu Drive | Google Drive | Docker Hub | Hub (Docs) | |:----:|:-------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:| | ONNX | [Baidu Drive](https://pan.baidu.com/s/1elUGcx7CZkkjEoYhTMwTRQ) code: 8gin | [Google Drive](https://drive.google.com/drive/folders/1p6uBcxGeyS1exc-T61vL8YRhwjYL4iD2?usp=sharing) | [ONNX Docker v0.1.22.01.08 (28G), v0.1.22.02.02 (400M)](https://hub.docker.com/r/qyjdefdocker/lite.ai.toolkit-onnx-hub/tags) | [ONNX Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md) | | MNN | [Baidu Drive](https://pan.baidu.com/s/1KyO-bCYUv6qPq2M8BH_Okg) code: 9v63 | ❔ | [MNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (213M)](https://hub.docker.com/r/qyjdefdocker/lite.ai.toolkit-mnn-hub/tags) | [MNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.mnn.md) | | NCNN | [Baidu Drive](https://pan.baidu.com/s/1hlnqyNsFbMseGFWscgVhgQ) code: sc7f | ❔ | [NCNN Docker v0.1.22.01.08 (9G), v0.1.22.02.02 (197M)](https://hub.docker.com/r/qyjdefdocker/lite.ai.toolkit-ncnn-hub/tags) | [NCNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.ncnn.md) | | TNN | [Baidu Drive](https://pan.baidu.com/s/1lvM2YKyUbEc5HKVtqITpcw) code: 6o6k | ❔ | [TNN Docker v0.1.22.01.08 (11G), v0.1.22.02.02 (217M)](https://hub.docker.com/r/qyjdefdocker/lite.ai.toolkit-tnn-hub/tags) | [TNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.tnn.md) | ```shell docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G) docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G) docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G) docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G) docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.02.02 # (400M) + YOLO5Face docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.02.02 # (213M) + YOLO5Face docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.02.02 # (197M) + YOLO5Face docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.02.02 # (217M) + YOLO5Face ``` ### πŸ”‘οΈ How to download Model Zoo from Docker Hub? * Firstly, pull the image from docker hub. 
```shell docker pull qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 # (11G) docker pull qyjdefdocker/lite.ai.toolkit-ncnn-hub:v0.1.22.01.08 # (9G) docker pull qyjdefdocker/lite.ai.toolkit-tnn-hub:v0.1.22.01.08 # (11G) docker pull qyjdefdocker/lite.ai.toolkit-onnx-hub:v0.1.22.01.08 # (28G) ``` * Secondly, run the container with local `share` dir using `docker run -idt xxx`. A minimum example will show you as follows. * make a `share` dir in your local device. ```shell mkdir share # any name is ok. ``` * write `run_mnn_docker_hub.sh` script like: ```shell #!/bin/bash PORT1=6072 PORT2=6084 SERVICE_DIR=/Users/xxx/Desktop/your-path-to/share CONRAINER_DIR=/home/hub/share CONRAINER_NAME=mnn_docker_hub_d docker run -idt -p ${PORT2}:${PORT1} -v ${SERVICE_DIR}:${CONRAINER_DIR} --shm-size=16gb --name ${CONRAINER_NAME} qyjdefdocker/lite.ai.toolkit-mnn-hub:v0.1.22.01.08 ``` * Finally, copy the model weights from `/home/hub/mnn/cv` to your local `share` dir. ```shell # activate mnn docker. sh ./run_mnn_docker_hub.sh docker exec -it mnn_docker_hub_d /bin/bash # copy the models to the share dir. cd /home/hub cp -rf mnn/cv share/ ``` ### Model Hubs The pretrained and converted ONNX files provide by lite.ai.toolkit are listed as follows. Also, see [Model Zoo](#lite.ai.toolkit-Model-Zoo) and [ONNX Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.onnx.md), [MNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.mnn.md), [TNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.tnn.md), [NCNN Hub](https://github.com/DefTruth/lite.ai.toolkit/tree/main/docs/hub/lite.ai.toolkit.hub.ncnn.md) for more details.
πŸ”‘οΈ More Examples!Click here! ## Examples. More examples can be found at [examples](https://github.com/DefTruth/lite.ai.toolkit/tree/main/examples/lite/cv).
#### Example0: Object Detection using [YOLOv5](https://github.com/ultralytics/yolov5). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/yolov5s.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_yolov5_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_yolov5_1.jpg";

  auto *yolov5 = new lite::cv::detection::YoloV5(onnx_path);
  std::vector<lite::types::Boxf> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  yolov5->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete yolov5;
}
```

The output is:
Or you can use the newest πŸ”₯πŸ”₯ YOLO-series detectors, [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX) or [YoloR](https://github.com/WongKinYiu/yolor); they achieve similar results.

More classes for general object detection (80 classes, COCO):

```c++
auto *detector = new lite::cv::detection::YoloX(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YoloV4(onnx_path);
auto *detector = new lite::cv::detection::YoloV3(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV3(onnx_path);
auto *detector = new lite::cv::detection::SSD(onnx_path);
auto *detector = new lite::cv::detection::YoloV5(onnx_path);
auto *detector = new lite::cv::detection::YoloR(onnx_path);  // Newest YOLO detector !!! 2021-05
auto *detector = new lite::cv::detection::TinyYoloV4VOC(onnx_path);
auto *detector = new lite::cv::detection::TinyYoloV4COCO(onnx_path);
auto *detector = new lite::cv::detection::ScaledYoloV4(onnx_path);
auto *detector = new lite::cv::detection::EfficientDet(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD7(onnx_path);
auto *detector = new lite::cv::detection::EfficientDetD8(onnx_path);
auto *detector = new lite::cv::detection::YOLOP(onnx_path);
auto *detector = new lite::cv::detection::NanoDet(onnx_path);  // Super fast and tiny!
auto *detector = new lite::cv::detection::NanoDetPlus(onnx_path);  // Super fast and tiny! 2021/12/25
auto *detector = new lite::cv::detection::NanoDetEfficientNetLite(onnx_path);  // Super fast and tiny!
auto *detector = new lite::cv::detection::YoloV5_V_6_0(onnx_path);
auto *detector = new lite::cv::detection::YoloV5_V_6_1(onnx_path);
auto *detector = new lite::cv::detection::YoloX_V_0_1_1(onnx_path);  // Newest YOLO detector !!! 2021-07
auto *detector = new lite::cv::detection::YOLOv6(onnx_path);  // Newest 2022 YOLO detector !!!
```

****
#### Example1: Video Matting using [RobustVideoMatting2021πŸ”₯πŸ”₯πŸ”₯](https://github.com/PeterL1n/RobustVideoMatting). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/rvm_mobilenetv3_fp32.onnx";
  std::string video_path = "../../../examples/lite/resources/test_lite_rvm_0.mp4";
  std::string output_path = "../../../examples/logs/test_lite_rvm_0.mp4";
  std::string background_path = "../../../examples/lite/resources/test_lite_matting_bgr.jpg";

  auto *rvm = new lite::cv::matting::RobustVideoMatting(onnx_path, 16);  // 16 threads
  std::vector<lite::types::MattingContent> contents;

  // 1. video matting.
  cv::Mat background = cv::imread(background_path);
  rvm->detect_video(video_path, output_path, contents, false, 0.4f,
                    20, true, true, background);

  delete rvm;
}
```

The output is:
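
As a follow-up, the per-frame results collected in `contents` can be inspected after `detect_video` returns. A hedged fragment (the `MattingContent` field names `flag` and `pha_mat` are assumptions based on the other `lite::types` structs used in these examples):

```c++
// inspect the first frame's alpha matte collected by detect_video;
// field names flag/pha_mat are assumptions, check lite/types.h.
if (!contents.empty() && contents.front().flag) {
  cv::Mat alpha_u8;
  contents.front().pha_mat.convertTo(alpha_u8, CV_8UC1, 255.0);
  cv::imwrite("first_frame_alpha.jpg", alpha_u8);
}
```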

More classes for matting (image matting, video matting, trimap/mask-free, trimap/mask-based):

```c++
auto *matting = new lite::cv::matting::RobustVideoMatting(onnx_path);  // WACV 2022.
auto *matting = new lite::cv::matting::MGMatting(onnx_path);  // CVPR 2021
auto *matting = new lite::cv::matting::MODNet(onnx_path);  // AAAI 2022
auto *matting = new lite::cv::matting::MODNetDyn(onnx_path);  // AAAI 2022 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::BackgroundMattingV2(onnx_path);  // CVPR 2020
auto *matting = new lite::cv::matting::BackgroundMattingV2Dyn(onnx_path);  // CVPR 2020 Dynamic Shape Inference.
auto *matting = new lite::cv::matting::MobileHumanMatting(onnx_path);  // 3Mb only !!!
```

****
#### Example2: 1000 Facial Landmarks Detection using [FaceLandmarks1000](https://github.com/Single430/FaceLandmark1000). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/FaceLandmark1000.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_landmarks_0.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_landmarks_1000.jpg";

  auto *face_landmarks_1000 = new lite::cv::face::align::FaceLandmark1000(onnx_path);

  lite::types::Landmarks landmarks;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_landmarks_1000->detect(img_bgr, landmarks);
  lite::utils::draw_landmarks_inplace(img_bgr, landmarks);
  cv::imwrite(save_img_path, img_bgr);

  delete face_landmarks_1000;
}
```

The output is:
More classes for face alignment (68 points, 98 points, 106 points, 1000 points):

```c++
auto *align = new lite::cv::face::align::PFLD(onnx_path);  // 106 landmarks, 1.0Mb only!
auto *align = new lite::cv::face::align::PFLD98(onnx_path);  // 98 landmarks, 4.8Mb only!
auto *align = new lite::cv::face::align::PFLD68(onnx_path);  // 68 landmarks, 2.8Mb only!
auto *align = new lite::cv::face::align::MobileNetV268(onnx_path);  // 68 landmarks, 9.4Mb only!
auto *align = new lite::cv::face::align::MobileNetV2SE68(onnx_path);  // 68 landmarks, 11Mb only!
auto *align = new lite::cv::face::align::FaceLandmark1000(onnx_path);  // 1000 landmarks, 2.0Mb only!
auto *align = new lite::cv::face::align::PIPNet98(onnx_path);  // 98 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet68(onnx_path);  // 68 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet29(onnx_path);  // 29 landmarks, CVPR2021!
auto *align = new lite::cv::face::align::PIPNet19(onnx_path);  // 19 landmarks, CVPR2021!
```

****
#### Example3: Colorization using [colorization](https://github.com/richzhang/colorization). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/eccv16-colorizer.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_colorizer_1.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_eccv16_colorizer_1.jpg";

  auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);

  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::ColorizeContent colorize_content;
  colorizer->detect(img_bgr, colorize_content);

  if (colorize_content.flag) cv::imwrite(save_img_path, colorize_content.mat);
  delete colorizer;
}
```

The output is:

More classes for colorization (gray to rgb):

```c++
auto *colorizer = new lite::cv::colorization::Colorizer(onnx_path);
```

****
#### Example4: Face Recognition using [ArcFace](https://github.com/deepinsight/insightface/tree/master/recognition/arcface_torch). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/ms1mv3_arcface_r100.onnx";
  std::string test_img_path0 = "../../../examples/lite/resources/test_lite_faceid_0.png";
  std::string test_img_path1 = "../../../examples/lite/resources/test_lite_faceid_1.png";
  std::string test_img_path2 = "../../../examples/lite/resources/test_lite_faceid_2.png";

  auto *glint_arcface = new lite::cv::faceid::GlintArcFace(onnx_path);

  lite::types::FaceContent face_content0, face_content1, face_content2;
  cv::Mat img_bgr0 = cv::imread(test_img_path0);
  cv::Mat img_bgr1 = cv::imread(test_img_path1);
  cv::Mat img_bgr2 = cv::imread(test_img_path2);
  glint_arcface->detect(img_bgr0, face_content0);
  glint_arcface->detect(img_bgr1, face_content1);
  glint_arcface->detect(img_bgr2, face_content2);

  if (face_content0.flag && face_content1.flag && face_content2.flag) {
    float sim01 = lite::utils::math::cosine_similarity(
        face_content0.embedding, face_content1.embedding);
    float sim02 = lite::utils::math::cosine_similarity(
        face_content0.embedding, face_content2.embedding);
    std::cout << "Detected Sim01: " << sim01
              << " Sim02: " << sim02 << std::endl;
  }

  delete glint_arcface;
}
```

The output is:
> Detected Sim01: 0.721159 Sim02: -0.0626267

More classes for face recognition (face id vector extraction):

```c++
auto *recognition = new lite::cv::faceid::GlintCosFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintArcFace(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::GlintPartialFC(onnx_path);  // DeepGlint(insightface)
auto *recognition = new lite::cv::faceid::FaceNet(onnx_path);
auto *recognition = new lite::cv::faceid::FocalArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::FocalAsiaArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::TencentCurricularFace(onnx_path);  // Tencent(TFace)
auto *recognition = new lite::cv::faceid::TencentCifpFace(onnx_path);  // Tencent(TFace)
auto *recognition = new lite::cv::faceid::CenterLossFace(onnx_path);
auto *recognition = new lite::cv::faceid::SphereFace(onnx_path);
auto *recognition = new lite::cv::faceid::PoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::NaivePoseRobustFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileFaceNet(onnx_path);  // 3.8Mb only !
auto *recognition = new lite::cv::faceid::CavaGhostArcFace(onnx_path);
auto *recognition = new lite::cv::faceid::CavaCombinedFace(onnx_path);
auto *recognition = new lite::cv::faceid::MobileSEFocalFace(onnx_path);  // 4.5Mb only !
```

****
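For reference, `lite::utils::math::cosine_similarity` computes the standard cosine similarity between two embeddings; a minimal standalone sketch (not the library's actual source):

```c++
#include <algorithm>
#include <cmath>
#include <vector>

// standard cosine similarity between two embeddings: dot(a,b) / (|a| * |b|).
static float cosine_similarity(const std::vector<float> &a,
                               const std::vector<float> &b) {
  float dot = 0.f, norm_a = 0.f, norm_b = 0.f;
  const size_t n = std::min(a.size(), b.size());
  for (size_t i = 0; i < n; ++i) {
    dot += a[i] * b[i];
    norm_a += a[i] * a[i];
    norm_b += b[i] * b[i];
  }
  // small epsilon guards against division by zero for degenerate inputs.
  return dot / (std::sqrt(norm_a) * std::sqrt(norm_b) + 1e-12f);
}
```

Scores near 1 indicate the same identity, which matches the output above: Sim01 β‰ˆ 0.72 for the same person versus Sim02 β‰ˆ -0.06 for different people.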
#### Example5: Face Detection using [SCRFD 2021](https://github.com/deepinsight/insightface/blob/master/detection/scrfd/). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/scrfd_2.5g_bnkps_shape640x640.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_detector.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_scrfd.jpg";

  auto *scrfd = new lite::cv::face::detect::SCRFD(onnx_path);

  std::vector<lite::types::BoxfWithLandmarks> detected_boxes;
  cv::Mat img_bgr = cv::imread(test_img_path);
  scrfd->detect(img_bgr, detected_boxes);

  lite::utils::draw_boxes_with_landmarks_inplace(img_bgr, detected_boxes);
  cv::imwrite(save_img_path, img_bgr);

  delete scrfd;
}
```

The output is:
More classes for face detection (super fast face detection):

```c++
auto *detector = new lite::face::detect::UltraFace(onnx_path);  // 1.1Mb only !
auto *detector = new lite::face::detect::FaceBoxes(onnx_path);  // 3.8Mb only !
auto *detector = new lite::face::detect::FaceBoxesV2(onnx_path);  // 4.0Mb only !
auto *detector = new lite::face::detect::RetinaFace(onnx_path);  // 1.6Mb only ! CVPR2020
auto *detector = new lite::face::detect::SCRFD(onnx_path);  // 2.5Mb only ! CVPR2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLO5Face(onnx_path);  // 2021, Super fast and accurate!!
auto *detector = new lite::face::detect::YOLOv5BlazeFace(onnx_path);  // 2021, Super fast and accurate!!
```

****
#### Example6: Object Segmentation using [DeepLabV3ResNet101](https://pytorch.org/hub/pytorch_vision_deeplabv3_resnet101/). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/deeplabv3_resnet101_coco.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_deeplabv3_resnet101.png";
  std::string save_img_path = "../../../examples/logs/test_lite_deeplabv3_resnet101.jpg";

  auto *deeplabv3_resnet101 = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path, 16);  // 16 threads

  lite::types::SegmentContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  deeplabv3_resnet101->detect(img_bgr, content);

  if (content.flag) {
    cv::Mat out_img;
    cv::addWeighted(img_bgr, 0.2, content.color_mat, 0.8, 0., out_img);
    cv::imwrite(save_img_path, out_img);
    if (!content.names_map.empty()) {
      for (auto it = content.names_map.begin(); it != content.names_map.end(); ++it) {
        std::cout << it->first << " Name: " << it->second << std::endl;
      }
    }
  }
  delete deeplabv3_resnet101;
}
```

The output is:
More classes for object segmentation (general object segmentation):

```c++
auto *segment = new lite::cv::segmentation::FCNResNet101(onnx_path);
auto *segment = new lite::cv::segmentation::DeepLabV3ResNet101(onnx_path);
```

****
#### Example7: Age Estimation using [SSRNet](https://github.com/oukohou/SSR_Net_Pytorch). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/ssrnet.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_ssrnet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_ssrnet.jpg";

  auto *ssrnet = new lite::cv::face::attr::SSRNet(onnx_path);

  lite::types::Age age;
  cv::Mat img_bgr = cv::imread(test_img_path);
  ssrnet->detect(img_bgr, age);
  lite::utils::draw_age_inplace(img_bgr, age);
  cv::imwrite(save_img_path, img_bgr);

  delete ssrnet;
}
```

The output is:
More classes for face attribute analysis (age, gender, emotion):

```c++
auto *attribute = new lite::cv::face::attr::AgeGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::GenderGoogleNet(onnx_path);
auto *attribute = new lite::cv::face::attr::EmotionFerPlus(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Age(onnx_path);
auto *attribute = new lite::cv::face::attr::VGG16Gender(onnx_path);
auto *attribute = new lite::cv::face::attr::EfficientEmotion7(onnx_path);  // 7 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::EfficientEmotion8(onnx_path);  // 8 emotions, 15Mb only!
auto *attribute = new lite::cv::face::attr::MobileEmotion7(onnx_path);  // 7 emotions, 13Mb only!
auto *attribute = new lite::cv::face::attr::ReXNetEmotion7(onnx_path);  // 7 emotions
auto *attribute = new lite::cv::face::attr::SSRNet(onnx_path);  // age estimation, 190kb only!!!
```

****
#### Example8: 1000 Classes Classification using [DenseNet](https://pytorch.org/hub/pytorch_vision_densenet/). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/densenet121.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_densenet.jpg";

  auto *densenet = new lite::cv::classification::DenseNet(onnx_path);

  lite::types::ImageNetContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  densenet->detect(img_bgr, content);
  if (content.flag) {
    const unsigned int top_k = content.scores.size();
    if (top_k > 0) {
      for (unsigned int i = 0; i < top_k; ++i)
        std::cout << i + 1
                  << ": " << content.labels.at(i)
                  << ": " << content.texts.at(i)
                  << ": " << content.scores.at(i)
                  << std::endl;
    }
  }
  delete densenet;
}
```

The output is:
More classes for image classification (1000 classes):

```c++
auto *classifier = new lite::cv::classification::EfficientNetLite4(onnx_path);
auto *classifier = new lite::cv::classification::ShuffleNetV2(onnx_path);  // 8.7Mb only!
auto *classifier = new lite::cv::classification::GhostNet(onnx_path);
auto *classifier = new lite::cv::classification::HdrDNet(onnx_path);
auto *classifier = new lite::cv::classification::IBNNet(onnx_path);
auto *classifier = new lite::cv::classification::MobileNetV2(onnx_path);  // 13Mb only!
auto *classifier = new lite::cv::classification::ResNet(onnx_path);
auto *classifier = new lite::cv::classification::ResNeXt(onnx_path);
```

****
#### Example9: Head Pose Estimation using [FSANet](https://github.com/omasaht/headpose-fsanet-pytorch). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/fsanet-var.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fsanet.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fsanet.jpg";

  auto *fsanet = new lite::cv::face::pose::FSANet(onnx_path);
  cv::Mat img_bgr = cv::imread(test_img_path);
  lite::types::EulerAngles euler_angles;
  fsanet->detect(img_bgr, euler_angles);

  if (euler_angles.flag) {
    lite::utils::draw_axis_inplace(img_bgr, euler_angles);
    cv::imwrite(save_img_path, img_bgr);
    std::cout << "yaw:" << euler_angles.yaw
              << " pitch:" << euler_angles.pitch
              << " roll:" << euler_angles.roll << std::endl;
  }
  delete fsanet;
}
```

The output is:
More classes for head pose estimation (Euler angles: yaw, pitch, roll):

```c++
auto *pose = new lite::cv::face::pose::FSANet(onnx_path);  // 1.2Mb only!
```

****
#### Example10: Style Transfer using [FastStyleTransfer](https://github.com/onnx/models/tree/master/vision/style_transfer/fast_neural_style). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/style-candy-8.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_fast_style_transfer.jpg";
  std::string save_img_path = "../../../examples/logs/test_lite_fast_style_transfer_candy.jpg";

  auto *fast_style_transfer = new lite::cv::style::FastStyleTransfer(onnx_path);

  lite::types::StyleContent style_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  fast_style_transfer->detect(img_bgr, style_content);

  if (style_content.flag) cv::imwrite(save_img_path, style_content.mat);
  delete fast_style_transfer;
}
```

The output is:

More classes for style transfer (neural style transfer, others):

```c++
auto *transfer = new lite::cv::style::FastStyleTransfer(onnx_path);  // 6.4Mb only
```

****

#### Example11: Human Head Segmentation using [HeadSeg](https://github.com/minivision-ai/photo2cartoon). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_head_seg.png";
  std::string save_img_path = "../../../examples/logs/test_lite_head_seg.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(onnx_path, 4);  // 4 threads

  lite::types::HeadSegContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, content);
  if (content.flag) cv::imwrite(save_img_path, content.mask * 255.f);

  delete head_seg;
}
```

The output is:
More classes for human segmentation (head, portrait, hair, others):

```c++
auto *segment = new lite::cv::segmentation::HeadSeg(onnx_path);  // 31Mb
auto *segment = new lite::cv::segmentation::FastPortraitSeg(onnx_path);  // <= 400Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegSINet(onnx_path);  // <= 380Kb !!!
auto *segment = new lite::cv::segmentation::PortraitSegExtremeC3Net(onnx_path);  // <= 180Kb !!! Extreme Tiny !!!
auto *segment = new lite::cv::segmentation::FaceHairSeg(onnx_path);  // 18M
auto *segment = new lite::cv::segmentation::HairSeg(onnx_path);  // 18M
auto *segment = new lite::cv::segmentation::MobileHairSeg(onnx_path);  // 14M
```

****

#### Example12: Photo to Cartoon Style Transfer using [Photo2Cartoon](https://github.com/minivision-ai/photo2cartoon). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string head_seg_onnx_path = "../../../examples/hub/onnx/cv/minivision_head_seg.onnx";
  std::string cartoon_onnx_path = "../../../examples/hub/onnx/cv/minivision_female_photo2cartoon.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_female_photo2cartoon.jpg";
  std::string save_mask_path = "../../../examples/logs/test_lite_female_photo2cartoon_seg.jpg";
  std::string save_cartoon_path = "../../../examples/logs/test_lite_female_photo2cartoon_cartoon.jpg";

  auto *head_seg = new lite::cv::segmentation::HeadSeg(head_seg_onnx_path, 4);  // 4 threads
  auto *female_photo2cartoon = new lite::cv::style::FemalePhoto2Cartoon(cartoon_onnx_path, 4);  // 4 threads

  lite::types::HeadSegContent head_seg_content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  head_seg->detect(img_bgr, head_seg_content);

  if (head_seg_content.flag && !head_seg_content.mask.empty()) {
    cv::imwrite(save_mask_path, head_seg_content.mask * 255.f);
    // Female Photo2Cartoon Style Transfer
    lite::types::FemalePhoto2CartoonContent female_cartoon_content;
    female_photo2cartoon->detect(img_bgr, head_seg_content.mask, female_cartoon_content);
    if (female_cartoon_content.flag && !female_cartoon_content.cartoon.empty())
      cv::imwrite(save_cartoon_path, female_cartoon_content.cartoon);
  }

  delete head_seg;
  delete female_photo2cartoon;
}
```

The output is:
More classes for photo style transfer:

```c++
auto *transfer = new lite::cv::style::FemalePhoto2Cartoon(onnx_path);
```

****

#### Example13: Face Parsing using [FaceParsing](https://github.com/zllrunning/face-parsing.PyTorch). Download model from Model-Zoo[2](#lite.ai.toolkit-2).

```c++
#include "lite/lite.h"

static void test_default() {
  std::string onnx_path = "../../../examples/hub/onnx/cv/face_parsing_512x512.onnx";
  std::string test_img_path = "../../../examples/lite/resources/test_lite_face_parsing.png";
  std::string save_img_path = "../../../examples/logs/test_lite_face_parsing_bisenet.jpg";

  auto *face_parsing_bisenet = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path, 8);  // 8 threads

  lite::types::FaceParsingContent content;
  cv::Mat img_bgr = cv::imread(test_img_path);
  face_parsing_bisenet->detect(img_bgr, content);

  if (content.flag && !content.merge.empty())
    cv::imwrite(save_img_path, content.merge);

  delete face_parsing_bisenet;
}
```

The output is:
More classes for face parsing (hair, eyes, nose, mouth, others):

```c++
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNet(onnx_path);  // 50Mb
auto *segment = new lite::cv::segmentation::FaceParsingBiSeNetDyn(onnx_path);  // Dynamic Shape Inference.
```

Citations πŸŽ‰πŸŽ‰

```bibtex
@misc{lite.ai.toolkit@2021,
  title={lite.ai.toolkit: A lite C++ toolkit of awesome AI models.},
  url={https://github.com/DefTruth/lite.ai.toolkit},
  note={Open-source software available at https://github.com/DefTruth/lite.ai.toolkit},
  author={DefTruth and wangzijian1010 and others},
  year={2021}
}
```