open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0
 
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdeploy.readthedocs.io/en/latest/) [![badge](https://github.com/open-mmlab/mmdeploy/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdeploy/actions) [![codecov](https://codecov.io/gh/open-mmlab/mmdeploy/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdeploy) [![license](https://img.shields.io/github/license/open-mmlab/mmdeploy.svg)](https://github.com/open-mmlab/mmdeploy/tree/main/LICENSE) [![issue resolution](https://img.shields.io/github/issues-closed-raw/open-mmlab/mmdeploy)](https://github.com/open-mmlab/mmdeploy/issues) [![open issues](https://img.shields.io/github/issues-raw/open-mmlab/mmdeploy)](https://github.com/open-mmlab/mmdeploy/issues)

English | [简体中文](README_zh-CN.md)

Highlights

MMDeploy 1.x has been released and is adapted to the upstream OpenMMLab 2.0 codebases, so please align versions when using it. The default branch has been switched from master to main. MMDeploy 0.x (master) will be deprecated, and new features will only be added to MMDeploy 1.x (main) in the future.

| mmdeploy | mmengine | mmcv | mmdet | others |
| :------: | :------: | :--: | :---: | :----: |
| 0.x.y | - | \<=1.x.y | \<=2.x.y | 0.x.y |
| 1.x.y | 0.x.y | 2.x.y | 3.x.y | 1.x.y |

deploee offers over 2,300 AI models in ONNX, NCNN, TensorRT, and OpenVINO formats. Featuring a built-in list of real hardware devices, deploee enables users to convert PyTorch models into any target inference format for profiling purposes.

Introduction

MMDeploy is an open-source deep learning model deployment toolset. It is a part of the OpenMMLab project.

Main features

Fully support OpenMMLab models

The currently supported codebases and models are listed in the documentation, and more will be included in the future.
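As a concrete illustration, a supported model can be exported to ONNX with the Python conversion API. This is a minimal sketch assuming MMDeploy 1.x; the model config and checkpoint paths are hypothetical placeholders:

```python
# Minimal model-conversion sketch (MMDeploy 1.x Python API).
# The model config and checkpoint paths are hypothetical placeholders.
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='demo.jpg',                       # sample image used to trace the model
    work_dir='work_dir',                  # output directory for artifacts
    save_file='end2end.onnx',             # name of the exported ONNX file
    deploy_cfg='configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    model_cfg='path/to/model_config.py',  # hypothetical MMDetection config
    model_checkpoint='path/to/checkpoint.pth',  # hypothetical checkpoint
    device='cpu')
```

The deploy config selects the target backend (here onnxruntime with dynamic shapes); swapping in a config for another backend retargets the same model without code changes.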

Multiple inference backends are available

The supported Device / Platform / Inference Backend matrix is shown below, and more combinations will become compatible; a usage sketch follows the matrix.

The benchmark can be found here.

| Device / Platform | Linux | Windows | macOS | Android |
| --- | --- | --- | --- | --- |
| x86_64 CPU | onnxruntime<br>pplnn<br>ncnn<br>LibTorch<br>OpenVINO<br>TVM | onnxruntime<br>OpenVINO<br>ncnn | - | - |
| ARM CPU | ncnn | - | - | ncnn |
| RISC-V | ncnn | - | - | - |
| NVIDIA GPU | onnxruntime<br>TensorRT<br>LibTorch<br>pplnn | onnxruntime<br>TensorRT | - | - |
| NVIDIA Jetson | TensorRT | - | - | - |
| Huawei ascend310 | CANN | - | - | - |
| Rockchip | RKNN | - | - | - |
| Apple M1 | - | - | CoreML | - |
| Adreno GPU | - | - | - | SNPE<br>ncnn |
| Hexagon DSP | - | - | - | SNPE |
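Whichever backend a model was converted for, it can be run through the same high-level inference API. A minimal sketch, assuming MMDeploy 1.x and the ONNX artifact produced by the conversion step above (config paths are hypothetical placeholders):

```python
# Backend-agnostic inference sketch (MMDeploy 1.x Python API).
# Config paths are hypothetical placeholders.
from mmdeploy.apis import inference_model

result = inference_model(
    model_cfg='path/to/model_config.py',      # hypothetical MMDetection config
    deploy_cfg='configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    backend_files=['work_dir/end2end.onnx'],  # artifact from the conversion step
    img='demo.jpg',
    device='cpu')
```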

Efficient and scalable C/C++ SDK Framework

All kinds of modules in the SDK can be extended, such as `Transform` for image processing, `Net` for neural network inference, and `Module` for postprocessing.
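The SDK pipeline can also be driven from Python through the `mmdeploy_runtime` bindings, without writing C/C++. A minimal sketch, assuming a converted SDK model directory at a placeholder path:

```python
# SDK inference sketch via the Python bindings (mmdeploy_runtime package).
# The SDK model directory path is a hypothetical placeholder.
import cv2
from mmdeploy_runtime import Detector

img = cv2.imread('demo.jpg')
detector = Detector(model_path='work_dir/sdk_model',  # hypothetical SDK model dir
                    device_name='cpu', device_id=0)
# One call runs the whole pipeline: Transform -> Net -> Module (postprocess).
bboxes, labels, masks = detector(img)
```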

Documentation

Please read getting_started for the basic usage of MMDeploy. We also provide tutorials in the documentation.

Benchmark and Model zoo

You can find the supported models here and their performance in the benchmark.

Contributing

We appreciate all contributions to MMDeploy. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

We would like to sincerely thank the following teams for their contributions to MMDeploy:

Citation

If you find this project useful in your research, please consider citing:

@misc{mmdeploy,
    title={OpenMMLab's Model Deployment Toolbox.},
    author={MMDeploy Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdeploy}},
    year={2021}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab