
# MMDet to TensorRT

> [!NOTE]
>
> The main branch supports model conversion for MMDetection>=3.0. If you want to convert a model trained with an older MMDetection, please switch to the corresponding branch.

## News

## Introduction

This project aims to support End2End deployment of models in MMDetection with TensorRT.

Mask support is experimental.

Features:

## Requirement

## Installation

### Host

```bash
git clone https://github.com/grimoire/mmdetection-to-tensorrt.git
cd mmdetection-to-tensorrt
pip install -e .
```
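
A quick way to verify the installation is to import the package and its main dependency (a minimal sanity check; it only confirms that the packages import):

```python
# sanity check: these imports should succeed after installation
import tensorrt   # TensorRT Python bindings
import mmdet2trt  # this project

print(tensorrt.__version__)
```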

### Docker

Build the docker image:

```bash
sudo docker build -t mmdet2trt_docker:v1.0 docker/
```

Run the container (this will show the help of the CLI entrypoint):

```bash
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0
```

Or, if you want to open a terminal inside the container:

```bash
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} --entrypoint bash mmdet2trt_docker:v1.0
```

Example conversion:

```bash
sudo docker run --gpus all -it --rm -v ${your_data_path}:${bind_path} mmdet2trt_docker:v1.0 ${bind_path}/config.py ${bind_path}/checkpoint.pth ${bind_path}/output.trt
```

## Usage

Create a TensorRT model from an mmdet model. Details can be found in getting_started.md.

### CLI

```bash
# the conversion might take a few minutes.
mmdet2trt ${CONFIG_PATH} ${CHECKPOINT_PATH} ${OUTPUT_PATH}
```

Run `mmdet2trt -h` for help on optional arguments.

### Python

```python
import torch

from mmdet2trt import mmdet2trt

# dynamic input shape ranges (NCHW): min / optimal / max
shape_ranges = dict(
    x=dict(
        min=[1, 3, 320, 320],
        opt=[1, 3, 800, 1344],
        max=[1, 3, 1344, 1344],
    )
)

# cfg_path and weight_path point to the mmdet config and checkpoint
trt_model = mmdet2trt(cfg_path,
                      weight_path,
                      shape_ranges=shape_ranges,
                      fp16_mode=True)

# save the converted model
torch.save(trt_model.state_dict(), save_model_path)

# save the engine if you want to use it with the C++ API
with open(save_engine_path, mode='wb') as f:
    f.write(trt_model.state_dict()['engine'])
```
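
To load the saved model back later, you can restore the state dict into a TRTModule (a sketch assuming the TRTModule API of torch2trt_dynamic, which this project builds on):

```python
import torch
from torch2trt_dynamic import TRTModule

# rebuild the wrapper module and restore the serialized engine
trt_model = TRTModule()
trt_model.load_state_dict(torch.load(save_model_path))
```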

> [!NOTE]
>
> The input of the engine is the preprocessed image tensor. The outputs of the engine are num_dets, bboxes, scores and class_ids. If you enable the enable_mask flag, there will be an additional mask output. Note that the bboxes output of the engine is not divided by scale_factor.
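
Because of this, boxes produced by the raw engine live in the network input scale. A minimal sketch of mapping them back to the original image (batch_input and scale_factor are illustrative names, not part of the API):

```python
# raw engine outputs; bboxes are in the network-input scale
num_dets, bboxes, scores, class_ids = trt_model(batch_input)

# undo the resize applied during preprocessing
bboxes = bboxes / bboxes.new_tensor(scale_factor)
```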

How to perform inference with the converted model:

```python
from mmdet.apis import inference_detector

from mmdet2trt.apis import create_wrap_detector

# create a wrapped detector (device_id is the GPU to run on, e.g. 0)
trt_detector = create_wrap_detector(trt_model, cfg_path, device_id)

# results share the same format as mmdetection
result = inference_detector(trt_detector, image_path)
```
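
For MMDetection>=3.0, inference_detector returns a DetDataSample; a minimal sketch of reading the predictions (the field names follow the MMDetection 3.x API):

```python
# predictions live in result.pred_instances (MMDetection 3.x)
pred = result.pred_instances
bboxes = pred.bboxes  # (N, 4) boxes in xyxy format
scores = pred.scores  # (N,) confidence scores
labels = pred.labels  # (N,) class indices
```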

Try the demo in demo/inference.py, or demo/cpp if you want to do inference with the C++ API.

Read getting_started.md for more details.

## How does it work?

Most other projects use the PyTorch => ONNX => TensorRT route. This repo converts PyTorch => TensorRT directly, avoiding the unnecessary ONNX intermediate representation. Read how-does-it-work for details.
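
Conceptually, the direct conversion registers a TensorRT converter for each PyTorch op and builds the network while tracing the model. The sketch below is a simplified illustration in the style of torch2trt_dynamic (the project's tracing backend); it is not the project's actual code, and convert_relu is a hypothetical example:

```python
import tensorrt as trt
from torch2trt_dynamic import tensorrt_converter

@tensorrt_converter('torch.nn.functional.relu')
def convert_relu(ctx):
    # the traced call's input tensor already has a TensorRT counterpart
    input_trt = ctx.method_args[0]._trt
    # emit the equivalent TensorRT layer into the network being built
    layer = ctx.network.add_activation(input_trt, trt.ActivationType.RELU)
    # attach the layer output so downstream ops can consume it
    ctx.method_return._trt = layer.get_output(0)
```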

## Supported Models/Modules

> [!NOTE]
>
> Some models have only been tested on MMDet<3.0. If you find a model that fails to convert, please report it in an issue.

Tested on:

## FAQ

Read this page if you encounter any problems.

## License

This project is released under the Apache 2.0 license.