vedadet

A single-stage object detection toolbox based on PyTorch.

Introduction

vedadet is a single-stage object detection toolbox based on PyTorch.

Features

License

This project is released under the Apache 2.0 license.

Installation

Requirements

We have tested the following OS and software versions:

Install vedadet

a. Create a conda virtual environment and activate it.

conda create -n vedadet python=3.8.5 -y
conda activate vedadet

b. Install PyTorch and torchvision following the official instructions, e.g.,

conda install pytorch torchvision -c pytorch
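
If you need a build for a specific CUDA version, you can pin it explicitly. The cudatoolkit version below is only an example; match it to your driver and environment:

# example only: pick the cudatoolkit release that matches your driver
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch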

c. Clone the vedadet repository.

git clone https://github.com/Media-Smart/vedadet.git
cd vedadet
vedadet_root=${PWD}

d. Install vedadet.

pip install -r requirements/build.txt
pip install -v -e .
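
As a quick sanity check (not part of the original instructions), you can verify that the package imports from the activated environment:

python -c "import vedadet"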

Train

a. Config

Modify the configuration as needed in a config file such as configs/trainval/retinanet/retinanet.py
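
One optional workflow (not required by the toolbox) is to copy the sample config and edit the copy, so the shipped configs stay unchanged; the file name below is only illustrative:

# illustrative file name; any copy of the sample config works
cp configs/trainval/retinanet/retinanet.py configs/trainval/retinanet/my_retinanet.py

The copied config can then be passed to the training commands below in place of the original.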

b. Multi-GPUs training

tools/dist_trainval.sh configs/trainval/retinanet/retinanet.py "0,1"
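
The second argument is the list of GPU ids to use. For example, assuming the same script interface, training on four GPUs would look like:

tools/dist_trainval.sh configs/trainval/retinanet/retinanet.py "0,1,2,3"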

c. Single GPU training

CUDA_VISIBLE_DEVICES="0" python tools/trainval.py configs/trainval/retinanet/retinanet.py

Test

a. Config

Modify the configuration as needed in a config file such as configs/trainval/retinanet/retinanet.py

b. Test

CUDA_VISIBLE_DEVICES="0" python tools/test.py configs/trainval/retinanet/retinanet.py weight_path
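
Here weight_path is the path to a trained checkpoint. For example (the checkpoint path below is hypothetical; use the one produced by your training run):

# hypothetical checkpoint path
CUDA_VISIBLE_DEVICES="0" python tools/test.py configs/trainval/retinanet/retinanet.py workdir/retinanet/epoch_100_weights.pth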

Inference

a. Config

Modify the configuration as needed in a config file such as configs/infer/retinanet/retinanet.py

b. Inference

CUDA_VISIBLE_DEVICES="0" python tools/infer.py configs/infer/retinanet/retinanet.py image_path
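
Here image_path is the path to the image you want to run inference on. For example (the image path below is hypothetical):

# hypothetical image path
CUDA_VISIBLE_DEVICES="0" python tools/infer.py configs/infer/retinanet/retinanet.py data/demo.jpg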

Deploy

a. Convert to ONNX

Firstly, install volksdep following the official instructions.

Then, run the following command to convert the PyTorch model to ONNX. The input shape format is CxHxW. If you need an ONNX model with a constant input shape, remove --dynamic_shape at the end.

CUDA_VISIBLE_DEVICES="0" python tools/torch2onnx.py configs/trainval/retinanet/retinanet.py weight_path out_path --dummy_input_shape 3,800,1344 --dynamic_shape
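
For a model with a constant input shape, the same command without --dynamic_shape looks like this (a sketch with the other arguments unchanged):

CUDA_VISIBLE_DEVICES="0" python tools/torch2onnx.py configs/trainval/retinanet/retinanet.py weight_path out_path --dummy_input_shape 3,800,1344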

Some operations are not supported during model conversion; please see this for more details.

b. Inference SDK

Firstly, install flexinfer following the official instructions.

Then, see the example for details.

Contact

This repository is currently maintained by Yanjia Zhu (@mike112223), Hongxiang Cai (@hxcai), Yichao Xiong (@mileistone).

Credits

We borrowed a lot of code from mmcv and mmdetection; thanks to open-mmlab.