MegEngine / YOLOX

MegEngine implementation of YOLOX
Apache License 2.0

Introduction

YOLOX is an anchor-free version of YOLO, with a simpler design but better performance! It aims to bridge the gap between the research and industrial communities. For more details, please refer to our report on arXiv.

This repo is the MegEngine implementation of YOLOX; a PyTorch implementation is also available.

Updates!!

Coming soon.

Benchmark

Light Models.

| Model | size | mAP<sup>val</sup><br>0.5:0.95 | Params<br>(M) | FLOPs<br>(G) | weights |
| ----- | ---- | ---- | ---- | ---- | ---- |
| YOLOX-Tiny | 416 | 32.2 | 5.06 | 6.45 | github |

Standard Models.

Coming soon!

Quick Start

Installation

Step1. Install YOLOX.

```shell
git clone git@github.com:MegEngine/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
pip3 install -v -e .  # or python3 setup.py develop
```

Step2. Install [pycocotools](https://github.com/cocodataset/cocoapi).

```shell
pip3 install cython; pip3 install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
```
Demo

Step1. Download a pretrained model from the benchmark table.

Step2. Use either -n or -f to specify your detector's config. For example:

```shell
python tools/demo.py image -n yolox-tiny -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```

or

```shell
python tools/demo.py image -f exps/default/yolox_tiny.py -c /path/to/your/yolox_tiny.pkl --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```

Demo for video:

```shell
python tools/demo.py video -n yolox-s -c /path/to/your/yolox_s.pkl --path /path/to/your/video --conf 0.25 --nms 0.45 --tsize 416 --save_result --device [cpu/gpu]
```
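The `--tsize 416` flag above controls YOLOX's aspect-ratio-preserving resize: the longer side of the image is scaled to the target size and the remainder is padded with a constant gray value. A minimal NumPy sketch of that idea (the function name and the 114 padding constant are illustrative, not this repo's API):

```python
import numpy as np

def letterbox(img: np.ndarray, tsize: int = 416, pad_val: int = 114) -> np.ndarray:
    """Resize-and-pad sketch: scale the longer side to tsize, pad the rest."""
    h, w = img.shape[:2]
    r = tsize / max(h, w)
    nh, nw = int(h * r), int(w * r)
    # nearest-neighbour resize via index sampling (a stand-in for a real resize op)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # paste the resized image into a constant-valued square canvas
    canvas = np.full((tsize, tsize, 3), pad_val, dtype=img.dtype)
    canvas[:nh, :nw] = resized
    return canvas
```

Because the aspect ratio is preserved, boxes predicted on the padded canvas can be mapped back to the original image by dividing by the single scale factor `r`.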
Reproduce our results on COCO

Step1. Prepare the COCO dataset:

```shell
cd <YOLOX_HOME>
ln -s /path/to/your/COCO ./datasets/COCO
```

Step2. Reproduce our results on COCO by specifying -n:

```shell
python tools/train.py -n yolox-tiny -d 8 -b 128
```

* -d: number of gpu devices
* -b: total batch size; the recommended number for -b is num-gpu * 8

When using -f, the above command is equivalent to:

```shell
python tools/train.py -f exps/default/yolox_tiny.py -d 8 -b 128
```
Evaluation

We support batch testing for fast evaluation:

```shell
python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 64 -d 8 --conf 0.001 [--fuse]
```

* --fuse: fuse conv and bn
* -d: number of GPUs used for evaluation (defaults to all available GPUs)
* -b: total batch size across all GPUs

To reproduce the speed test, we use the following command:

```shell
python tools/eval.py -n yolox-tiny -c yolox_tiny.pkl -b 1 -d 1 --conf 0.001 --fuse
```
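The `--fuse` flag folds each BatchNorm layer's statistics into the preceding convolution's weights, so a single fused conv reproduces conv-then-bn at inference time. A hedged NumPy sketch of the underlying arithmetic (names are illustrative, not this repo's actual helper):

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm statistics into the preceding conv's weights.

    w: (out_ch, in_ch, kh, kw) conv weight, b: (out_ch,) conv bias.
    After fusing, conv(x, w_f, b_f) == bn(conv(x, w, b)).
    """
    scale = gamma / np.sqrt(var + eps)       # per-output-channel BN scale
    w_f = w * scale[:, None, None, None]     # scale each output filter
    b_f = (b - mean) * scale + beta          # fold mean/shift into the bias
    return w_f, b_f
```

This is why fusing is purely a speed optimization: the fused weights compute exactly the same function, just with one operator instead of two.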
Tutorials * [Training on custom data](docs/train_custom_data.md).

MegEngine Deployment

MegEngine in C++

Dump mge file

**NOTE**: the resulting model is dumped with `optimize_for_inference` and `enable_fuse_conv_bias_nonlinearity` enabled.

```shell
python3 tools/export_mge.py -n yolox-tiny -c yolox_tiny.pkl --dump_path yolox_tiny.mge
```

Benchmark

| megengine@tag1.5 +fastrun +weight_preprocess (msec) | 1 thread | 2 thread | 4 thread | 8 thread |
| ---- | ---- | ---- | ---- | ---- |
| x86_64(fp32) | 516.245 | 318.29 | 253.273 | 222.534 |
| x86_64(fp32+chw88) | 362.020 | NONE | NONE | NONE |
| aarch64(fp32+chw44) | 555.877 | 351.371 | 242.044 | NONE |
| aarch64(fp16+chw) | 439.606 | 327.356 | 255.531 | NONE |

| CUDA @ CUDA (msec) | 1 batch | 2 batch | 4 batch | 8 batch | 16 batch | 32 batch | 64 batch |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| megengine(fp32+chw) | 8.137 | 13.2893 | 23.6633 | 44.470 | 86.491 | 168.95 | 334.248 |
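Latency alone hides how well batching amortizes per-launch overhead; dividing each batch size by its latency gives throughput in images per second. A small illustrative Python snippet (latencies copied from the CUDA row above; the helper name is made up):

```python
# CUDA fp32+chw latencies (msec) from the benchmark table above.
latency_ms = {1: 8.137, 2: 13.2893, 4: 23.6633, 8: 44.470,
              16: 86.491, 32: 168.95, 64: 334.248}

def throughput(batch: int) -> float:
    """Images per second = batch size / (latency in seconds)."""
    return batch * 1000.0 / latency_ms[batch]

for b in latency_ms:
    print(f"batch {b:2d}: {throughput(b):7.1f} img/s")
```

Per these numbers, throughput keeps rising with batch size, so larger batches are worthwhile when latency is not the constraint.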

Third-party resources

Cite YOLOX

If you use YOLOX in your research, please cite our work by using the following BibTeX entry:

```
@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}
```