Nioolek / PPYOLOE_pytorch

An unofficial PyTorch implementation of PP-YOLOE, based on the Megvii YOLOX training code.
Apache License 2.0

The code can now reproduce the mAP of PP-YOLOE-s on COCO.

Introduction

An unofficial PyTorch implementation of PP-YOLOE, based on the Megvii YOLOX training code. Much of the code is adapted from the official PP-YOLOE implementation and YOLOX. Report on arXiv.

Updates

Coming soon.

Model

| Model | size | mAP val 0.5:0.95 | mAP test 0.5:0.95 | Speed V100 (ms) | Params (M) | FLOPs (G) | weights | backbone weights |
|---|---|---|---|---|---|---|---|---|
| PP-YOLOE-s | 640 | 43.1% | Training... | Training... | 7.93 | 17.36 | baidu pan code:qfld | baidu pan code:mwjy |
| PP-YOLOE-m | 640 | Training... | Training... | Training... | 23.43 | 49.91 | baidu pan code:xgji | baidu pan code:p4gy |
| PP-YOLOE-l | 640 | Training... | Training... | Training... | 52.20 | 110.07 | baidu pan code:1v82 | baidu pan code:6kkb |
| PP-YOLOE-x | 640 | Training... | Training... | Training... | 98.42 | 206.59 | baidu pan code:liq3 | baidu pan code:izas |

Quick Start

Unofficial paper interpretation (in Chinese): PPYOLOE深度解析 (an in-depth analysis of PP-YOLOE)

Installation

Step 1. Install from source (same procedure as Megvii YOLOX).

git clone git@github.com:Nioolek/PPYOLOE_pytorch.git
cd PPYOLOE_pytorch
pip3 install -v -e .  # or  python3 setup.py develop
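
If the install succeeded, the package should be importable. A quick sanity check (the exact version string depends on the YOLOX base you built from):

python3 -c "import yolox; print(yolox.__version__)"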

Demo

Step 1. Download a pretrained model from the Model table above.

Step 2. Run demo.py:

python tools/demo.py image -f exps/ppyoloe/default/ppyoloe_l.py -c /path/to/your/ppyoloe_l.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu] --ppyoloe --legacy

Because the preprocessing differs from YOLOX, you must pass the '--ppyoloe' and '--legacy' arguments during inference and evaluation.
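
For reference, a minimal sketch of the legacy-style preprocessing these flags select (ImageNet mean/std normalization on RGB input, as in YOLOX's legacy transform); the exact values and ordering are an assumption here, so check the repo's data transforms for the authoritative version:

import numpy as np

def legacy_preproc(resized_img_bgr):
    # Assumed ImageNet-style normalization applied when --legacy is set
    img = resized_img_bgr[:, :, ::-1].astype(np.float32) / 255.0   # BGR -> RGB, scale to [0, 1]
    img = (img - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    return img.transpose(2, 0, 1)                                  # HWC -> CHW for the network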

Train

Step 1. Prepare the COCO dataset:

cd <PPYOLOE_pytorch_HOME>
ln -s /path/to/your/COCO ./datasets/COCO

Step 2. Reproduce our results on COCO by specifying -f:

python -m yolox.tools.train -f exps/ppyoloe/default/ppyoloe_l.py -d 8 -b 64 --fp16 -o [--cache]
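
For example, with a single GPU you can scale down the device count and batch size (illustrative values; the ppyoloe_s exp file name follows the ppyoloe_l pattern above, and the batch size should be chosen to fit your GPU memory):

python -m yolox.tools.train -f exps/ppyoloe/default/ppyoloe_s.py -d 1 -b 8 --fp16 -o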

Evaluation

python -m yolox.tools.eval -f  exps/ppyoloe/default/ppyoloe_l.py -c ppyoloe_l.pth -b 64 -d 8 --conf 0.001 --legacy --ppyoloe [--fp16] [--fuse]

Because the preprocessing differs from YOLOX, you must pass the '--ppyoloe' and '--legacy' arguments during inference and evaluation.

More details can be found in the YOLOX docs.
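
For example, a single-GPU evaluation of the small model (illustrative; the exp file and checkpoint names follow the pattern above):

python -m yolox.tools.eval -f exps/ppyoloe/default/ppyoloe_s.py -c /path/to/your/ppyoloe_s.pth -b 8 -d 1 --conf 0.001 --legacy --ppyoloe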

(Optional) Convert a Paddle model to PyTorch

python paddle2torch.py -f 0 -i weights/ppyoloe_crn_x_300e_coco.pdparams -o weights/ppyoloe_x.pth
python paddle2torch.py -f 1 -i weights/CSPResNetb_x_pretrained.pdparams -o weights/CSPResNetb_x_pretrained.pth

-f 0 converts a pretrained detection model.

-f 1 converts a pretrained backbone model.

The download URLs for the Paddle pretrained models can be found at PaddleDetection.
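
Conceptually, the conversion loads the Paddle state dict, remaps parameter names to the PyTorch module layout, and saves a torch checkpoint. A rough sketch of that idea; the key renaming, any weight transposes, and the checkpoint layout (e.g. a "model" key) are assumptions here, since paddle2torch.py handles the real mapping:

import paddle
import torch

paddle_state = paddle.load("weights/ppyoloe_crn_x_300e_coco.pdparams")
torch_state = {}
for key, value in paddle_state.items():
    array = value.numpy() if hasattr(value, "numpy") else value   # Paddle tensor or ndarray
    # The real script renames keys to match the PyTorch model and transposes
    # fully connected weights where the layouts differ; kept verbatim here.
    torch_state[key] = torch.from_numpy(array)
torch.save({"model": torch_state}, "weights/ppyoloe_x.pth")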

More information

Reference

https://github.com/Megvii-BaseDetection/YOLOX

@article{yolox2021,
  title={YOLOX: Exceeding YOLO Series in 2021},
  author={Ge, Zheng and Liu, Songtao and Wang, Feng and Li, Zeming and Sun, Jian},
  journal={arXiv preprint arXiv:2107.08430},
  year={2021}
}

https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe