This repo splits the YOLOv5 model into {backbone, neck, head} to make it easier to work on individual modules and to support more backbones. Only the model definition is changed; the architecture, training, and testing of YOLOv5 are untouched, so when the original code is updated it is easy to keep this code in sync. If you have new ideas, please open a pull request and let's add new features together. If this repo helps you, please give it a star.
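The modular idea, as a minimal sketch (the class and argument names here are illustrative, not the repo's actual API):

```python
import torch.nn as nn

class Detector(nn.Module):
    """Illustrative composition of the three parts: any backbone that
    returns multi-scale feature maps (e.g. P3/P4/P5) can be swapped in."""

    def __init__(self, backbone, neck, head):
        super().__init__()
        self.backbone = backbone  # ResNet, ShuffleNet, Swin, VGG, etc.
        self.neck = neck          # feature fusion, e.g. FPN + PAN as in YOLOv5
        self.head = head          # YOLOv5 detection head

    def forward(self, x):
        features = self.backbone(x)   # multi-scale features
        fused = self.neck(features)   # fuse across scales
        return self.head(fused)       # per-scale predictions
```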
Please refer to requirements.txt.
Prepare your data in YOLOv5 format. You can use od/data/transform_voc.py to convert VOC data to the YOLOv5 data format; the box-coordinate conversion it performs is sketched below.
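For reference, a minimal, self-contained sketch of the standard VOC-to-YOLOv5 box conversion (this is the well-known formula, not necessarily the exact code in transform_voc.py):

```python
def voc_to_yolo(box, img_w, img_h):
    """Convert a VOC box (xmin, ymin, xmax, ymax, absolute pixels) to
    YOLOv5 format (x_center, y_center, width, height, normalized to 0-1)."""
    xmin, ymin, xmax, ymax = box
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return x_center, y_center, width, height

# Example: a 100x200 box at the top-left of a 640x640 image
print(voc_to_yolo((0, 0, 100, 200), 640, 640))  # (0.078125, 0.15625, 0.15625, 0.3125)
```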
Training and testing work the same way as in YOLOv5.

```bash
$ python scripts/train.py --batch 16 --epochs 5 --data configs/data.yaml --cfg configs/model_XXX.yaml
```
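configs/data.yaml follows the standard YOLOv5 data-file layout. A minimal example (paths and class names are illustrative):

```yaml
train: data/images/train   # directory (or txt list) of training images
val: data/images/val       # directory (or txt list) of validation images
nc: 2                      # number of classes
names: ['cat', 'dog']      # class names, index-aligned with label files
```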
```bash
# for nvidia tensor-core 2:4 sparsity, install apex
git clone https://github.com/NVIDIA/apex
cd apex
# if pip >= 23.1 (ref: https://pip.pypa.io/en/stable/news/#v23-1), which supports multiple `--config-settings` with the same key...
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" ./
# otherwise
pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
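Once Apex is installed, its automatic sparsity (ASP) tool applies the 2:4 pruning mask. A minimal sketch of the documented ASP workflow (the tiny model here is a stand-in; wiring this into the repo's training loop is up to you):

```python
# Sketch of NVIDIA Apex ASP (automatic sparsity) for 2:4 structured sparsity.
import torch
from apex.contrib.sparsity import ASP

model = torch.nn.Linear(64, 64).cuda()          # stand-in for a trained model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Mask weights to the 2:4 sparsity pattern; fine-tune afterwards to recover accuracy.
ASP.prune_trained_model(model, optimizer)
```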
A Google Colab demo is provided in train_demo.ipynb.
```bash
$ python scripts/eval.py --data configs/data.yaml --weights runs/train/yolo/weights/best.pt
```
For some reasons, I can't provide the pretrained weights, only the comparison results. Sorry!
All checkpoints are trained for 300 epochs with the default settings, and all backbones are trained without pretrained weights. YOLOv5 Nano and Small models use hyp.scratch-low.yaml hyperparameters; all others use hyp.scratch-high.yaml. The validation mAP is taken from the last epoch, which may not be the best.
| flexible-yolov5 model with different backbones | size (pixels) | mAP<sup>val</sup> 0.5:0.95 | mAP<sup>val</sup> 0.5 | params |
| --- | --- | --- | --- | --- |
| [flexible-YOLOv5n](https://pan.baidu.com/s/1UAvEmgWmpxA3oPm5CJ8C-g) (extraction code: kg22) | 640 | 25.7 | 43.3 | 1872157 |
| [flexible-YOLOv5s](https://pan.baidu.com/s/1ImN2ryMK3IPy8_St-Rzxhw) (extraction code: pt8i) | 640 | 35 | 54.7 | 7235389 |
| [flexible-YOLOv5m] | 640 | 42.1 | 62 | 21190557 |
| [flexible-YOLOv5l] | 640 | 45.3 | 65.3 | 46563709 |
| [flexible-YOLOv5x] | 640 | 47 | 66.7 | 86749405 |
| [mobilenet-v3-small] | 640 | 21.9 | 37.6 | 3185757 |
| [resnet-18] | 640 | 34.6 | 53.7 | 14240445 |
| [shufflenetv2-x1_0] | 640 | 27.8 | 45.1 | 4297569 |
| [repvgg-A0] | 640 | | | |
| [vgg-16bn] | 640 | 35.2 | 56.4 | 17868989 |
| [efficientnet-b1] | 640 | 38.1 | 58.6 | 9725597 |
| [swin-tiny] | 640 | 39.2 | 60.5 | 30691127 |
| [gcn-tiny] | 640 | 33.8 | 55.5 | 131474444 |
| [resnet-18-cbam] | 640 | 35.2 | 55.5 | 15620399 |
| [resnet-18-dcn] | 640 | | | |
```bash
$ python scripts/detector.py --weights yolov5.pth --imgs_root test_imgs --save_dir ./results --img_size 640 --conf_thresh 0.4 --iou_thresh 0.4
```

```bash
$ python scripts/export.py --weights yolov5.pth
```
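Assuming the export script writes an ONNX file next to the weights (the output file name here is an assumption), you can sanity-check the exported model with onnxruntime:

```python
# Sketch: verify an exported ONNX model loads and runs (file name assumed).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("yolov5.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # NCHW at 640x640
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```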
In the projects folder, tf_serving and triton demos are provided.
You can directly quantize the ONNX model:

```bash
$ python scripts/trt_quant/generate_int8_engine.py --onnx path --images-dir img_path --save-engine engine_path
```
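The images under --images-dir are presumably used for INT8 calibration; a few hundred representative images from your training or validation set are usually sufficient.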
For the TensorRT model, you can directly use the official TensorRT export and refer to scripts/trt_infer/cpp/. For testing, I use TensorRT-8.4.0.6.
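If the official export you use is trtexec (an assumption about tooling; file names are illustrative), a typical FP16 build looks like:

```bash
# Build a TensorRT engine from an exported ONNX model (illustrative file names).
trtexec --onnx=yolov5.onnx --saveEngine=yolov5.engine --fp16
```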
C++ and Python demos are provided in scripts/trt_infer.