YOLOv4-pytorch (attentive YOLOv4 and Mobilenetv3 YOLOv4)

A PyTorch implementation of YOLOv4, attentive YOLOv4, and MobileNet YOLOv4, supporting PASCAL VOC and COCO.

Tips: deep-learning guidance is available (object detection, object tracking, semantic segmentation, etc.); for small datasets, contact QQ 3419923783.

Results (updating)

| name | train dataset | test dataset | test size | mAP | inference time (ms) | params (M) | model link |
| ---- | ------------- | ------------ | --------- | --- | ------------------- | ---------- | ---------- |
| mobilenetv2-YOLOv4 | VOC trainval (07+12) | VOC test (07) | 416 | 0.851 | 11.29 | 46.34 | args |

Update!!!

Mobilenetv3-YOLOv4 has arrived! (You only need to change MODEL_TYPE in config/yolov4_config.py.)
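The switch lives in config/yolov4_config.py. As a rough sketch (the exact key names and the set of legal values are defined in that file, so trust it over this snippet):

```python
# config/yolov4_config.py -- illustrative sketch only; the real file
# defines the exact key name and the set of legal values
MODEL_TYPE = {"TYPE": "Mobilenetv3-YOLOv4"}  # e.g. "YOLOv4" or "Mobilenet-YOLOv4"
```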

News!!!

This repo adds some useful attention modules to the backbone. The following pictures illustrate them:

SEnet

CBAM
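To make the idea concrete, here is a minimal PyTorch sketch of an SE (squeeze-and-excitation) channel-attention block; it is not the repo's exact implementation:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(             # excitation: per-channel gate in (0, 1)
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                         # scale each channel by its gate
```

CBAM follows the same pattern but adds a spatial-attention branch after the channel branch.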

Highlights

YOLOv4 (attentive YOLOv4 and Mobilenet-YOLOv4) with several useful modules

Compared with others, this repo is simple to use, easy to read, and straightforward to improve.

Environment


Brief


Install dependencies

Install all dependencies. If you use conda, create and activate an environment first (e.g. under your conda install path such as ~/anaconda3, with an environment name like YOLOv4-pytorch), then run:

pip3 install -r requirements.txt --user

Note: the installation has been tested on Ubuntu 18.04 and Windows 10. In case of issues, check the detailed installation instructions.

Preparation

1. Clone the YOLOv4 repository

git clone https://github.com/argusswift/YOLOv4-pytorch.git

Update PROJECT_PATH in config/yolov4_config.py.
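For illustration, the edit might look like this (the paths are placeholders, and DATA_PATH is an assumed companion setting, so check the actual file):

```python
# config/yolov4_config.py -- placeholder paths, adjust to your machine
PROJECT_PATH = "/home/user/YOLOv4-pytorch"  # where you cloned this repo
DATA_PATH = "/home/user/data"               # where your datasets live (name assumed)
```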


2. Prepare the dataset

PascalVOC

  # Download the data.
  cd $HOME/data
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
  # Extract the data.
  tar -xvf VOCtrainval_11-May-2012.tar
  tar -xvf VOCtrainval_06-Nov-2007.tar
  tar -xvf VOCtest_06-Nov-2007.tar

Then update the dataset path in config/yolov4_config.py to point at the extracted data.

3. Download the weight file

4. Transfer to your own dataset (train on your own dataset)

Run the following command to start training (see config/yolov4_config.py for the details), and set DATA_TYPE to VOC or COCO before launching the training program.

CUDA_VISIBLE_DEVICES=0 nohup python -u train.py  --weight_path weight/yolov4.weights --gpu_id 0 > nohup.log 2>&1 &
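DATA_TYPE also lives in config/yolov4_config.py; a sketch (verify the exact variable name there):

```python
# config/yolov4_config.py -- sketch; verify the exact variable name in the repo
DATA_TYPE = "VOC"  # or "COCO", matching the dataset prepared above
```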

Training can also be resumed: add --resume and last.pt will be loaded automatically:

CUDA_VISIBLE_DEVICES=0 nohup python -u train.py  --weight_path weight/last.pt --gpu_id 0 > nohup.log 2>&1 &

To detect

Modify your detection image path: DATA_TEST=/path/to/your/test_data # your own images

for VOC dataset:
CUDA_VISIBLE_DEVICES=0 python3 eval_voc.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval --mode det
for COCO dataset:
CUDA_VISIBLE_DEVICES=0 python3 eval_coco.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval --mode det

The result images are saved in output/ and look like the following:

det-result


To test video

Modify:

results

To reproduce the picture above, use the following commands:

# To get ground truths of your dataset
python3 utils/get_gt_txt.py
# To plot P-R curve and calculate mean average precision
python3 utils/get_map.py 
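For reference, mean average precision is the mean over classes of the area under each precision-recall curve. A minimal NumPy sketch of all-point interpolated AP (not necessarily the exact interpolation utils/get_map.py uses):

```python
import numpy as np

def average_precision(recall, precision):
    """Area under the P-R curve with monotone (all-point) interpolation.

    recall/precision: arrays sorted by descending detection confidence.
    """
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):   # make precision non-increasing
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]    # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```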

To evaluate (COCO)

Modify your evaluation dataset path: DATA_PATH=/path/to/your/test_data # your own images

CUDA_VISIBLE_DEVICES=0 python3 eval_coco.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval --mode val

type=bbox
Running per image evaluation...      DONE (t=0.34s).
Accumulating evaluation results...   DONE (t=0.08s).
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.438
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.607
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.469
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.253
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.486
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.567
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.342
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.571
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.632
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.458
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.691
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.790
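The table above is the standard pycocotools summary. For orientation, a minimal sketch of how such numbers are produced with pycocotools (the file names are examples, not the repo's actual paths):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground truth (example path)
coco_dt = coco_gt.loadRes("detections.json")          # detections in COCO results format
ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints an AP/AR table like the one shown above
```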

To evaluate your model's parameter count

python3 utils/modelsize.py
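If you only need a quick trainable-parameter count, a minimal sketch that works for any nn.Module (not necessarily what utils/modelsize.py does):

```python
import torch.nn as nn

def count_parameters_m(model: nn.Module) -> float:
    """Trainable parameter count in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```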

To visualize heatmaps

Set showatt=True in eval_voc.py and you will see the heatmaps produced from the network's output:

for VOC dataset:
CUDA_VISIBLE_DEVICES=0 python3 eval_voc.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval
for COCO dataset:
CUDA_VISIBLE_DEVICES=0 python3 eval_coco.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval

The heatmaps are saved in output/ and look like this:

heatmaps
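Such overlays are typically produced by resizing the attention map to the image, colorizing it, and blending; a sketch with OpenCV (not the repo's exact code):

```python
import cv2
import numpy as np

def overlay_heatmap(image_bgr, att_map, alpha=0.4):
    """Blend a 2D attention map over a uint8 BGR image as a JET-colored heatmap."""
    att = cv2.resize(att_map.astype(np.float32),
                     (image_bgr.shape[1], image_bgr.shape[0]))
    att = (255 * (att - att.min()) / (att.max() - att.min() + 1e-8)).astype(np.uint8)
    heat = cv2.applyColorMap(att, cv2.COLORMAP_JET)
    return cv2.addWeighted(image_bgr, 1 - alpha, heat, alpha, 0)
```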
