JackWoo0831 / Yolov7-tracker

YOLOX, YOLOv7, YOLOv8 and several multi-object trackers (SORT, DeepSORT, ByteTrack, BoT-SORT, etc.) on the MOT17 and VisDrone2019 datasets, with a unified code style and integrated trackers for easy embedding in your own projects.
GNU General Public License v3.0

YOLO detector and SOTA Multi-object tracker Toolbox

❗❗Important Notes

Compared to the previous version, this is an entirely new version (branch v2)!!!

Please use this version directly: almost all of the code has been rewritten to improve readability and results, and to fix errors in the old code.

git clone https://github.com/JackWoo0831/Yolov7-tracker.git
git checkout v2  # change to v2 branch !!

🙌 If you have any suggestions for adding trackers, please leave a comment in the Issues section with the paper title or link! Everyone is welcome to contribute to making this repo better.

**Language**: English | [简体中文](README_CN.md)

🗺️ Latest News

❤️ Introduction

This repo is a toolbox implementing tracking-by-detection multi-object trackers. The detector supports:

- YOLOX
- YOLO v7
- YOLO v8

and the tracker supports:

- SORT
- DeepSORT
- ByteTrack
- BoT-SORT
- UCMC Track
- etc.

and the reid model supports:

The highlights are:

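The tracking-by-detection paradigm above can be sketched in a few lines: each frame, a detector proposes boxes, and the tracker associates them with existing tracks. This minimal sketch uses greedy IoU matching only; the trackers in this repo add Kalman prediction, Hungarian matching, and/or ReID features. All names below are illustrative, not this repo's API.

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match existing tracks to new detections by IoU.

    Returns (matches: track_id -> detection index, unmatched detection indices).
    Unmatched detections would spawn new tracks; unmatched tracks age out.
    """
    matches, unmatched = {}, list(range(len(detections)))
    for tid, tbox in tracks.items():
        best_j, best_iou = -1, iou_thresh
        for j in unmatched:
            score = iou(tbox, detections[j])
            if score > best_iou:
                best_j, best_iou = j, score
        if best_j >= 0:
            matches[tid] = best_j
            unmatched.remove(best_j)
    return matches, unmatched

# One frame step: track 1 overlaps the first detection; the second is new.
tracks = {1: (10, 10, 50, 50)}
dets = [(12, 11, 52, 51), (200, 200, 240, 240)]
matches, new = associate(tracks, dets)
```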

🔨 Installation

The basic env is:

Run the following command to install the other packages:

pip3 install -r requirements.txt

🔍 Detector installation

  1. YOLOX:

The version of YOLOX is 0.1.0 (the same as ByteTrack). To install it, clone the ByteTrack repo somewhere and run:

git clone https://github.com/ifzhang/ByteTrack.git
cd ByteTrack
python3 setup.py develop
  2. YOLO v7:

There is no need to execute additional steps, as this repo itself is based on YOLOv7.

  3. YOLO v8:

Please run:

pip3 install ultralytics==8.0.94

📑 Data preparation

If you do not want to test on a specific dataset and only want to run demos, please skip this section.

No matter what dataset you want to test, please organize it in the following way (YOLO style):

dataset_name
     |---images
           |---train
                 |---sequence_name1
                             |---000001.jpg
                             |---000002.jpg ...
           |---val ...
           |---test ...
     |---labels
           |---train
                 |---sequence_name1
                             |---000001.txt
                             |---000002.txt ...
           |---val ...
           |---test ...

You can refer to the codes in ./tools to see how to organize the datasets.
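As a rough sketch of what the organizing scripts do, the helper below just builds the YOLO-style directory layout; converting the annotations themselves is dataset-specific, so see the scripts in ./tools for the real conversions. `make_yolo_layout` is a hypothetical name, not a function from this repo.

```python
from pathlib import Path

def make_yolo_layout(root, splits=("train", "val", "test")):
    """Create the images/ and labels/ split directories of a YOLO-style dataset."""
    root = Path(root)
    for top in ("images", "labels"):
        for split in splits:
            (root / top / split).mkdir(parents=True, exist_ok=True)
    return root

# Sequences (e.g. sequence_name1/000001.jpg) are then copied under each split.
layout = make_yolo_layout("dataset_name")
```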

Then, you need to prepare a yaml file that indicates the dataset paths, so that the code can find the images.

Some examples are in tracker/config_files. The important keys are:

DATASET_ROOT: '/data/xxxx/datasets/MOT17'  # your dataset root
SPLIT: test  # train, test or val
CATEGORY_NAMES:  # same in YOLO training
  - 'pedestrian'

CATEGORY_DICT:
  0: 'pedestrian'
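To illustrate how these keys fit together: DATASET_ROOT and SPLIT locate the image sequences on disk, while CATEGORY_DICT maps predicted class ids back to names. The sketch below is illustrative only; the repo's own config loading may differ in detail.

```python
import os

# The same keys as in the yaml example above.
config = {
    "DATASET_ROOT": "/data/xxxx/datasets/MOT17",
    "SPLIT": "test",
    "CATEGORY_NAMES": ["pedestrian"],
    "CATEGORY_DICT": {0: "pedestrian"},
}

def split_dir(cfg):
    """Directory holding the sequences of the chosen split."""
    return os.path.join(cfg["DATASET_ROOT"], "images", cfg["SPLIT"])

def class_name(cfg, cls_id):
    """Human-readable name for a detector class id."""
    return cfg["CATEGORY_DICT"][cls_id]
```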

🚗 Practice

🏃 Training

Trackers themselves generally have no parameters that need training. Please refer to the training instructions of each detector to train the YOLO models.

Some references may help you:

python train_aux.py --dataset visdrone --workers 8 --device <$GPU_id$> --batch-size 16 --data data/visdrone_all.yaml --img 1280 1280 --cfg cfg/training/yolov7-w6.yaml --weights <$YOLO v7 pretrained model path$> --name yolov7-w6-custom --hyp data/hyp.scratch.custom.yaml

😊 Tracking !

If you only want to run a demo:

python tracker/track_demo.py --obj ${video path or images folder path} --detector ${yolox, yolov8 or yolov7} --tracker ${tracker name} --kalman_format ${kalman format, sort, byte, ...} --detector_model_path ${detector weight path} --save_images

For example:

python tracker/track_demo.py --obj M0203.mp4 --detector yolov8 --tracker deepsort --kalman_format byte --detector_model_path weights/yolov8l_UAVDT_60epochs_20230509.pt --save_images

If you want to run trackers on dataset:

python tracker/track.py --dataset ${dataset name, related with the yaml file} --detector ${yolox, yolov8 or yolov7} --tracker ${tracker name} --kalman_format ${kalman format, sort, byte, ...} --detector_model_path ${detector weight path}

For example:

Important notes for UCMC Track:

  1. Camera parameters. UCMC Track needs the intrinsic and extrinsic parameters of the camera. Please organize them like tracker/cam_param_files/uavdt/M0101.txt; one video sequence corresponds to one txt file. If you do not have labelled parameters, you can refer to the estimation toolbox in the original repo (https://github.com/corfyi/UCMCTrack).

  2. The code does not contain the camera motion compensation part between every two frames; please refer to https://github.com/corfyi/UCMCTrack/issues/12. From my perspective, since the algorithm's name includes 'uniform', updating the compensation between every two frames is not necessary.
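To see why UCMC Track needs the intrinsics and extrinsics: in a pinhole model they let a detection's foot point be back-projected from the image onto the ground plane (z = 0), where the uniform motion model operates. A minimal sketch with made-up toy parameters, not the tracker's actual code or the M0101.txt format:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project pixel (u, v) onto the world plane z = 0.

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation,
    i.e. x_cam = R @ x_world + t.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    d = R.T @ ray_cam                                   # ray direction, world frame
    c = -R.T @ t                                        # camera centre, world frame
    s = -c[2] / d[2]                                    # scale where the ray hits z = 0
    return c + s * d

# Toy setup: camera 5 m above the origin, looking straight down.
K = np.eye(3)
R = np.diag([1.0, -1.0, -1.0])       # 180 deg about x: camera z-axis points down
t = -R @ np.array([0.0, 0.0, 5.0])
p = pixel_to_ground(0.0, 0.0, K, R, t)  # principal ray -> directly below camera
```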

✅ Evaluation

Coming soon. As an alternative, after obtaining the result txt file, you can use the easier-to-use TrackEval repo.