FastMOT is a custom multiple object tracker that implements:
- YOLO detector
- SSD detector
- Deep SORT + OSNet ReID
- KLT tracker
- Camera motion compensation
Two-stage trackers like Deep SORT run detection and feature extraction sequentially, which often becomes a bottleneck. FastMOT significantly speeds up the entire system so that it runs in real time even on Jetson. Camera motion compensation improves tracking in scenes with a moving camera, where Deep SORT and FairMOT fail.
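As a rough illustration of the motion compensation idea (a minimal sketch using OpenCV, not the actual FastMOT implementation; the function name and parameter values are assumptions), camera motion between frames can be estimated from sparse optical flow and applied to track states:

```python
import cv2

def estimate_camera_motion(prev_gray, cur_gray):
    """Fit a partial affine transform describing camera motion between two frames."""
    # track sparse corner features with KLT optical flow
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=7)
    cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, prev_pts, None)
    good_prev = prev_pts[status.ravel() == 1]
    good_cur = cur_pts[status.ravel() == 1]
    # robustly estimate the 2x3 affine matrix with RANSAC
    affine, _ = cv2.estimateAffinePartial2D(good_prev, good_cur, method=cv2.RANSAC)
    return affine  # apply this to track states to compensate for camera motion
```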
To achieve faster processing, FastMOT only runs the detector and feature extractor every N frames, while KLT fills in the gaps efficiently. FastMOT also re-identifies objects that moved out of frame to keep the same IDs.
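A minimal sketch of this scheduling idea (hypothetical names, not the actual FastMOT API):

```python
DETECTOR_FRAME_SKIP = 5  # run detection and feature extraction every N frames

def run_tracking(frames, detector, extractor, tracker, klt_flow):
    """Illustrative loop: expensive models run every N frames, KLT fills the gaps."""
    for frame_id, frame in enumerate(frames):
        if frame_id % DETECTOR_FRAME_SKIP == 0:
            detections = detector(frame)                # heavy: object detection
            embeddings = extractor(frame, detections)   # heavy: ReID features
            tracker.update(detections, embeddings)      # data association
        else:
            tracker.propagate(klt_flow(frame))          # cheap: KLT optical flow update
```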
YOLOv4 was trained on CrowdHuman (82% mAP@0.5), and the SSDs are pretrained COCO models from TensorFlow. Both detection and feature extraction use the TensorRT backend and perform asynchronous inference. In addition, most algorithms, including KLT, the Kalman filter, and data association, are optimized using Numba.
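As a hedged example of the kind of Numba optimization this refers to (an illustrative kernel, not code from the repository), a bounding-box IoU function can be JIT-compiled to run at near-C speed:

```python
import numba as nb
import numpy as np

@nb.njit(fastmath=True, cache=True)
def iou(tlbr1, tlbr2):
    """IoU of two boxes given as (x1, y1, x2, y2) arrays."""
    ix1 = max(tlbr1[0], tlbr2[0])
    iy1 = max(tlbr1[1], tlbr2[1])
    ix2 = min(tlbr1[2], tlbr2[2])
    iy2 = min(tlbr1[3], tlbr2[3])
    inter = max(ix2 - ix1, 0.) * max(iy2 - iy1, 0.)
    area1 = (tlbr1[2] - tlbr1[0]) * (tlbr1[3] - tlbr1[1])
    area2 = (tlbr2[2] - tlbr2[0]) * (tlbr2[3] - tlbr2[1])
    return inter / (area1 + area2 - inter)

print(iou(np.array([0., 0., 10., 10.]), np.array([5., 5., 15., 15.])))  # ~0.143
```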
Detector Skip | MOTA | IDF1 | HOTA | MOTP | MT | ML |
---|---|---|---|---|---|---|
N = 1 | 66.8% | 56.4% | 45.0% | 79.3% | 912 | 274 |
N = 5 | 65.1% | 57.1% | 44.3% | 77.9% | 860 | 317 |
Sequence | Density | FPS |
---|---|---|
MOT17-13 | 5 - 30 | 42 |
MOT17-04 | 30 - 50 | 26 |
MOT17-03 | 50 - 80 | 18 |
Tracking performance is evaluated on the MOT20 train set with YOLOv4 using TrackEval. Note that neither YOLOv4 nor OSNet was trained or finetuned on MOT20, so the train set results should generalize well. FPS results are obtained on Jetson Xavier NX (20W 2-core mode).
FastMOT achieves MOTA scores close to state-of-the-art trackers from the MOT Challenge, and increasing N has only a small impact on MOTA. Tracking speed can reach up to 42 FPS depending on the number of objects. Lighter models (e.g. YOLOv4-tiny) are recommended for more constrained devices like Jetson Nano. FPS is expected to be in the range of 50 - 150 on a desktop CPU/GPU.
Make sure to have nvidia-docker installed. The image requires NVIDIA Driver version >= 450 for Ubuntu 18.04 and >= 465.19.01 for Ubuntu 20.04. Build and run the docker image:
# Add --build-arg TRT_IMAGE_VERSION=21.05 for Ubuntu 20.04
# Add --build-arg CUPY_NVCC_GENERATE_CODE=... to speed up build for your GPU, e.g. "arch=compute_75,code=sm_75"
docker build -t fastmot:latest .
# Run xhost local:root first if you cannot visualize inside the container
docker run --gpus all --rm -it -v $(pwd):/usr/src/app/FastMOT -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=unix$DISPLAY -e TZ=$(cat /etc/timezone) fastmot:latest
Make sure to have JetPack >= 4.4 installed and run the script:
./scripts/install_jetson.sh
Pretrained OSNet, SSD, and my YOLOv4 ONNX model are included.
./scripts/download_models.sh
Build the YOLO TensorRT plugin:
cd fastmot/plugins
make
Only required for SSD (not supported on Ubuntu 20.04)
./scripts/download_data.sh
python3 app.py --input-uri ... --mot
The `--input-uri` argument accepts:
- Image sequence: `--input-uri %06d.jpg`
- Video file: `--input-uri file.mp4`
- USB webcam: `--input-uri /dev/video0`
- CSI camera: `--input-uri csi://0`
- RTSP stream: `--input-uri rtsp://<user>:<password>@<ip>:<port>/<path>`
- HTTP stream: `--input-uri http://<user>:<password>@<ip>:<port>/<path>`
Use `--show` to visualize, `--output-uri` to save output, and `--txt` for MOT compliant results.
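For example, a full invocation might look like `python3 app.py --input-uri video.mp4 --mot --show --output-uri out.mp4 --txt results.txt` (the file names here are placeholders; check the help output below for the exact argument forms).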
Show help message for all options:
python3 app.py -h
Note that the first run will be slow due to Numba compilation. To use the FFMPEG backend on x86, set `WITH_GSTREAMER = False` here.
FastMOT can be easily extended to a custom class (e.g. vehicle). You need to train both YOLO and a ReID network on your object class. Check Darknet for training YOLO and fast-reid for training ReID. After training, convert weights to ONNX format. The TensorRT plugin adapted from tensorrt_demos is only compatible with Darknet.
FastMOT also supports multi-class tracking. It is recommended to train a ReID network for each class to extract features separately.
To convert the trained Darknet weights to ONNX, install onnx and run the conversion script:
pip3 install onnx==1.4.1
./scripts/yolo2onnx.py --config yolov4.cfg --weights yolov4.weights
To add a custom detector, subclass `fastmot.models.YOLO` like here: https://github.com/GeekAlexis/FastMOT/blob/32c217a7d289f15a3bb0c1820982df947c82a650/fastmot/models/yolo.py#L100-L109
The subclass should define the following attributes (see the example sketch after the anchor notes below):
- `ENGINE_PATH : Path` - Path to TensorRT engine. If not found, the TensorRT engine will be converted from the ONNX model at runtime and cached for later use.
- `MODEL_PATH : Path` - Path to ONNX model.
- `NUM_CLASSES : int` - Total number of trained classes.
- `LETTERBOX : bool` - Keep aspect ratio when resizing.
- `NEW_COORDS : bool` - `new_coords` Darknet parameter for each yolo layer.
- `INPUT_SHAPE : tuple` - Input size in the format `(channel, height, width)`.
- `LAYER_FACTORS : List[int]` - Scale factors with respect to the input size for each yolo layer.
- `SCALES : List[float]` - `scale_x_y` Darknet parameter for each yolo layer.
- `ANCHORS : List[List[int]]` - Anchors grouped by each yolo layer.
Note that the anchors may not follow the same order as in the Darknet cfg file. You need to mask out the anchors for each yolo layer using the indices in `mask` in the Darknet cfg. Unlike YOLOv4, the anchors are usually in reverse order for YOLOv3 and YOLOv3/v4-tiny.
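For illustration, a single-class subclass might look roughly like the following (a sketch only: the class name, file paths, input shape, scales, and anchors are placeholder values and must match your own Darknet cfg and ONNX model):

```python
from pathlib import Path
from fastmot.models import YOLO

class YOLOv4Custom(YOLO):
    ENGINE_PATH = Path(__file__).parent / 'yolov4_custom.trt'   # built and cached at runtime
    MODEL_PATH = Path(__file__).parent / 'yolov4_custom.onnx'   # converted from Darknet weights
    NUM_CLASSES = 1
    LETTERBOX = False
    NEW_COORDS = False
    INPUT_SHAPE = (3, 512, 512)
    LAYER_FACTORS = [8, 16, 32]
    SCALES = [1.2, 1.1, 1.05]
    ANCHORS = [[11, 22, 24, 60, 37, 116],     # anchors for each yolo layer,
               [54, 186, 69, 268, 89, 369],   # selected with the cfg `mask` indices
               [126, 491, 194, 314, 278, 520]]
```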
Set class labels to your object classes with `fastmot.models.set_label_map`. Then modify cfg/mot.json: set `model` in `yolo_detector_cfg` to the added Python class name and set the `class_ids` of interest. You may want to play with `conf_thresh` based on model performance.
To add a custom ReID model, subclass `fastmot.models.ReID` like here: https://github.com/GeekAlexis/FastMOT/blob/32c217a7d289f15a3bb0c1820982df947c82a650/fastmot/models/reid.py#L50-L55
The subclass should define the following attributes (see the example sketch after the list below):
- `ENGINE_PATH : Path` - Path to TensorRT engine. If not found, the TensorRT engine will be converted from the ONNX model at runtime and cached for later use.
- `MODEL_PATH : Path` - Path to ONNX model.
- `INPUT_SHAPE : tuple` - Input size in the format `(channel, height, width)`.
- `OUTPUT_LAYOUT : int` - Feature dimension output by the model.
- `METRIC : {'euclidean', 'cosine'}` - Distance metric used to match features.
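For illustration, such a subclass might look roughly like this (a sketch: the class name, paths, input shape, and feature dimension are placeholders and must match your trained model):

```python
from pathlib import Path
from fastmot.models import ReID

class ReIDCustom(ReID):
    ENGINE_PATH = Path(__file__).parent / 'reid_custom.trt'   # built and cached at runtime
    MODEL_PATH = Path(__file__).parent / 'reid_custom.onnx'   # exported from your ReID framework
    INPUT_SHAPE = (3, 256, 128)   # (channel, height, width)
    OUTPUT_LAYOUT = 512           # feature dimension output by the model
    METRIC = 'euclidean'          # or 'cosine'
```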
Modify cfg/mot.json: set `model` in `feature_extractor_cfgs` to the added Python class name. For more than one class, add more feature extractor configurations to the list `feature_extractor_cfgs`. You may want to play with `max_assoc_cost` and `max_reid_cost` based on model performance.
If you find this repo useful in your project or research, please star and consider citing it:
@software{yukai_yang_2020_4294717,
author = {Yukai Yang},
title = {{FastMOT: High-Performance Multiple Object Tracking Based on Deep SORT and KLT}},
month = nov,
year = 2020,
publisher = {Zenodo},
version = {v1.0.0},
doi = {10.5281/zenodo.4294717},
url = {https://doi.org/10.5281/zenodo.4294717}
}