open-mmlab / mmtracking

OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
https://mmtracking.readthedocs.io/en/latest/
Apache License 2.0
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/mmtrack)](https://pypi.org/project/mmtrack/) [![PyPI](https://img.shields.io/pypi/v/mmtrack)](https://pypi.org/project/mmtrack) [![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmtracking.readthedocs.io/en/latest/) [![badge](https://github.com/open-mmlab/mmtracking/workflows/build/badge.svg)](https://github.com/open-mmlab/mmtracking/actions) [![codecov](https://codecov.io/gh/open-mmlab/mmtracking/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmtracking) [![license](https://img.shields.io/github/license/open-mmlab/mmtracking.svg)](https://github.com/open-mmlab/mmtracking/blob/master/LICENSE) [📘Documentation](https://mmtracking.readthedocs.io/) | [🛠️Installation](https://mmtracking.readthedocs.io/en/latest/install.html) | [👀Model Zoo](https://mmtracking.readthedocs.io/en/latest/model_zoo.html) | [🆕Update News](https://mmtracking.readthedocs.io/en/latest/changelog.html) | [🤔Reporting Issues](https://github.com/open-mmlab/mmtracking/issues/new/choose)
English | [简体中文](README_zh-CN.md)

Introduction

MMTracking is an open source video perception toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+.

What's New

We release MMTracking 1.0.0rc0, the first version of MMTracking 1.x.

Built upon the new training engine, MMTracking 1.x unifies the interfaces of datasets, models, evaluation, and visualization.

We also support more methods in MMTracking 1.x, such as StrongSORT for MOT, Mask2Former for VIS, and PrDiMP for SOT.

Please refer to the dev-1.x branch for usage of MMTracking 1.x.

Installation

Please refer to install.md for installation instructions.
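As a rough sketch of what a typical install looks like (the Python version, the choice of `mmcv-full` wheel, and the PyPI-versus-source split below are assumptions; install.md is the authoritative reference, including how to match `mmcv-full` to your CUDA and PyTorch versions):

```shell
# Create an isolated environment (Python version is an assumption).
conda create -n open-mmlab python=3.8 -y
conda activate open-mmlab

# Install PyTorch first, then the OpenMMLab dependencies.
pip install torch torchvision
pip install mmcv-full   # pick the prebuilt wheel matching your CUDA/PyTorch
pip install mmdet

# Install MMTracking itself, either from PyPI...
pip install mmtrack

# ...or from source, for development:
git clone https://github.com/open-mmlab/mmtracking.git
cd mmtracking
pip install -r requirements/build.txt
pip install -v -e .   # editable install: local edits take effect immediately
```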

Getting Started

Please see dataset.md and quick_run.md for the basic usage of MMTracking.

A Colab tutorial is also provided. You can preview the notebook or run it directly on Colab.
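To give a flavor of the quick-run workflow, a tracking demo can be launched from a source checkout roughly as follows. This is a sketch only: `demo_mot_vis.py` ships with the repository, but `${CONFIG}` and `${CHECKPOINT}` are placeholders you must fill in with a real config file and its matching checkpoint from the model zoo.

```shell
# Run the bundled MOT demo on a video and write the visualized result.
# ${CONFIG} and ${CHECKPOINT} are placeholders; see the model zoo.
python demo/demo_mot_vis.py \
    ${CONFIG} \
    --checkpoint ${CHECKPOINT} \
    --input demo/demo.mp4 \
    --output mot_result.mp4
```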

There are also usage tutorials covering:

- learning about configs, with detailed walkthroughs of example VID, MOT, and SOT configs
- customizing datasets and data pipelines
- customizing VID, MOT, and SOT models
- customizing runtime settings
- useful tools
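The config system these tutorials describe follows the OpenMMLab pattern: a config file names `_base_` files whose dictionaries are merged recursively, with the child's keys overriding the parents'. The following is a minimal pure-Python illustration of that merge rule only; it is not MMCV's actual `Config` implementation, which additionally handles file loading, `_delete_` keys, and other details.

```python
def merge_config(base: dict, child: dict) -> dict:
    """Recursively merge `child` into `base`; child keys win.

    Illustrative sketch of the _base_ inheritance rule used by
    OpenMMLab-style configs, not the real mmcv Config class.
    """
    merged = dict(base)
    for key, value in child.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged


# A toy "base" detector config and a "child" that overrides one nested
# field, mimicking how a derived config tweaks an inherited model.
base = {"model": {"type": "FasterRCNN", "backbone": {"depth": 50}},
        "data": {"samples_per_gpu": 2}}
child = {"model": {"backbone": {"depth": 101}}}

cfg = merge_config(base, child)
print(cfg["model"]["backbone"]["depth"])  # 101 -- overridden by the child
print(cfg["model"]["type"])               # FasterRCNN -- inherited from base
```

Because only the overridden leaf changes, the rest of the inherited tree survives intact, which is why OpenMMLab configs can stay short while building on large base files.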

Benchmark and model zoo

Results and models are available in the model zoo.

Video Object Detection

Supported Methods

Supported Datasets

Single Object Tracking

Supported Methods

Supported Datasets

Multi-Object Tracking

Supported Methods

Supported Datasets

Video Instance Segmentation

Supported Methods

Supported Datasets

Contributing

We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guidelines and to this discussion for the development roadmap.

Acknowledgement

MMTracking is an open source project that welcomes contributions and feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible and standardized toolkit for reimplementing existing methods and developing new video perception methods.

Citation

If you find this project useful in your research, please consider citing:

```bibtex
@misc{mmtrack2020,
    title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
    author={MMTracking Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmtracking}},
    year={2020}
}
```

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab