:wave: Everyone is welcome to contribute to and collaborate on the HQTrack repository!
Tracking Anything in High Quality (HQTrack) is a framework for high-performance video object tracking and segmentation. It consists mainly of a Video Multi-Object Segmenter (VMOS) and a Mask Refiner (MR), and it can track multiple target objects at the same time while outputting accurate object masks.
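At a high level, VMOS propagates the target masks frame by frame, and the Mask Refiner (built on HQ-SAM) refines each propagated mask using its bounding box as a prompt, keeping the refined result only when it agrees well enough with the VMOS prediction. A rough per-frame sketch of this idea, where `vmos.step` and `refiner.predict` are hypothetical placeholder interfaces, not the repo's actual API:

```python
import numpy as np

def mask_to_box(mask: np.ndarray):
    """Bounding box (x0, y0, x1, y1) of a binary mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])

def track_frame(vmos, refiner, frame, iou_thresh=0.1):
    refined_masks = []
    for mask in vmos.step(frame):  # hypothetical: VMOS propagates every target
        box = mask_to_box(mask)
        if box is None:            # target lost in this frame: keep the empty mask
            refined_masks.append(mask)
            continue
        hq_mask = refiner.predict(frame, box=box)  # hypothetical: HQ-SAM refinement
        # Adopt the refined mask only if it stays consistent with the VMOS prediction.
        inter = np.logical_and(mask, hq_mask).sum()
        union = np.logical_or(mask, hq_mask).sum()
        refined_masks.append(hq_mask if inter / max(union, 1) > iou_thresh else mask)
    return refined_masks
```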
:beer: HQTrack achieved runner-up in the Visual Object Tracking and Segmentation (VOTS2023) challenge.
We also provide a demo script, which supports box and point prompts as inputs. It is a pure Python script that lets the user test arbitrary videos.
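For reference, box and point prompts are just pixel coordinates turned into an initial mask. A minimal sketch of this step with the bundled `segment_anything_hq` package, assuming it keeps the upstream SAM `sam_model_registry`/`SamPredictor` interface (the frame path and prompt coordinates below are placeholders):

```python
import cv2
import numpy as np
from segment_anything_hq import sam_model_registry, SamPredictor

# Load HQ-SAM ViT-H; the checkpoint path is a placeholder.
sam = sam_model_registry["vit_h"](
    checkpoint="segment_anything_hq/pretrained_model/sam_hq_vit_h.pth")
predictor = SamPredictor(sam)

# Any video frame works; "frame0.jpg" is a placeholder.
frame = cv2.cvtColor(cv2.imread("frame0.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(frame)

# A box prompt (x0, y0, x1, y1) plus one foreground point prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),  # 1 = foreground click
    box=np.array([200, 120, 440, 360]),
    multimask_output=False,
)
init_mask = masks[0]  # use this mask to initialize tracking
```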
Create a conda environment and install PyTorch:

```bash
conda create -n hqtrack python=3.8
conda activate hqtrack
conda install pytorch==1.9 torchvision cudatoolkit=10.2 -c pytorch
```

Install HQ-SAM and common dependencies:

```bash
cd segment_anything_hq
pip install -e .
pip install opencv-python pycocotools matplotlib onnxruntime onnx
```

Build the Pytorch-Correlation-extension package:

```bash
cd packages/Pytorch-Correlation-extension/
python setup.py install
```

Build the DCNv3 operators:

```bash
cd HQTrack/networks/encoders/ops_dcnv3
./make.sh
```
Install the remaining dependencies:

```bash
pip install vot-toolkit
pip install easydict
pip install lmdb
pip install einops
pip install jpeg4py
pip install 'protobuf~=3.19.0'
conda install setuptools==58.0.4
pip install timm
pip install tb-nightly
pip install tensorboardx
pip install scikit-image
pip install rsa
pip install six
pip install pillow
```
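A quick sanity check that the environment and the compiled correlation extension work (a minimal sketch; it assumes Pytorch-Correlation-extension installs its usual `spatial_correlation_sampler` module, as in its upstream repo):

```python
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# Importable only if `python setup.py install` in Pytorch-Correlation-extension succeeded.
from spatial_correlation_sampler import SpatialCorrelationSampler
print("spatial_correlation_sampler OK")
```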
Download the VMOS model from Google Drive or Baidu Drive and put it under `/path/to/HQTrack/result/default_InternT_MSDeAOTL_V2/YTB_DAV_VIP/ckpt/`.
Download HQ-SAM_h and put it under `/path/to/HQTrack/segment_anything_hq/pretrained_model/`.
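Before running, you can confirm both checkpoints are in place and load cleanly. A minimal check, run from the HQTrack root; `sam_hq_vit_h.pth` is the usual HQ-SAM ViT-H filename, so adjust it if yours differs:

```python
import glob
import torch

# VMOS checkpoint: any .pth file in the expected ckpt directory.
vmos_ckpts = glob.glob("result/default_InternT_MSDeAOTL_V2/YTB_DAV_VIP/ckpt/*.pth")
assert vmos_ckpts, "no VMOS checkpoint found"
torch.load(vmos_ckpts[0], map_location="cpu")
print("VMOS checkpoint OK:", vmos_ckpts[0])

# HQ-SAM checkpoint (filename is an assumption).
sam_ckpt = "segment_anything_hq/pretrained_model/sam_hq_vit_h.pth"
torch.load(sam_ckpt, map_location="cpu")
print("HQ-SAM checkpoint OK:", sam_ckpt)
```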
To evaluate on VOTS2023, initialize a VOT workspace, register the tracker, and run the evaluation script:

```bash
cd /path/to/VOTS23_workspace
vot initialize tests/multiobject
cp /path/to/our/trackers.ini /path/to/VOTS23_workspace/trackers.ini
bash run.sh
```
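For reference, the tracker entry registered in trackers.ini talks to the toolkit through the vot Python integration. A minimal multi-object skeleton of that loop, modeled on the official vot-toolkit example; `DummyTracker` is a placeholder for the point where HQTrack would run VMOS and the Mask Refiner:

```python
import cv2
import vot  # provided by vot-toolkit

class DummyTracker:
    """Placeholder: HQTrack's wrapper would run VMOS + the Mask Refiner here."""
    def __init__(self, image, mask):
        self.mask = mask

    def track(self, image):
        return self.mask

handle = vot.VOT("mask", multiobject=True)
objects = handle.objects()            # one initial mask per target
image = cv2.imread(handle.frame())    # initialization frame
trackers = [DummyTracker(image, obj) for obj in objects]

while True:
    imagefile = handle.frame()
    if not imagefile:
        break
    image = cv2.imread(imagefile)
    handle.report([t.track(image) for t in trackers])
```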
In stage 1, we pre-train VMOS on synthetic video sequences generated from static image datasets. We refer readers to AFB-URR for preparing the pre-training data. The Static dataset should be put under `/path/to/HQTrack/datasets/`.
Download the InternImage pretrained backbone, put it under `/path/to/HQTrack/pretrain_models/`, and convert it for HQTrack with:

```bash
python my_tools/transfer_intern_pretrained_model.py
```

The stage-1 settings live in `/path/to/HQTrack/configs/pre.py`. Launch pre-training with:

```bash
CUDA_VISIBLE_DEVICES="1" python tools/train.py --amp \
    --exp_name "Static_Pre" \
    --stage "pre" \
    --model "internT_msdeaotl_v2" \
    --gpu_num "1"
```
In stage 2, video multi-object segmentation datasets are employed for training, e.g., DAVIS and YouTube-VOS. Put them under `/path/to/HQTrack/datasets/` and launch training with:

```bash
CUDA_VISIBLE_DEVICES="1" python tools/train.py --amp \
    --exp_name "default" \
    --stage "ytb_vip_dav_deaot_internT" \
    --model "internT_msdeaotl_v2" \
    --gpu_num "1"
```
You can include more training datasets such as VIPSeg, BURST, MOTS, and OVIS for better performance.
If you find HQTrack useful, please consider citing: :mega:
```bibtex
@misc{hqtrack,
      title={Tracking Anything in High Quality},
      author={Jiawen Zhu and Zhenyu Chen and Zeqi Hao and Shijie Chang and Lu Zhang and Dong Wang and Huchuan Lu and Bin Luo and Jun-Yan He and Jin-Peng Lan and Hanyuan Chen and Chenyang Li},
      year={2023},
      eprint={2307.13974},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
This project is based on DeAOT, HQ-SAM, and SAM. Thanks to the authors for these excellent works.
If you have any questions, feel free to email jiawen@mail.dlut.edu.cn. ^_^