[🔥News🔥] DCFNet has been accepted by JCST. If you find DCFNet useful in your research, please consider citing:
@Article{JCST-2309-13788,
  title   = {DCFNet: Discriminant Correlation Filters Network for Visual Tracking},
  journal = {Journal of Computer Science and Technology},
  year    = {2023},
  issn    = {1000-9000(Print)/1860-4749(Online)},
  doi     = {10.1007/s11390-023-3788-3},
  author  = {Wei-Ming Hu and Qiang Wang and Jin Gao and Bing Li and Stephen Maybank}
}
This repository contains a Python (PyTorch) reimplementation of DCFNet.
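At its core, DCFNet embeds a discriminant correlation filter (DCF) as a differentiable layer solved in closed form in the Fourier domain. As a framework-free illustration of that closed-form solution (plain NumPy on single-channel patches; the function and variable names are mine, not the repository's):

```python
import numpy as np

def gaussian_label(size, sigma=2.0):
    # Desired response: a Gaussian peak at the origin (FFT convention),
    # wrapped circularly so a shift of the input shifts the peak.
    d = np.minimum(np.arange(size), size - np.arange(size))
    return np.exp(-0.5 * (d[:, None] ** 2 + d[None, :] ** 2) / sigma ** 2)

def dcf_response(x, z, lam=1e-4, sigma=2.0):
    """Solve the ridge-regression correlation filter on training patch x
    in the Fourier domain, then correlate it with search patch z."""
    y = gaussian_label(x.shape[0], sigma)
    X, Y, Z = np.fft.fft2(x), np.fft.fft2(y), np.fft.fft2(z)
    H = (Y * np.conj(X)) / (X * np.conj(X) + lam)  # closed-form filter
    return np.real(np.fft.ifft2(Z * H))            # correlation response map
```

Because every step is an element-wise Fourier-domain operation, the whole solver is differentiable, which is what lets DCFNet backpropagate through the filter to learn the feature extractor end to end.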
git clone --depth=1 https://github.com/foolwood/DCFNet_pytorch
Requirements: PyTorch 0.4.0 and opencv-python.
conda install pytorch torchvision -c pytorch
conda install -c menpo opencv
Datasets: training data (ILSVRC VID) and test dataset (OTB).
cd DCFNet_pytorch/track
ln -s /path/to/your/OTB2015 ./dataset/OTB2015
ln -s ./dataset/OTB2015 ./dataset/OTB2013
cd dataset && python gen_otb2013.py
python DCFNet.py
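`DCFNet.py` localizes the target at the peak of the correlation response map. Because the response comes from a circular (FFT-based) correlation, an argmax past the midpoint wraps around to a negative displacement; a minimal sketch of that conversion (the helper name is mine, not the script's):

```python
import numpy as np

def peak_to_shift(response):
    """Convert the argmax of a circular correlation response map into a
    signed (dy, dx) displacement of the target center."""
    h, w = response.shape
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    # Indices past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```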
Download the training data (ILSVRC2015 VID) and arrange it as follows:
./ILSVRC2015
├── Annotations
│   └── VID
│       ├── a -> ./ILSVRC2015_VID_train_0000
│       ├── b -> ./ILSVRC2015_VID_train_0001
│       ├── c -> ./ILSVRC2015_VID_train_0002
│       ├── d -> ./ILSVRC2015_VID_train_0003
│       ├── e -> ./val
│       ├── ILSVRC2015_VID_train_0000
│       ├── ILSVRC2015_VID_train_0001
│       ├── ILSVRC2015_VID_train_0002
│       ├── ILSVRC2015_VID_train_0003
│       └── val
├── Data
│   └── VID ........... same as Annotations
└── ImageSets
    └── VID
Prepare the training data for the dataloader:
cd DCFNet_pytorch/train/dataset
python parse_vid.py <VID_path> # save all vid info in a single json
python gen_snippet.py # generate snippets
python crop_image.py # crop and generate a json for dataloader
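`crop_image.py` crops target-centered training patches and writes a json index for the dataloader. Judging by the `crop_125_2.0` work-directory name, patches are 125×125 with a 2.0 padding factor; a hedged NumPy sketch of such a target-centered crop (the function name and exact cropping rule are my assumptions, not the script's code):

```python
import numpy as np

def crop_patch(img, bbox, padding=2.0, out_size=125):
    """Crop a square region of side padding * sqrt(w * h) centered on the
    target, replicating edge pixels outside the image, then resize
    (nearest-neighbour here, to stay dependency-free) to out_size."""
    cx, cy, w, h = bbox
    crop_sz = max(int(round(np.sqrt(w * h) * padding)), 1)
    x0 = int(round(cx - crop_sz / 2))
    y0 = int(round(cy - crop_sz / 2))
    # Gather rows/cols, clamping indices so out-of-image pixels replicate the edge.
    ys = np.clip(np.arange(y0, y0 + crop_sz), 0, img.shape[0] - 1)
    xs = np.clip(np.arange(x0, x0 + crop_sz), 0, img.shape[1] - 1)
    patch = img[np.ix_(ys, xs)]
    # Nearest-neighbour resize to the network input size.
    idx = (np.arange(out_size) * crop_sz / out_size).astype(int)
    return patch[np.ix_(idx, idx)]
```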
Training (on multiple GPUs :zap: :zap: :zap: :zap:).
cd DCFNet_pytorch/train/
CUDA_VISIBLE_DEVICES=0,1,2,3 python train_DCFNet.py
After training, you can simply test the model with the default parameters.
cd DCFNet_pytorch/track/
python DCFNet.py --model ../train/work/crop_125_2.0/checkpoint.pth.tar
Search for better hyper-parameters.
CUDA_VISIBLE_DEVICES=0 python tune_otb.py # run in parallel to speed up the search
python eval_otb.py OTB2013 '*' 0 10000
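`eval_otb.py` scores results with the standard OTB precision metric: the fraction of frames whose predicted target center falls within a pixel threshold (20 px in the OTB plots) of the ground truth. A minimal sketch of that metric (the function name is mine):

```python
import numpy as np

def precision(pred_centers, gt_centers, threshold=20.0):
    """Fraction of frames whose center-location error is within `threshold` px."""
    errors = np.linalg.norm(
        np.asarray(pred_centers, dtype=float) - np.asarray(gt_centers, dtype=float),
        axis=1,
    )
    return float(np.mean(errors <= threshold))
```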
If you find DCFNet useful in your research, please consider citing:
@article{wang2017dcfnet,
  title   = {DCFNet: Discriminant Correlation Filters Network for Visual Tracking},
  author  = {Wang, Qiang and Gao, Jin and Xing, Junliang and Zhang, Mengdan and Hu, Weiming},
  journal = {arXiv preprint arXiv:1704.04057},
  year    = {2017}
}