
DDAG

PyTorch code of DDAG for Visible-Infrared Person Re-Identification, ECCV 2020. PDF

A Huawei MindSpore implementation of our DDAG method is available HERE. Thanks to Zhiwei Zhang (zhangzw12319@163.com).

Highlight

The goal of this work is to learn a robust and discriminative cross-modality representation for visible-infrared person re-identification.

Results on the SYSU-MM01 Dataset

| Method  | Dataset                   | Rank@1  | mAP     | mINP    |
| ------- | ------------------------- | ------- | ------- | ------- |
| AGW [1] | SYSU-MM01 (All-Search)    | ~47.50% | ~47.65% | ~35.30% |
| DDAG    | SYSU-MM01 (All-Search)    | ~54.75% | ~53.02% | ~39.62% |
| AGW [1] | SYSU-MM01 (Indoor-Search) | ~54.17% | ~62.97% | ~59.23% |
| DDAG    | SYSU-MM01 (Indoor-Search) | ~61.02% | ~67.98% | ~62.61% |

*The code has been tested with Python 3.7 and PyTorch 1.0. Results on both datasets may fluctuate slightly due to random splitting.

1. Prepare the datasets.

2. Training.

Train a model by

python train_ddag.py --dataset sysu --lr 0.1 --graph --wpa --part 3 --gpu 0

You may need to manually define the data path first (see the example sketch below).
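
A minimal sketch of what defining the data path might look like, assuming a data_path-style variable near the top of train_ddag.py; the variable names and directories below are illustrative only, so check the script for the actual ones:

# Illustrative only: point the training script at your local dataset copies.
# Variable names and paths are assumptions, not the repository's exact layout.
data_path = '../Datasets/SYSU-MM01/'    # root of the (pre-processed) SYSU-MM01 data
data_path_regdb = '../Datasets/RegDB/'  # root of the RegDB data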

3. Testing.

Test a model on the SYSU-MM01 or RegDB dataset by

python test_ddag.py --dataset sysu --mode all --wpa --graph --gpu 1 --resume 'model_path' 
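
For RegDB, a sketch of the corresponding command, reusing only the flags shown above (the --mode option refers to the SYSU-MM01 search modes, so it is dropped here; check test_ddag.py for any RegDB-specific options):

python test_ddag.py --dataset regdb --wpa --graph --gpu 0 --resume 'model_path'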

4. Citation

Please kindly cite the following references in your publications if they help your research:

@inproceedings{eccv20ddag,
  title={Dynamic Dual-Attentive Aggregation Learning for Visible-Infrared Person Re-Identification},
  author={Ye, Mang and Shen, Jianbing and Crandall, David J. and Shao, Ling and Luo, Jiebo},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020},
}
@article{arxiv20reidsurvey,
  title={Deep Learning for Person Re-identification: A Survey and Outlook},
  author={Ye, Mang and Shen, Jianbing and Lin, Gaojie and Xiang, Tao and Shao, Ling and Hoi, Steven C. H.},
  journal={arXiv preprint arXiv:2001.04193},
  year={2020},
}

Contact: mangye16@gmail.com