Pose-guided Inter- and Intra-part Relational Transformer for Occluded Person Re-Identification (official implementation)
This repository contains the code for the paper: "Pose-guided Inter- and Intra-part Relational Transformer for Occluded Person Re-Identification", Zhongxing Ma, Yifan Zhao, Jia Li, ACM International Conference on Multimedia (ACM MM), 2021.
Working directory: /your/path/to/fast-reid/
Training:
python -u tools/train_net.py --config-file configs/Pirt.yml --num-gpus 2 OUTPUT_DIR logs/your/custom/path
Evaluation:
python -u tools/train_net.py --eval-only --config-file configs/eval.yml --num-gpus 2 OUTPUT_DIR logs/your/custom/path
The config file of the model is located at ./configs/Pirt.yml.
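If you prefer to launch training from Python instead of the command line, here is a minimal sketch following the usual fastreid entry-point pattern (the get_cfg / DefaultTrainer API used by tools/train_net.py; exact import paths may differ across fastreid versions, so treat this as illustrative only):

```python
# Illustrative only: mirrors what tools/train_net.py does with configs/Pirt.yml.
from fastreid.config import get_cfg
from fastreid.engine import DefaultTrainer


def setup(config_file, output_dir):
    """Build a frozen config from a YAML file plus a few overrides."""
    cfg = get_cfg()
    cfg.merge_from_file(config_file)                 # e.g. "configs/Pirt.yml"
    cfg.merge_from_list(["OUTPUT_DIR", output_dir])  # same override as the CLI
    cfg.freeze()
    return cfg


if __name__ == "__main__":
    cfg = setup("configs/Pirt.yml", "logs/your/custom/path")
    trainer = DefaultTrainer(cfg)
    trainer.resume_or_load(resume=False)  # start from the pretrained backbone
    trainer.train()
```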
The OccludedDuke or Market-1501 dataset should be placed under ./datasets/ (e.g., ./datasets/OccludedDuke).
See the ./fastreid/data/datasets folder for the detailed dataset configuration.
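Each dataset in that folder follows the same registration pattern. A minimal sketch of adding a custom dataset, assuming the DATASET_REGISTRY / ImageDataset names used by the fastreid version this repo builds on (the class name MyOccludedSet and the <pid>_c<camid>_xxx.jpg file naming are hypothetical):

```python
import glob
import os.path as osp

from fastreid.data.datasets import DATASET_REGISTRY
from fastreid.data.datasets.bases import ImageDataset


@DATASET_REGISTRY.register()
class MyOccludedSet(ImageDataset):
    """Hypothetical dataset whose images are named <pid>_c<camid>_xxx.jpg."""
    dataset_dir = "MyOccludedSet"  # expected under ./datasets/

    def __init__(self, root="datasets", **kwargs):
        self.dataset_dir = osp.join(root, self.dataset_dir)
        train = self._parse(osp.join(self.dataset_dir, "bounding_box_train"))
        query = self._parse(osp.join(self.dataset_dir, "query"))
        gallery = self._parse(osp.join(self.dataset_dir, "bounding_box_test"))
        super().__init__(train, query, gallery, **kwargs)

    def _parse(self, dir_path):
        # Return a list of (image_path, person_id, camera_id) tuples.
        data = []
        for img_path in sorted(glob.glob(osp.join(dir_path, "*.jpg"))):
            pid, cam = osp.basename(img_path).split("_")[:2]
            data.append((img_path, int(pid), int(cam.lstrip("c"))))
        return data
```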
The pretrained pose estimation and backbone models should be placed at ../models_zoo/: resnet50, resnet50-ibn, and pose_hrnet_w48_256x192.
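A quick way to sanity-check that the pretrained weights are in place before launching training; the exact file names and extensions below are assumptions, so adjust them to match your downloaded checkpoints:

```python
import os.path as osp

MODELS_ZOO = "../models_zoo"
# Hypothetical file names; rename to whatever your downloaded checkpoints are called.
EXPECTED = [
    "resnet50.pth",
    "resnet50_ibn_a.pth",
    "pose_hrnet_w48_256x192.pth",
]

for name in EXPECTED:
    path = osp.join(MODELS_ZOO, name)
    print(("found   " if osp.isfile(path) else "MISSING ") + path)
```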
@misc{ma2021poseguided,
title={Pose-guided Inter- and Intra-part Relational Transformer for Occluded Person Re-Identification},
author={Zhongxing Ma and Yifan Zhao and Jia Li},
year={2021},
eprint={2109.03483},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Our code is based on an early version of FastReID, an excellent repository for beginners; you can find more details of the framework there.