Official code for "LandmarkGait: Intrinsic Human Parsing for Gait Recognition" (ACM MM 2023).
Several challenges arise when applying human parsing to gait recognition.
We propose LandmarkGait, an unsupervised parsing-based solution that extracts complete, part-level body representations directly from the original binary silhouettes for gait recognition. It consists of three modules: "Silhouette-to-Landmarks", "Landmarks-to-Parsing", and "Recognition".
Clone this repo:
git clone git@github.com:wzb-bupt/LandmarkGait.git
Install dependencies:
Install dependencies by Anaconda:
conda install tqdm pyyaml tensorboard opencv kornia einops six -c conda-forge
conda install pytorch==1.10 torchvision -c pytorch
Or, install dependencies by pip:
pip install tqdm pyyaml tensorboard opencv-python kornia einops six
pip install torch==1.10 torchvision==0.11
Prepare dataset
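LandmarkGait is built on OpenGait, so we assume the OpenGait data convention applies: raw CASIA-B silhouettes are organized as `subject/condition/view/frame.png` and then repacked into per-sequence `.pkl` files with OpenGait's pretreatment script. A minimal sketch of the expected layout (the directory names and the `pretreatment.py` invocation are assumptions from OpenGait; verify against your local copy):

```shell
# Assumed OpenGait-style layout: DATASET_ROOT/subject/condition/view/frame.png
DATASET_ROOT=./CASIA-B
mkdir -p "$DATASET_ROOT/001/nm-01/000"   # one example sequence directory

# Repack silhouettes into per-sequence pickles (script path from OpenGait,
# an assumption for this repo; run from the repo root):
# python datasets/pretreatment.py --input_path "$DATASET_ROOT" --output_path ./CASIA-B-pkl

ls -R "$DATASET_ROOT"
```

Point the `dataset_root` entry of the YAML configs below at the resulting pickle directory.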
To achieve better convergence, we first train LandmarkNet and ParsingNet sequentially to obtain spatio-temporally consistent landmarks and parsing parts. Subsequently, we fix the encoder of LandmarkNet, use the pre-trained weights to initialize these two networks, and jointly train the subsequent recognition network end-to-end for final gait recognition.
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/landmarkgait/LandmarkGait_Silh_to_Landmark.yaml --phase train --log_to_file
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/landmarkgait/LandmarkGait_Landmark_to_Parsing.yaml --phase train --log_to_file
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/landmarkgait/LandmarkGait_Recognition.yaml --phase train --log_to_file
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 opengait/main.py --cfgs ./configs/landmarkgait/LandmarkGait_Recognition.yaml --phase test --log_to_file
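For `--phase test`, the checkpoint to evaluate is selected inside the YAML config. A hypothetical snippet, assuming the OpenGait-style `evaluator_cfg` fields (`restore_hint`, `save_name` are OpenGait conventions, not confirmed for this repo; check `LandmarkGait_Recognition.yaml` for the actual keys and values):

```
evaluator_cfg:
  save_name: LandmarkGait   # assumed checkpoint prefix under output/
  restore_hint: 60000       # assumed iteration of the checkpoint to load
```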
If you find this codebase useful in your research, please consider citing:
@inproceedings{wang2023landmarkgait,
title={LandmarkGait: Intrinsic Human Parsing for Gait Recognition},
author={Wang, Zengbin and Hou, Saihui and Zhang, Man and Liu, Xu and Cao, Chunshui and Huang, Yongzhen and Xu, Shibiao},
booktitle={Proceedings of the 31st ACM International Conference on Multimedia (ACM MM)},
pages={2305--2314},
year={2023}
}
Our code is built upon the great open-source project OpenGait.