PyTorch evaluation code and pretrained models for SLPT (Sparse Local Patch Transformer).
Install Python dependencies:

```bash
pip3 install -r requirements.txt
```
Download and process the WFLW dataset, then place it in the `./Dataset` directory. Your directory should look like this:
```
SLPT
└───Dataset
    └───WFLW
        ├───WFLW_annotations
        │   ├───list_98pt_rect_attr_train_test
        │   └───list_98pt_test
        └───WFLW_images
            ├───0--Parade
            └───...
```
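As a quick sanity check, the sketch below verifies that the expected files and folders are in place. The paths are taken from the tree above, not from the repository's code; adjust them if your layout differs.

```python
from pathlib import Path

# Paths taken from the directory tree above.
ROOT = Path("./Dataset/WFLW")
expected = [
    ROOT / "WFLW_annotations" / "list_98pt_rect_attr_train_test",
    ROOT / "WFLW_annotations" / "list_98pt_test",
    ROOT / "WFLW_images",
]

for path in expected:
    status = "ok" if path.exists() else "MISSING"
    print(f"{status:7s} {path}")
```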
Download the pretrained models from Google Drive.
| # | Model Name | NME (%) | FR<sub>0.1</sub> (%) | AUC<sub>0.1</sub> | Download Link |
|---|------------|---------|----------------------|-------------------|---------------|
| 1 | SLPT-6-layers | 4.143 | 2.760 | 0.595 | download |
| 2 | SLPT-12-layers | 4.128 | 2.720 | 0.596 | download |
Put the model in the `./Weight` directory.
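If you want to confirm the download before running the test script, a standard PyTorch checkpoint can be inspected as in the minimal sketch below. The key layout inside the file is an assumption, not documented here; the file name is taken from the example in the Test section.

```python
import torch

# Load on CPU just to inspect the file; no GPU is needed for this.
ckpt = torch.load("./Weight/WFLW_6_layer.pth", map_location="cpu")

# A checkpoint is typically either a raw state_dict or a dict that wraps one;
# printing the top-level keys shows which layout this file uses.
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])
```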
## Test

```bash
python test.py --checkpoint=<model_name>
```

For example:

```bash
python test.py --checkpoint=WFLW_6_layer.pth
```
Note: if you want to use the 12-layer model, you need to change `_C.TRANSFORMER.NUM_DECODER` from 6 to 12 in `./Config/default.py`.
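Judging by the `_C` naming, the config presumably follows the usual yacs pattern; a minimal sketch of what the edited line looks like in context is below. The surrounding node definitions are assumptions inferred from the attribute path, and in the real `./Config/default.py` only the final line needs to change.

```python
from yacs.config import CfgNode as CN

# Assumed structure, inferred from the attribute path _C.TRANSFORMER.NUM_DECODER.
_C = CN()
_C.TRANSFORMER = CN()
_C.TRANSFORMER.NUM_DECODER = 12  # change from 6 to 12 for the 12-layer model
```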
We also provide a video demo script. Download the face detector weights `yunet_final.pth`, put the file in `./Weight/Face_Detector/`, and run:

```bash
python Camera.py --video_source=<Video Path>
```
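For orientation, the kind of frame loop `Camera.py` runs might look like the sketch below. This is an illustration built on OpenCV, not the repository's actual code; the detection and landmark steps are placeholder comments.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # or 0 for a webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Placeholder: run the YuNet face detector on the frame, crop each
    # detected face, and feed the crop to SLPT to predict the 98 landmarks.
    cv2.imshow("SLPT demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```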
If you find this work or code helpful in your research, please cite:

```
@inproceedings{SLPT,
  title={Sparse Local Patch Transformer for Robust Face Alignment and Landmarks Inherent Relation Learning},
  author={Xia, Jiahao and Qu, Weiwei and Huang, Wenjian and Zhang, Jianguo and Wang, Xi and Xu, Min},
  booktitle={CVPR},
  year={2022}
}
```
SLPT is released under the GPL-2.0 license. Please see the LICENSE file for more information.