VLAD-VSA: Cross-Domain Face Presentation Attack Detection with Vocabulary Separation and Adaptation
This code is mainly based on the implementation of SSDG; special thanks to Yunpei Jia.
python 3.6
pytorch 0.4
torchvision 0.2
Download the OULU-NPU, CASIA-FASD, Idiap Replay-Attack, and MSU-MFSD datasets. Download the pretrained model and put it in a newly created pretrained_model folder.
Detect, crop, and resize faces to 256x256x3 using MTCNN.
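As a rough, hypothetical sketch of this step (the repo does not specify which MTCNN implementation it uses; facenet-pytorch is assumed here, and crop_faces is an illustrative helper, not part of this repo):

# Hypothetical preprocessing sketch; the repo's actual MTCNN pipeline may differ.
from facenet_pytorch import MTCNN
from PIL import Image
import os

# image_size=256 matches the 256x256x3 crops described above.
mtcnn = MTCNN(image_size=256, margin=0, post_process=False)

def crop_faces(frame_dir, out_dir):
    """Detect the face in every frame and save a 256x256 crop."""
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(frame_dir)):
        img = Image.open(os.path.join(frame_dir, name)).convert('RGB')
        # mtcnn(...) returns the cropped face tensor and writes it to save_path;
        # frames where no face is detected are silently skipped (returns None).
        mtcnn(img, save_path=os.path.join(out_dir, name))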
Specifically, every frame of each video is processed as above, and the sample_frames function in utils/utils.py is then used to sample frames during training.
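The actual sampling logic lives in utils/utils.py; as an illustration only, a uniform sampler might look like the following (sample_frames_uniform is a hypothetical name, not the repo's function):

# Illustrative only: evenly sample k frame paths from a video's frame list.
# The real sample_frames in utils/utils.py may use a different strategy.
def sample_frames_uniform(frame_paths, k):
    if len(frame_paths) <= k:
        return list(frame_paths)
    step = len(frame_paths) / float(k)
    return [frame_paths[int(i * step)] for i in range(k)]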
Put the processed frames in the path root/data/dataset_name.
Data Label Generation.
Move to root/data_label and generate the data label list:
python generate_label.py
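As a hedged illustration of what such a script might produce (the directory layout, field names, and output format below are assumptions, not necessarily what generate_label.py actually emits):

# Hypothetical label-list generator; the real/attack directory layout and the
# photo_path/photo_label fields are assumptions, not the script's actual format.
import os
import json

def make_label_list(data_root, dataset_name, out_path):
    """Write one JSON record per frame: its path and a real/attack label."""
    entries = []
    for label_dir, label in [('real', 1), ('attack', 0)]:  # assumed layout
        folder = os.path.join(data_root, dataset_name, label_dir)
        for name in sorted(os.listdir(folder)):
            entries.append({'photo_path': os.path.join(folder, name),
                            'photo_label': label})
    with open(out_path, 'w') as f:
        json.dump(entries, f)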
Training.
python experiment/.../train_vlad_baseline.py
python experiment/.../train_vlad_baseline2.py
python experiment/.../train_vlad_baseline3.py
The file config.py contains all the hyper-parameters used during training. These parameters can be tuned, and better performance may be obtained.
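As a purely illustrative sketch of what such a config might contain (all names and values below are placeholders, not the repo's actual defaults):

# Illustrative hyper-parameter container; the real config.py
# defines its own names and values.
class DefaultConfig:
    lr = 1e-3            # initial learning rate
    batch_size = 32      # per-domain batch size
    max_iter = 4000      # training iterations
    num_clusters = 32    # VLAD vocabulary size (placeholder value)
    pretrained_model = './pretrained_model/resnet18.pth'  # assumed path

config = DefaultConfig()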
Testing.
python experiment/.../dg_test.py
Citation.
Please cite this paper if the code is helpful to your research.
@inproceedings{wang2021vlad,
title={VLAD-VSA: Cross-Domain Face Presentation Attack Detection with Vocabulary Separation and Adaptation},
author={Wang, Jiong and Zhao, Zhou and Jin, Weike and Duan, Xinyu and Lei, Zhen and Huai, Baoxing and Wu, Yiling and He, Xiaofei},
booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
pages={1497--1506},
year={2021}
}