This repository provides the training and evaluation code for interpretable face recognition. For details, please refer to the associated paper, Towards Interpretable Face Recognition.
python main_pretrain_CASIA.py --dataset CASIA --batch_size 64 --is_train True --learning_rate 0.001 --image_size 96 --is_with_y True --gf_dim 32 --df_dim 32 --dfc_dim 320 --gfc_dim 320 --z_dim 20 --checkpoint_dir ./Interp_FR --gpu 0
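The flags in the command above map naturally onto an argparse configuration. Below is a minimal sketch of how such a parser could look; the flag names and values are taken from the command, but the actual definitions (types, defaults) in main_pretrain_CASIA.py may differ:

```python
import argparse

def build_parser():
    """Sketch of a parser mirroring the training flags shown above.
    This is an illustration, not the repository's actual definition."""
    p = argparse.ArgumentParser(description="Pretrain on CASIA")
    str2bool = lambda s: s.lower() == "true"  # so "--is_train True" parses as a bool
    p.add_argument("--dataset", type=str, default="CASIA")
    p.add_argument("--batch_size", type=int, default=64)
    p.add_argument("--is_train", type=str2bool, default=False)
    p.add_argument("--learning_rate", type=float, default=0.001)
    p.add_argument("--image_size", type=int, default=96)
    p.add_argument("--is_with_y", type=str2bool, default=True)
    p.add_argument("--gf_dim", type=int, default=32)
    p.add_argument("--df_dim", type=int, default=32)
    p.add_argument("--dfc_dim", type=int, default=320)
    p.add_argument("--gfc_dim", type=int, default=320)
    p.add_argument("--z_dim", type=int, default=20)
    p.add_argument("--checkpoint_dir", type=str, default="./Interp_FR")
    p.add_argument("--gpu", type=int, default=0)
    return p

# Parse a subset of the flags from the command above.
args = build_parser().parse_args(
    "--dataset CASIA --batch_size 64 --is_train True --learning_rate 0.001".split()
)
```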
Evaluate the face recognition performance:
python test_pre_train.py
If you want to reproduce the results on natural and synthetic occluded faces, there are two options: create your own synthetic occlusions, or filter all the naturally occluded faces from IJB-A/IJB-C.
During training, we randomly generate a black window for each face image. You can therefore apply the same procedure to the IJB-A benchmark to generate your own synthetic testing faces. In the test\eval directory, run: python gen_syn_occl.py
A folder named IJB-A_occl will then be generated, containing all the synthetically occluded IJB-A faces.
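The random black-window occlusion described above can be sketched as follows. The window-size bounds here are illustrative assumptions, not the values gen_syn_occl.py actually uses:

```python
import numpy as np

def occlude_with_black_window(image, rng, min_frac=0.2, max_frac=0.5):
    """Zero out a randomly placed rectangular window in an HxWxC face image.
    Window side lengths are drawn between min_frac and max_frac of the image
    size -- illustrative bounds, not the repository's actual settings."""
    h, w = image.shape[:2]
    win_h = rng.integers(int(min_frac * h), int(max_frac * h) + 1)
    win_w = rng.integers(int(min_frac * w), int(max_frac * w) + 1)
    top = rng.integers(0, h - win_h + 1)
    left = rng.integers(0, w - win_w + 1)
    out = image.copy()
    out[top:top + win_h, left:left + win_w] = 0  # the "black window"
    return out

rng = np.random.default_rng(0)
face = np.full((96, 96, 3), 255, dtype=np.uint8)  # a dummy 96x96 face image
occluded = occlude_with_black_window(face, rng)
```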
For natural occlusion, both IJB-A and IJB-C provide protocols from which we can derive the occlusion annotations. In the test\eval directory we provide two MATLAB scripts, CNN_single_verify_subset.m and CNN_single_search_subset.m. After you have obtained the .txt file of IJB-A features, you can run these two scripts to evaluate the performance. Note that the evaluation protocol of IJB-C differs from that of IJB-A: after preprocessing the IJB-C images with test\eval\process.m, you will get the occluded-faces list file, IJBC_occluded_faces_path.txt. According to this list, you can then use the model to generate the IJB-C features. Finally, running test\eval\verify.m and test\eval\search.m gives the naturally occluded face recognition performance on IJB-C.
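Bridging the Python model and the MATLAB evaluation scripts requires writing the features to a text file in the order given by the list file. A minimal sketch is below; the layout (one whitespace-separated vector per line) is an assumption, so check what the scripts actually expect:

```python
import os
import tempfile
import numpy as np

def save_features_txt(list_path, features, out_path):
    """Write one feature vector per line, in the same order as the image
    paths in list_path (e.g. IJBC_occluded_faces_path.txt). The text layout
    is an assumption -- verify it against what verify.m / search.m read."""
    with open(list_path) as f:
        paths = [line.strip() for line in f if line.strip()]
    assert len(paths) == len(features), "one feature vector per listed image"
    with open(out_path, "w") as f:
        for vec in features:
            f.write(" ".join(f"{v:.6f}" for v in vec) + "\n")

# Tiny demonstration with two fake entries and 4-D features.
tmpdir = tempfile.mkdtemp()
list_path = os.path.join(tmpdir, "IJBC_occluded_faces_path.txt")
with open(list_path, "w") as f:
    f.write("img/0001.jpg\nimg/0002.jpg\n")
feats = np.arange(8, dtype=float).reshape(2, 4)
out_path = os.path.join(tmpdir, "ijbc_features.txt")
save_features_txt(list_path, feats, out_path)
```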
Another interesting natural-occlusion benchmark is the AR database, from which we select all face images with heavy occlusions such as sunglasses and scarves. There are 810 images in total; the image list, ar_occl_list.txt, is also in the test\eval directory. In the paper, we randomly construct same- and different-identity pairs to compute the EER. You may repeat this process 10 or more times and report the average.
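The pairing-and-EER protocol described above can be sketched as follows. Cosine similarity as the score and the pair counts per repeat are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal error rate: sweep a threshold over all observed scores and
    return the point where false-accept and false-reject rates balance."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best = 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostor pairs accepted
        frr = np.mean(genuine_scores < t)    # genuine pairs rejected
        best = min(best, max(far, frr))
    return best

def mean_eer(features, labels, n_pairs=500, n_repeats=10, seed=0):
    """Randomly draw same/different identity pairs, score them with cosine
    similarity, and average the EER over n_repeats draws (the paper repeats
    the pairing 10 or more times; pair counts here are an assumption)."""
    rng = np.random.default_rng(seed)
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    eers = []
    for _ in range(n_repeats):
        gen, imp = [], []
        while len(gen) < n_pairs or len(imp) < n_pairs:
            i, j = rng.integers(0, len(labels), size=2)
            if i == j:
                continue
            score = float(feats[i] @ feats[j])
            (gen if labels[i] == labels[j] else imp).append(score)
        eers.append(eer(np.array(gen[:n_pairs]), np.array(imp[:n_pairs])))
    return float(np.mean(eers))
```

With well-separated identity clusters the averaged EER should be close to zero; heavy occlusion degrades the features and pushes it up.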
Visualize the average locations of the peak responses:
python test_pre_train.py
To do this, comment out the line extract_features_for_eval() in the script.
In this repository, we only provide the frozen version of our model; for the base CNN and the spatial-only models, you can use the provided freeze_my_model.py to freeze the required models.
If you use our model or code in your research, please cite the paper:
@inproceedings{InterpretFR2018,
title={Towards Interpretable Face Recognition},
author={Bangjie Yin and Luan Tran and Haoxiang Li and Xiaohui Shen and Xiaoming Liu},
booktitle={arXiv preprint arXiv:1805.00611},
year={2018}
}