Face Normalization Model
==
A PyTorch implementation of *Unsupervised Face Normalization with Extreme Pose and Expression in the Wild* by Yichen Qian, Weihong Deng, and Jiani Hu.
Here are some example results produced by fnm.pytorch.
Pre-requisites
--
- python3
- CUDA 9.0 or higher
- PyTorch (follow the installation instructions on the official website, or install with `pip install torch torchvision`)
- numpy
- pillow
- matplotlib
- tensorboardX
- pandas
- scipy
Datasets
--
- Download a face dataset such as CASIA-WebFace, VGGFace2, or MS-Celeb-1M as the source set; any constrained (in-house) dataset can serve as the normal set.
- All face images are normalized to 250x250 according to facial landmarks. For the five facial points, please follow the alignment protocol in LightCNN. Crop code based on MTCNN is also provided, as shown below.
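For intuition, five-point alignment can be sketched as a least-squares similarity transform (Umeyama's method) mapping the detected landmarks onto a fixed 250x250 template. The template coordinates below are hypothetical placeholders, not the exact LightCNN reference points; in practice the provided `face_align.py` (MTCNN) handles this step.

```python
import numpy as np

# Hypothetical five-point template for a 250x250 crop (left eye, right eye,
# nose tip, left/right mouth corner). Replace with the actual LightCNN template.
REF_POINTS = np.array([
    [85.0, 110.0],
    [165.0, 110.0],
    [125.0, 145.0],
    [92.0, 180.0],
    [158.0, 180.0],
])

def umeyama_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src points onto dst points (Umeyama, 1991). Returns a 2x3
    affine matrix usable with cv2.warpAffine(img, M, (250, 250))."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)            # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                              # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])   # 2x3 [sR | t]
```

Given landmarks detected by MTCNN, `umeyama_similarity(detected, REF_POINTS)` produces the warp matrix that places the face into the canonical 250x250 frame.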
Training and Inference
--
- Clone the repository to preserve the directory structure.
- Download the face expert model and put it in the /Pretrained/VGGFace2/ directory.
- Change the directory to /FaceAlignment/ (cd FaceAlignment), then crop and align the input face images by running:

  python face_align.py
- Train the face normalization model by running:

  python main.py -front-list {} -profile-list {}
- A simple test script is also provided, which generates the normalized faces and extracts features:

  python main.py -generate -gen-list {} -snapshot {your trained model}
Note that you need to define the CSV files of the source/normal/generate data roots for training/testing.
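As a sketch of what such a file list might contain, the snippet below writes a minimal CSV of image paths and identity labels. The column names (`image_path`, `label`) and paths are assumptions for illustration only; check the data-loading code in main.py for the exact format it expects.

```python
import csv

# Hypothetical rows: one image path per line plus an identity label.
# The exact columns expected by main.py's loader may differ.
rows = [
    {"image_path": "data/profile/0001/img_001.jpg", "label": "0"},
    {"image_path": "data/profile/0001/img_002.jpg", "label": "0"},
    {"image_path": "data/profile/0002/img_001.jpg", "label": "1"},
]

with open("profile_list.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["image_path", "label"])
    writer.writeheader()
    writer.writerows(rows)
```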
To-do list
--
- [x] Released the training code.
- [x] Released the evaluation code.