
FaceExtraction

FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction

Our paper was accepted at TAIMA 2022.

Occlusions often occur in face images in the wild, complicating face-related tasks such as landmark detection, 3D reconstruction, and face recognition. It is therefore beneficial to extract face regions accurately from unconstrained face images. However, current face segmentation datasets suffer from small data volumes, few occlusion types, low resolution, and imprecise annotation, limiting the performance of data-driven algorithms. This paper proposes a novel face occlusion dataset with manually labeled face occlusions from CelebA-HQ and the internet. The occlusion types cover sunglasses, spectacles, hands, masks, scarves, microphones, etc. To the best of our knowledge, it is by far the largest and most comprehensive face occlusion dataset. Combining it with the attribute masks in CelebAMask-HQ, we trained a straightforward face segmentation model and obtained SOTA performance, convincingly demonstrating the effectiveness of the proposed dataset.

Requirements

How to use

  1. Download the CelebAMask-HQ dataset and detect the facial landmarks using 3DDFAv2
  2. Specify the directories in face_align/process_CelebAMaskHQ.py
  3. Run face_align/process_CelebAMaskHQ.py to generate and align the CelebAMask-HQ images and masks
  4. Download FaceOcc and put it under the Dataset directory
  5. Run train.py
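After step 4, training expects the aligned images and the FaceOcc masks to line up by filename. The helper below is a minimal sketch of that pairing step, not part of the repository: the directory names and file extensions are assumptions, so adapt them to whatever paths you configured in face_align/process_CelebAMaskHQ.py.

```python
from pathlib import Path


def pair_images_and_masks(img_dir, mask_dir, exts=(".jpg", ".png")):
    """Pair aligned face images with occlusion masks by filename stem.

    img_dir / mask_dir are hypothetical locations; the actual layout
    depends on the paths set in face_align/process_CelebAMaskHQ.py.
    Returns a sorted list of (image_path, mask_path) tuples, skipping
    images that have no matching mask.
    """
    masks = {p.stem: p for p in Path(mask_dir).iterdir()
             if p.suffix.lower() in exts}
    pairs = []
    for img in sorted(Path(img_dir).iterdir()):
        if img.suffix.lower() in exts and img.stem in masks:
            pairs.append((img, masks[img.stem]))
    return pairs
```

A quick sanity check like `len(pair_images_and_masks(...))` before launching train.py catches misconfigured directories early.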

Dataset

FaceOcc

Pretrained Model

Results

Face masks are shown in blue. From top to bottom: input images, predicted masks, and the ground truth.
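To reproduce this kind of visualization from a predicted binary mask, a simple alpha blend suffices. This is a hedged sketch (the function name, blend factor, and blue color are my choices, not the repository's code); it assumes an HxWx3 uint8 RGB image and an HxW binary mask.

```python
import numpy as np


def overlay_mask(image, mask, color=(0, 0, 255), alpha=0.5):
    """Blend a binary face mask over an RGB image (mask shown in blue,
    as in the result figures).

    image: HxWx3 uint8 array; mask: HxW array of 0/1 or booleans.
    Pixels where mask is set are mixed with `color` by factor `alpha`.
    """
    out = image.astype(np.float32)
    m = mask.astype(bool)
    out[m] = (1.0 - alpha) * out[m] + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)
```

Keeping alpha below 1.0 leaves the underlying face visible, which makes segmentation errors easy to spot against the ground truth.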

Related Works

License

This dataset, as well as the pretrained face extraction model, is licensed under the MIT License. You are free to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the data, and to permit persons to whom the data is furnished to do so, subject to the conditions stated in the license.

For more details about the MIT License, please see the full text.

Citation

If you use our dataset, please cite the following works:

Xiangnan Yin, Liming Chen, "FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face Extraction", Traitement et Analyse de l'Information Méthodes et Applications (TAIMA 2022), 28 May-2 June 2022, Hammamet, Tunisia. arXiv: 2201.08425, HAL: hal-03540753.

Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen, "Segmentation-Reconstruction-Guided Facial Image De-occlusion", 17th IEEE Intl. Conference on Automatic Face and Gesture Recognition (FG 2023), January 5-8, 2023, Hawaii, USA. Find the video presentation here.

Xiangnan Yin, Di Huang, Zehua Fu, Yunhong Wang, Liming Chen, "Weakly Supervised Photo-Realistic Texture Generation for 3D Face Reconstruction", 17th IEEE Intl. Conference on Automatic Face and Gesture Recognition (FG 2023), January 5-8, 2023, Hawaii, USA. Find the video presentation here.

Xiangnan Yin, Di Huang, Liming Chen, “Non-Deterministic Face Mask Removal Based on 3D Priors”, 2022 IEEE International Conference on Image Processing (ICIP), Bordeaux, France, 16-19 October 2022. Find the video presentation here.