Description:

Existing facial databases cover large variations, including different subjects, poses, illumination conditions, and occlusions. However, the provided annotations have several limitations:
Figure 1: (a)-(d) Annotated images from MultiPIE, XM2VTS, AR, FRGC Ver.2 databases, and (e) examples from XM2VTS with inaccurate annotations.
The majority of existing databases provide annotations for only a relatively small subset of their images.
The accuracy of the provided annotations is in some cases poor, probably due to human fatigue.
Each database uses a different annotation scheme with a different number of landmarks.
These problems make cross-database experiments and comparisons between different methods almost infeasible. To overcome these difficulties, we propose a semi-automatic methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases.
All the annotations are provided for research purposes ONLY (NO commercial products).
Figure 2: The 68 points mark-up used for our annotations.
Download:
We employed our tool to create annotations (following the Multi-PIE 68-point mark-up; see Fig. 2) for the following databases:
300-W [part1][part2][part3][part4]
Please note that the database is simply split into four smaller parts for easier download. To create the database, unzip part1 (i.e., 300w.zip.001) using a file archiver (e.g., 7-Zip).
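The annotations themselves are plain-text landmark files. A minimal Python sketch for loading one is given below; it assumes the common .pts layout (a short header, then one "x y" pair per line inside braces) — the exact file names and layout should be verified against the files you actually download.

```python
# Minimal loader for a 68-point landmark annotation file.
# ASSUMPTION: the plain-text .pts layout (header lines, then
# "x y" coordinate pairs between "{" and "}") -- check this
# against the downloaded annotation files.

def read_pts(path):
    """Return a list of (x, y) landmark tuples from a .pts-style file."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # Keep only the coordinate lines between the braces.
    start = lines.index("{") + 1
    end = lines.index("}")
    points = []
    for ln in lines[start:end]:
        x, y = ln.split()[:2]
        points.append((float(x), float(y)))
    return points
```

A file annotated with the 68-point mark-up should yield a list of 68 coordinate pairs, which can then be stacked into an array for training or evaluation.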
References:

Please cite as:

C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces In-the-Wild Challenge: Database and results. Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild", 2016.

C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. A semi-automatic methodology for facial landmark annotation. Proceedings of IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR-W), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013), Oregon, USA, June 2013.
Contact:
Christos Sagonas - c.sagonas@imperial.ac.uk / Stefanos Zafeiriou - s.zafeiriou@imperial.ac.uk
https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/