DanJun6737 / TransFace

[ICCV 2023] TransFace: Calibrating Transformer Training for Face Recognition from a Data-Centric Perspective

Face patch extraction #12

Closed Chethan-Babu-stack closed 1 month ago

Chethan-Babu-stack commented 1 month ago

I have images and the face landmark positions.

To test my images with TransFace, I would like to extract 112×112 face patches the same way they were produced during training (for a fair comparison).

The paper says "crop all the input images to 112×112 by RetinaFace [10, 21]" in the Training Settings section, but I can't find any information on how the face patches were produced from the original images and the face landmarks.

I see people using https://github.com/JDAI-CV/FaceX-Zoo/blob/main/face_sdk/core/image_cropper/arcface_cropper/FaceRecImageCropper.py to extract face patches. Is this how the face patches were produced during training?

DanJun6737 commented 1 month ago

Hi~ @Chethan-Babu-stack

In fact, the training sets we used in TransFace (MS1MV2 and Glint360K) are already preprocessed.

For the face image preprocessing pipeline, you can refer to ArcFace; we followed its approach.
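
For reference, here is a minimal sketch of the standard ArcFace-style 5-point similarity alignment (the same template used by insightface and by FaceX-Zoo's `FaceRecImageCropper`). The function name `align_face` and its arguments are illustrative, not part of this repo:

```python
import cv2
import numpy as np
from skimage.transform import SimilarityTransform

# Canonical destination landmarks for a 112x112 ArcFace crop:
# left eye, right eye, nose tip, left mouth corner, right mouth corner.
ARCFACE_DST = np.array(
    [[38.2946, 51.6963],
     [73.5318, 51.5014],
     [56.0252, 71.7366],
     [41.5493, 92.3655],
     [70.7299, 92.2041]], dtype=np.float32)

def align_face(image, landmarks):
    """Warp `image` so the detected 5-point landmarks match the template.

    `image` is an HxWx3 numpy array; `landmarks` is a 5x2 array of
    (x, y) pixel coordinates in the same order as ARCFACE_DST.
    """
    tform = SimilarityTransform()
    # Estimate the similarity transform mapping detected -> template points.
    tform.estimate(np.asarray(landmarks, dtype=np.float32), ARCFACE_DST)
    M = tform.params[:2]  # top two rows give the 2x3 affine matrix
    return cv2.warpAffine(image, M, (112, 112), borderValue=0.0)
```

Since you already have the landmark positions, you can skip detection and feed them directly into `align_face` to get crops comparable to the training data.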

When a preprocessed 112×112 face image is fed into the TransFace model, it is split into multiple patch embeddings; see `class PatchEmbed` in vit.py.
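
For intuition, here is a minimal sketch of ViT-style patch embedding; the patch size and embedding dimension below are placeholder values, so check `PatchEmbed` in vit.py for the exact configuration TransFace uses:

```python
import torch
import torch.nn as nn

class PatchEmbedSketch(nn.Module):
    """Splits an image into non-overlapping patches and embeds each one."""

    def __init__(self, img_size=112, patch_size=8, in_chans=3, embed_dim=512):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided conv cuts the image into patch_size x patch_size tiles
        # and projects each tile to an embed_dim-dimensional token.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                      # (B, embed_dim, H/p, W/p)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)

# With these placeholder values, a 112x112 input yields 14*14 = 196 tokens:
tokens = PatchEmbedSketch()(torch.randn(1, 3, 112, 112))
print(tokens.shape)  # torch.Size([1, 196, 512])
```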