haibo-qiu / FROM

[TPAMI 2021] End2End Occluded Face Recognition by Masking Corrupted Features
https://arxiv.org/abs/2108.09468

preprocessing images #6

Closed Carinazhao22 closed 1 year ago

Carinazhao22 commented 1 year ago

Hello,

I have preprocessed images to 112x112 using MTCNN. How can I convert them to the 112x96 size used in the paper? Is this the correct way to do it? Thanks!

haibo-qiu commented 1 year ago

Hi @Carinazhao22,

The default way is to use the CASIA-112x96-LMDB.lmdb file we provide. The 112x96 images we used originally come from CosFace; you may refer to its preprocessing steps.

Another possible solution is to use this file for cropping. The usage is:

python -u face_align_crop.py -j 8 -source_root PATH_TO_DATA -dest_root PATH_TO_SAVE

I used this file to crop synthetic face images from 256x256 to 112x96, and it can also take 112x112 as input. But I am not sure whether it works exactly the same as the CosFace preprocessing.

Generally, if the same cropping strategy is applied to both the training and testing data, the network should work.
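For anyone curious what such a cropping script does internally, the usual approach (in CosFace-style alignment) is to estimate a similarity transform from the 5 MTCNN landmarks onto a fixed 112x96 template and then warp the image with it. Below is a minimal NumPy sketch of that transform estimation (Umeyama's least-squares method); the template coordinates are the ones commonly used in SphereFace/CosFace-style code, but verify them against `face_align_crop.py` in this repo before relying on them.

```python
import numpy as np

# Canonical 5-point landmark template for a 96(w) x 112(h) crop
# (assumed values, commonly used in SphereFace/CosFace-style alignment).
REF_POINTS_112x96 = np.array([
    [30.2946, 51.6963],   # left eye
    [65.5318, 51.5014],   # right eye
    [48.0252, 71.7366],   # nose tip
    [33.5493, 92.3655],   # left mouth corner
    [62.7299, 92.2041],   # right mouth corner
], dtype=np.float64)

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src points onto dst points, via Umeyama's method.
    Returns a 2x3 affine matrix M such that dst ~= M[:, :2] @ src + M[:, 2]."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt                            # optimal rotation
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean       # optimal translation
    return np.hstack([scale * R, t[:, None]])

# With OpenCV one would then warp the detected landmarks onto the template:
#   M = estimate_similarity(detected_5_landmarks, REF_POINTS_112x96)
#   aligned = cv2.warpAffine(img, M, (96, 112))
```

Applying the same template to both training and test images is exactly the "same cropping strategy" point above: what matters is that faces land in consistent positions in the 112x96 crop.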

Carinazhao22 commented 1 year ago

Thanks. It works.

haibo-qiu commented 1 year ago

:thumbsup::thumbsup: