haibo-qiu / FROM

[TPAMI 2021] End2End Occluded Face Recognition by Masking Corrupted Features
https://arxiv.org/abs/2108.09468

About the test datasets #11

Closed a26z15 closed 1 year ago

a26z15 commented 1 year ago

Thanks for sharing the code! I'd like to use the test datasets (lfws.zip and ar_face_data.tar.gz) you provided to test my model, but their image size is 112×96 while my model takes 112×112 as input. How can I get access to the datasets at 112×112, and how do I generate the Occ-LFW-2.0 dataset? Thanks for answering!

haibo-qiu commented 1 year ago

Hi @a26z15 ,

You can access the 112x112 version of LFW in this data zoo. Then you need to change the corresponding data path: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/data/generate_occ_lfw.py#L14-L19 After that, running data/generate_occ_lfw.py should generate the Occ-LFW-2.0 dataset.

a26z15 commented 1 year ago

> Hi @a26z15 ,
>
> You can access the 112x112 version of LFW in this data zoo. Then you need to change the corresponding data path:
>
> https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/data/generate_occ_lfw.py#L14-L19
>
> Now running data/generate_occ_lfw.py is supposed to generate Occ-LFW-2.0 dataset.

Thanks for your reply! The link you mentioned does contain the 112x112 version of LFW, but it's a .bin file. What should I do to get the 112x112 version as images, like the dataset you provided (data/datasets/lfw-occ/lfw-112X96)? Also, should I change these coordinates to get the 112x112 version?

[screenshot: the crop coordinates in data/generate_occ_lfw.py]

Thanks for your help!

haibo-qiu commented 1 year ago

Hi @a26z15,

Yes, that dataset zoo only provides the .bin format, but you can use their code to unpack it: https://github.com/ZhaoJ9014/face.evoLVe/blob/722ecfd769006c9c9de1cf81203807e02ddac7e5/util/utils.py#L55-L69 The obtained carray should be the image array (refer to their usage), which you can then save as images.
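For reference, here is a minimal sketch of unpacking such a .bin, assuming it follows the common insightface-style layout: a pickle of `(bins, issame_list)` where each entry in `bins` is the raw JPEG bytes of one aligned face. The function name and file naming scheme are mine; verify against the actual format in the data zoo before relying on this.

```python
import os
import pickle

def unpack_bin(bin_path, out_dir):
    """Write each JPEG entry of an insightface-style .bin to its own file.

    Assumes the .bin pickles (bins, issame_list), with `bins` a list of
    raw JPEG byte strings. Returns the issame_list of pair labels.
    """
    os.makedirs(out_dir, exist_ok=True)
    with open(bin_path, 'rb') as f:
        bins, issame_list = pickle.load(f, encoding='bytes')
    for i, raw in enumerate(bins):
        # The entries are already encoded JPEGs, so no decoding is needed
        # just to materialize them as image files on disk.
        with open(os.path.join(out_dir, f'{i:05d}.jpg'), 'wb') as out:
            out.write(bytes(raw))
    return issame_list
```

If you need the decoded pixel arrays instead (e.g. to verify they are 112x112), load the written files with any image library afterwards.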

You may also find face recognition datasets in the insightface project, or use the 128x128 version of LFW in this repo.

As for the specific areas in data/generate_occ_lfw.py, you can just ignore them, because the Occ-LFW-2.0 dataset is generated by blocking a random area, for which you can refer to this function.
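The random-area blocking can be sketched roughly as below. This is an illustrative stand-in, not the repo's code: the real generator pastes occluder images and draws sizes from configured ranges, whereas this sketch simply zeros a randomly placed rectangle.

```python
import numpy as np

def occlude_random(img, occ_h, occ_w, rng=None):
    """Zero out a randomly placed occ_h x occ_w block of an HxWxC image.

    A simplified stand-in for random-area occlusion: the real pipeline
    pastes occluder textures rather than zeros. Returns a new array.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    top = rng.integers(0, h - occ_h + 1)   # random top-left corner such
    left = rng.integers(0, w - occ_w + 1)  # that the block fits inside
    out = img.copy()
    out[top:top + occ_h, left:left + occ_w] = 0
    return out
```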

a26z15 commented 1 year ago

Thanks for your reply! I have successfully generated the 112x112 version of LFW and its occluded version. I'd also like to know how I can get access to the 112x112 version of ar_face_data. Thanks again!

haibo-qiu commented 1 year ago

Actually, I do not know where to obtain the 112x112 version of ar_face_data; you may check the original paper or contact the authors. The 112x96 version we used was obtained directly from my group when I was an intern at Tencent.

a26z15 commented 1 year ago

OK! Thanks a lot!

a26z15 commented 1 year ago

Hi @haibo-qiu, sorry to bother you again. I have some questions from studying the code.

First, in the code: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/models/fpn.py#LL353C5-L376C38 , my understanding is: fc_mask is the clean feature after masking, mask is the mask of an image obtained from the MD, vec is used to supervise the learning of the feature masks, and fc is the original feature extracted by the backbone network. Am I right?
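To put my reading in schematic form (a numpy stand-in; the shapes and everything beyond the names fc, mask, fc_mask from fpn.py are made up by me):

```python
import numpy as np

# Illustrative shapes only -- not the actual dimensions in fpn.py.
fc = np.random.rand(1, 512)    # original feature from the backbone
mask = np.random.rand(1, 512)  # feature mask predicted by the MD
fc_mask = fc * mask            # clean feature after masking
# `vec` (not shown) would then be the prediction used to supervise
# the mask learning.
```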

Second, in the code: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#LL102C8-L103C76 and https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#LL115C9-L116C77 , extractDeepFeature is called twice. Why?

Thanks for your help again!

haibo-qiu commented 1 year ago

Hi @a26z15,

  1. Your understanding is correct.
  2. The evaluation on LFW aims to predict whether two images show the same person or not: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/datasets/dataset.py#L284-L308
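Schematically, verification then compares the two extracted features, e.g. by cosine similarity against a tuned threshold. This is a generic sketch, not the repo's exact evaluation code, and the threshold value here is purely illustrative:

```python
import numpy as np

def cosine_score(f1, f2):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def is_same(f1, f2, threshold=0.3):
    # The threshold is illustrative; in practice it is selected on
    # held-out folds (LFW uses 10-fold cross-validation).
    return cosine_score(f1, f2) > threshold
```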
a26z15 commented 1 year ago

Thanks for answering! About the second question: I mean that after

        f2, f2_mask, mask2 = extractDeepFeature(img2, model, is_gray, None)
        f1, f1_mask, mask1 = extractDeepFeature(img1, model, is_gray, None)

there are another two extractDeepFeature calls that feed the masks back in:

        f2, f2_mask, mask2 = extractDeepFeature(img2, model, is_gray, mask2)
        f1, f1_mask, mask1 = extractDeepFeature(img1, model, is_gray, mask1)

Why?

haibo-qiu commented 1 year ago

Hi @a26z15,

This is a legacy issue. As you can see from the commented code, I conducted ablation studies on the mask format (e.g. soft vs. binary versions), which are reported in our paper. The second extractDeepFeature call is therefore redundant in the current setting, and you can just ignore it or comment it out: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#L108-L116