Closed: a26z15 closed this issue 1 year ago
Hi @a26z15,
You can access the 112x112 version of LFW in this data zoo. Then you need to change the corresponding data path:
https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/data/generate_occ_lfw.py#L14-L19
Running data/generate_occ_lfw.py should then generate the Occ-LFW-2.0 dataset.
Thanks for your reply! The link you mentioned contains the 112x112 version of LFW, but it's a .bin file. What should I do to get a 112x112 version like the dataset you provided (data/datasets/lfw-occ/lfw-112X96)? Also, should I change these coordinates to get the 112x112 version?
Thanks for your help!
Hi @a26z15,
Yes, that dataset zoo only provides the .bin format, but you can use their code to decode it: https://github.com/ZhaoJ9014/face.evoLVe/blob/722ecfd769006c9c9de1cf81203807e02ddac7e5/util/utils.py#L55-L69 The obtained carray should be the image array (refer to their usage), which you can then save as images.
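For reference, a minimal sketch of unpacking such a .bin file. This assumes the common insightface-style layout (a pickled (bins, issame_list) pair, where each entry of bins is one encoded image); the function and file names here are illustrative, not from the repo:

```python
import os
import pickle
import tempfile

def load_bin_pairs(path):
    """Load an insightface-style .bin: a pickled (bins, issame_list) pair.

    Each entry of `bins` holds one encoded image (e.g. JPEG bytes) that can
    then be decoded (e.g. with cv2.imdecode) and saved as an image file.
    """
    with open(path, "rb") as f:
        bins, issame_list = pickle.load(f, encoding="bytes")
    return bins, issame_list

# Demo with a tiny synthetic file; a real lfw.bin holds thousands of images.
tmp_path = os.path.join(tempfile.mkdtemp(), "toy_lfw.bin")
fake_bins = [b"\xff\xd8 fake jpeg bytes %d" % i for i in range(4)]
fake_issame = [True, False]
with open(tmp_path, "wb") as f:
    pickle.dump((fake_bins, fake_issame), f)

bins, issame = load_bin_pairs(tmp_path)
print(len(bins), len(issame))  # 4 2
```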
You may also find face recognition datasets in the insightface project and use the 128x128 version of LFW in this repo.
As for the specific coordinates in data/generate_occ_lfw.py, you can just ignore them, because the Occ-LFW-2.0 dataset is generated by blocking a random area; you can refer to this function
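The idea of blocking a random area can be sketched as follows. This is only an illustration of the random placement: it fills the block with a constant value, whereas the actual script pastes occluder images, and the block size here is made up:

```python
import numpy as np

def occlude_random_block(img, occ_h, occ_w, fill=0, rng=None):
    """Block out a randomly placed occ_h x occ_w region of an HxWxC image.

    A constant fill is used purely for illustration; the real generation
    script composites occluder textures instead.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - occ_h + 1))    # random top-left corner
    left = int(rng.integers(0, w - occ_w + 1))
    out = img.copy()
    out[top:top + occ_h, left:left + occ_w] = fill
    return out

img = np.full((112, 112, 3), 255, dtype=np.uint8)  # dummy white "face"
occ = occlude_random_block(img, 40, 40)
print(occ.shape, int((occ == 0).sum()))  # (112, 112, 3) 4800
```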
Thanks for your reply! I have successfully generated the 112x112 version of LFW and the occluded version. I also want to know how I can access the 112x112 version of ar_face_data. Thanks again!
Actually, I do not know where to obtain the 112x112 version of ar_face_data; you may check the original paper or contact the authors. The 112x96 version used here was obtained directly from my group when I was an intern at Tencent.
OK! Thanks a lot!
Hi @haibo-qiu, sorry to bother you again. I have some questions from studying the code.
First, in this code: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/models/fpn.py#LL353C5-L376C38 , my understanding is: 'fc_mask' represents the clean features after masking, 'mask' represents the mask of an image obtained from the MD, 'vec' is used to supervise the feature mask learning, and 'fc' is the original feature extracted from the backbone network. Am I right?
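To make sure I'm reading it right, here is a minimal sketch of that relationship (the shapes and the pattern count are illustrative guesses, not taken from fpn.py):

```python
import numpy as np

rng = np.random.default_rng(0)
fc = rng.standard_normal((1, 512))   # 'fc': original feature from the backbone
mask = rng.uniform(size=fc.shape)    # 'mask': soft mask predicted by the MD
fc_mask = fc * mask                  # 'fc_mask': masked (cleaned) feature
vec = rng.integers(0, 8, size=(1,))  # 'vec': occlusion-pattern target used
                                     # to supervise the mask learning
print(fc_mask.shape)
```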
Second, in this code: https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#LL102C8-L103C76 and https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#LL115C9-L116C77 , there are two calls to 'extractDeepFeature'. Why?
Thanks for your help again!
Hi @a26z15,
Thanks for answering! About the second question: I mean that after
f2, f2_mask, mask2 = extractDeepFeature(img2, model, is_gray, None)
f1, f1_mask, mask1 = extractDeepFeature(img1, model, is_gray, None)
there are another two 'extractDeepFeature' calls that pass in the mask:
f2, f2_mask, mask2 = extractDeepFeature(img2, model, is_gray, mask2)
f1, f1_mask, mask1 = extractDeepFeature(img1, model, is_gray, mask1)
why?
Hi @a26z15,
This is a legacy issue. As you can see from the commented-out code, I conducted ablation studies on the mask format (e.g., soft vs. binary versions), which are reported in our paper. Therefore, the second extractDeepFeature is redundant in the current setting; you may just ignore it or comment it out.
https://github.com/haibo-qiu/FROM/blob/416c13c0133a33dbb0dc4aa58bcdb5a76e4f20a3/lib/core/lfw_eval.py#L108-L116
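To illustrate the two passes with a toy stub (this is not the real extractDeepFeature or model; names and shapes are made up): the first call lets the model predict its own mask, and the second call re-extracts features with that mask fixed, which in the current setting just reproduces the first pass:

```python
def extract_deep_feature(img, model, mask=None):
    """Toy stand-in for extractDeepFeature: extract a feature and either
    predict a mask (mask=None) or reuse a previously predicted one."""
    feat = model["backbone"](img)
    if mask is None:
        mask = model["mask_decoder"](feat)
    masked_feat = [f * m for f, m in zip(feat, mask)]
    return masked_feat, mask

# Stub "model": a scaling backbone and a hard-threshold mask decoder.
model = {"backbone": lambda x: [v * 2.0 for v in x],
         "mask_decoder": lambda f: [1.0 if v > 0 else 0.0 for v in f]}
img = [0.5, -0.3, 1.2]

f1_mask, m1 = extract_deep_feature(img, model)      # pass 1: predict mask
f2_mask, m2 = extract_deep_feature(img, model, m1)  # pass 2: reuse that mask
print(f1_mask == f2_mask)  # True: the second pass adds nothing here
```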
Thanks for sharing the code! I'd like to use the test datasets (lfws.zip and ar_face_data.tar.gz) you provided to test my model. But their image size is 112x96, while my model takes 112x112 as the input size. How can I access the datasets with a 112x112 input size and generate the Occ-LFW-2.0 dataset? Thanks for answering!