Open jimo17 opened 2 years ago
Hi,
The dataset is randomly split into 80% for training and 20% for testing, within each identity.
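The per-identity 80/20 split described above could be sketched like this (the helper name and the (path, identity) data layout are illustrative, not taken from the repo):

```python
import random
from collections import defaultdict

def split_per_identity(samples, train_ratio=0.8, seed=0):
    """Randomly split (path, identity) pairs 80/20 within each identity."""
    by_id = defaultdict(list)
    for path, identity in samples:
        by_id[identity].append(path)

    rng = random.Random(seed)
    train, test = [], []
    for identity, paths in by_id.items():
        rng.shuffle(paths)  # shuffle only within this identity
        cut = int(len(paths) * train_ratio)
        train += [(p, identity) for p in paths[:cut]]
        test += [(p, identity) for p in paths[cut:]]
    return train, test

# toy example: 2 identities with 5 images each
samples = [(f"id{i}/img{j}.jpg", f"id{i}") for i in range(2) for j in range(5)]
train, test = split_per_identity(samples)
print(len(train), len(test))  # 8 2
```

Splitting within each identity (rather than over the whole image list) guarantees every identity appears in both the training and the test set.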
Thanks a lot for your answer. Could you please provide the WebFace dataset used for training Inception-ResNet in the paper?
Have a look at this thread. https://github.com/happynear/AMSoftmax/issues/18
Thanks a lot for your answer. Was the image size of the WebFace dataset you used to train Inception-ResNet 112*112? Looking at the exp_setting.sh file, the dataset file name is written as casia-112x112-protected-train.
Yes, we use 112*112 for both training and testing.
Thanks a lot for your answer. The WebFace dataset I downloaded is 250*250, so I would like to ask: did you preprocess the dataset, or did I download the wrong one?
Apologies, I've lost track of where I downloaded the dataset from. I believe I used a preprocessed version; here is another pre-processed 112*112 copy: https://github.com/yule-li/CosFace/blob/master/README.md
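For reference, preprocessed CASIA-WebFace releases are typically produced with face detection and alignment (e.g. MTCNN), not a plain resize. As a rough stand-in for turning raw 250*250 images into 112*112 inputs, a center crop plus resize could look like this (assuming Pillow; not the repo's actual pipeline):

```python
from PIL import Image

def center_crop_resize(img, size=112):
    """Center-crop to a square, then resize to size x size.
    A proper pipeline would detect and align the face first; this is
    only a rough stand-in for producing 112x112 inputs."""
    w, h = img.size
    s = min(w, h)
    left, top = (w - s) // 2, (h - s) // 2
    img = img.crop((left, top, left + s, top + s))
    return img.resize((size, size), Image.BILINEAR)

img = Image.new("RGB", (250, 250))
out = center_crop_resize(img)
print(out.size)  # (112, 112)
```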
Thanks a lot for your answer. I have one more question. Does the script 'Unlearnable-Examples-main/scripts/face/min-min-noise/train_clean.sh' use clean face images to train the Inception-ResNet model?
Yes, train_clean.sh is the baseline, trained on the original clean dataset.
Thanks a lot for your answer. If I want to train the Inception-ResNet model on the clean face dataset following your settings, do I need to uncomment the following code?
https://github.com/HanxunH/Unlearnable-Examples/blob/main/main.py#L102 (covers lines L102-L116)
This part evaluates face verification on LFW; uncomment it only if you need those results.
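For context, LFW-style verification of the kind that code block performs usually compares a pair's face embeddings by cosine similarity against a threshold. A minimal sketch (the function name and threshold value are illustrative, not from the repo):

```python
import numpy as np

def verify(emb1, emb2, threshold=0.35):
    """Decide whether two face embeddings belong to the same person
    by thresholding their cosine similarity."""
    e1 = emb1 / np.linalg.norm(emb1)
    e2 = emb2 / np.linalg.norm(emb2)
    return float(e1 @ e2) >= threshold

same = verify(np.array([1.0, 0.0]), np.array([0.9, 0.1]))    # near-identical
diff = verify(np.array([1.0, 0.0]), np.array([0.0, 1.0]))    # orthogonal
print(same, diff)  # True False
```

In practice the threshold is chosen per fold on the LFW protocol rather than fixed in advance.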
Hi, thanks for open-sourcing this code. I am very interested in your paper and have some questions. When you trained the Inception-ResNet network on the WebFace dataset, how did you divide the training and test sets?