suzhigangssz / AVIH

Code for Visual Information Hiding Based on Obfuscating Adversarial Perturbations

some questions about 'Evaluation metrics' #7

Open Klara-Wingler opened 5 months ago

Klara-Wingler commented 5 months ago

"To evaluate the effectiveness of our method more realistically and inspired by the evaluation method in MegaFace [21], we modified the evaluation method of LFW. We randomly selected 12 persons from the LFW dataset as the probe set. Each contains more than 12 facial images, comprising 355 images. The other 12878 images, we use as the gallery set. In the testing phase, we take one face image of a person in the probe set and put it into the gallery set. Then use the remaining images of this person as the test set. Next, we use the above-divided dataset to test the accuracy of the face recognition model. In this way, we put each person’s image in the probe set into the gallery set in turn to measure the average accuracy. This metric can well demonstrate the impact of our method on face recognition models in practical applications"

Hello author, I'd like to know how exactly this works. I found that only 117 persons in LFW have more than 12 facial images, and given that the 12 selected persons are supposed to total 355 images, the pool of eligible persons for the probe set seems even smaller. Is that reasonable, or did I misread? Hope for your reply, thank you very much!
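For reference, here is a minimal sketch (not from this repository) of how one could reproduce the count mentioned above, assuming LFW has been extracted to a local directory with one sub-folder per identity; the path is hypothetical.

```python
# Count LFW identities that have more than 12 images (sketch, assumed layout).
import os

lfw_root = "./lfw"  # hypothetical path to the extracted LFW dataset
counts = {
    person: len(os.listdir(os.path.join(lfw_root, person)))
    for person in os.listdir(lfw_root)
    if os.path.isdir(os.path.join(lfw_root, person))
}

eligible = [p for p, n in counts.items() if n > 12]
print(f"identities with more than 12 images: {len(eligible)}")
print(f"total images in LFW: {sum(counts.values())}")
```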

suzhigangssz commented 5 months ago

"To evaluate the effectiveness of our method more realistically and inspired by the evaluation method in MegaFace [21], we modified the evaluation method of LFW. We randomly selected 12 persons from the LFW dataset as the probe set. Each contains more than 12 facial images, comprising 355 images. The other 12878 images, we use as the gallery set. In the testing phase, we take one face image of a person in the probe set and put it into the gallery set. Then use the remaining images of this person as the test set. Next, we use the above-divided dataset to test the accuracy of the face recognition model. In this way, we put each person’s image in the probe set into the gallery set in turn to measure the average accuracy. This metric can well demonstrate the impact of our method on face recognition models in practical applications"

Hello, author, I'd like to know how exactly this works, I found that the number of persons who have more than 12 facial images is only 117, and think about 355 images, there are less optional persons for probe set. Is that reasonable? or I misread? hope for your reply, thank you very much!

"To evaluate the effectiveness of our method more realistically and inspired by the evaluation method in MegaFace [21], we modified the evaluation method of LFW. We randomly selected 12 persons from the LFW dataset as the probe set. Each contains more than 12 facial images, comprising 355 images. The other 12878 images, we use as the gallery set. In the testing phase, we take one face image of a person in the probe set and put it into the gallery set. Then use the remaining images of this person as the test set. Next, we use the above-divided dataset to test the accuracy of the face recognition model. In this way, we put each person’s image in the probe set into the gallery set in turn to measure the average accuracy. This metric can well demonstrate the impact of our method on face recognition models in practical applications"

Hello, author, I'd like to know how exactly this works, I found that the number of persons who have more than 12 facial images is only 117, and think about 355 images, there are less optional persons for probe set. Is that reasonable? or I misread? hope for your reply, thank you very much!

The paper means selecting 12 persons, who together contain a total of 355 images, not selecting 355 persons.
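To make the protocol concrete, below is a minimal sketch of the evaluation loop as described in the paper, not the authors' actual code: for each probe person, each image is enrolled into the gallery in turn, the person's remaining images are used as queries, and rank-1 accuracy is averaged. The function names, the `extract_feature` callable, and the data layout are assumptions.

```python
# Sketch of the modified LFW probe/gallery evaluation (assumed helper names).
import numpy as np

def rank1_accuracy(probe_sets, gallery_feats, gallery_ids, extract_feature):
    """probe_sets: {person_id: [image, ...]} for the 12 probe persons.
    gallery_feats: (N, d) features of the 12878 gallery images.
    gallery_ids: list of N identity labels for the gallery features."""
    accuracies = []
    for person, images in probe_sets.items():
        for i, enrolled in enumerate(images):
            # Put one image of this person into the gallery ...
            g_feats = np.vstack([gallery_feats, extract_feature(enrolled)])
            g_ids = list(gallery_ids) + [person]
            # ... and query with the remaining images of the same person.
            queries = [img for j, img in enumerate(images) if j != i]
            correct = 0
            for q in queries:
                f = extract_feature(q)
                sims = g_feats @ f / (
                    np.linalg.norm(g_feats, axis=1) * np.linalg.norm(f) + 1e-12
                )
                correct += int(g_ids[int(np.argmax(sims))] == person)
            accuracies.append(correct / len(queries))
    return float(np.mean(accuracies))
```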

Klara-Wingler commented 5 months ago

Hello author, I'll just ask in Chinese directly. I used the pretrained ArcFace model you provided to extract features from the original images and then computed the cosine similarity between the features, but why is the cosine similarity between different faces also around 0.99? Below are the cosine similarities between Bill_Gates/0 and other images. I also tested on the whole LFW set and the accuracy is very low.

Abdullah_Gul/0 [0.9992881]
Abdullah_Gul/1 [0.9993425]
Bill_Gates/0 [1.]
Bill_Gates/1 [0.99956083]
Howard_Dean/0 [0.99933255]
Howard_Dean/1 [0.9992428]

I also tried Euclidean distance and it could not distinguish the faces either. I don't know where the problem is and hope you can help. I'm a beginner, sorry to bother you.

suzhigangssz commented 5 months ago

If you are using the ArcFace referenced in the code, note that its input range is [0, 255].
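A minimal sketch of that point, with assumed loader/model names rather than the repository's exact API: keep the pixel values in [0, 255] before they go into this ArcFace, instead of the usual scaling to [0, 1] or [-1, 1].

```python
# Feed ArcFace raw [0, 255] pixels, then compare features with cosine similarity.
import numpy as np
import torch
from PIL import Image

def load_face(path, size=112):
    img = Image.open(path).convert("RGB").resize((size, size))
    x = np.asarray(img, dtype=np.float32)           # values stay in [0, 255]
    # Do NOT divide by 255 here; this ArcFace expects the raw range.
    return torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)

def cosine(a, b):
    a, b = a.flatten(), b.flatten()
    return float(torch.dot(a, b) / (a.norm() * b.norm() + 1e-12))

# arcface = ...  # the pretrained model provided with the repository
# with torch.no_grad():
#     f1 = arcface(load_face("lfw/Bill_Gates/Bill_Gates_0001.jpg"))   # hypothetical paths
#     f2 = arcface(load_face("lfw/Abdullah_Gul/Abdullah_Gul_0001.jpg"))
# print(cosine(f1, f2))  # with the correct input range, different identities
#                        # should no longer all score around 0.99
```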