I tested by appending the face encoding of one person to the end of the known_face_encodings list, and the matching result from compare_faces was poor.
With face encodings of 100 people from the dataset I get the right results. However, with face encodings of 1000 people, I get wrong results: the face found in the test image is labelled with another person's name. I ran detection with number_of_times_to_upsample=2 because the images are low resolution.
However, when I insert the test person's face encoding as the first element of the known_face_encodings list, the comparison succeeds and the face is labelled with the correct name.
What is the difference between comparing a face against the first element versus the last element of the known encodings list?
Why does compare_faces not match correctly with a large dataset?
After face detection in an image, is it necessary to scale all detected faces to the same size to get correct matching? Is the difference in pixel size a problem?
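For context, my understanding is that compare_faces just thresholds the Euclidean distance between 128-d encodings, so a list of 1000 people can contain several entries under the tolerance, and taking the first True match then depends on list order. Below is a minimal pure-Python sketch of the distance computation with synthetic 128-d vectors (the real library produces dlib encodings; face_distance, best_match, and the tolerance default here are my assumptions, not the library's exact code):

```python
import math
import random

random.seed(0)

def face_distance(known_encodings, test_encoding):
    """Euclidean distance between the test encoding and each known
    encoding (mirrors what I believe face_recognition.face_distance does)."""
    return [math.dist(enc, test_encoding) for enc in known_encodings]

def best_match(known_encodings, test_encoding, tolerance=0.6):
    """Index of the single closest known encoding, or None if even the
    closest one is above the tolerance. Unlike compare_faces, which
    returns one boolean per entry, this is order-independent."""
    distances = face_distance(known_encodings, test_encoding)
    best = min(range(len(distances)), key=distances.__getitem__)
    return best if distances[best] <= tolerance else None

# Synthetic 128-d encodings standing in for 1000 known people.
known = [[random.uniform(-1, 1) for _ in range(128)] for _ in range(1000)]
# A test encoding very close to the LAST entry in the list.
test = [v + random.uniform(-0.01, 0.01) for v in known[999]]

print(best_match(known, test))  # finds entry 999 regardless of position
```

With this argmin-style lookup the position of the matching encoding in the list does not matter, which is why I would expect it to behave the same for the first and last element.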