Closed norvikgustav closed 6 years ago
You are mixing up two different things.
The "training" that the article is talking about there is how you can train a new face encoding model to generate face encodings from face images. That training has already been done for you when you use this library: when you call the face_encodings() function, it uses a pre-trained model to generate those face encodings.
Then in the code example you gave, you are looking at how the face encodings extracted from a few sample images are used to check if the faces in those images match or not. That's a question about usage of the model, not about how the original model was trained.
Hope that helps clear it up.
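To make the usage side concrete, here is a minimal sketch of what face matching boils down to at inference time. This is my own illustration with made-up vectors, not the library's code: the real `compare_faces()` applies a Euclidean-distance threshold (default tolerance 0.6) to the 128-dimensional encodings that `face_encodings()` returns.

```python
import numpy as np

# Stand-in 128-dimensional encodings; in practice these come from
# face_recognition.face_encodings(), which outputs 128-d vectors.
rng = np.random.default_rng(0)
known = rng.normal(size=128)
same_person = known + rng.normal(scale=0.01, size=128)   # nearly identical encoding
other_person = rng.normal(size=128)                      # unrelated encoding

def is_match(known_encoding, candidate_encoding, tolerance=0.6):
    # compare_faces() reduces to a distance threshold like this;
    # 0.6 is the library's default tolerance.
    return np.linalg.norm(known_encoding - candidate_encoding) <= tolerance

print(is_match(known, same_person))    # small distance -> match
print(is_match(known, other_person))   # large distance -> no match
```

Note that no triplets appear here: the three-image scheme only matters while the encoding model itself is being trained, which is already finished by the time you use the library.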
Description
Hi! First of all; thank you for your awesome work. Fantastic!
According to "Machine Learning is Fun: Part 4", the training process is described as follows. The training process works by looking at 3 face images at a time:

1. Load a training face image of a known person
2. Load another picture of the same known person
3. Load a picture of a totally different person

https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78
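The three-image step quoted above is a triplet-style training objective. A minimal sketch of such a loss (my own illustration under that assumption, not the library's actual training code; the `margin` value is an assumed hyperparameter):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # One training step looks at three encodings: an anchor face,
    # a positive (same person), and a negative (different person).
    d_pos = np.sum((anchor - positive) ** 2)   # distance to the same person
    d_neg = np.sum((anchor - negative) ** 2)   # distance to the different person
    # The network is penalized unless the same-person pair is closer
    # than the different-person pair by at least `margin`.
    return max(d_pos - d_neg + margin, 0.0)

# Toy 128-dimensional "encodings" (the real model also outputs 128-d vectors)
anchor = np.zeros(128)
positive = np.zeros(128)    # identical to the anchor: same person
negative = np.ones(128)     # far from the anchor: different person
print(triplet_loss(anchor, positive, negative))  # 0.0 - already well separated
```

Crucially, this three-image logic only appears during model training, which explains why the example scripts never load triplets.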
In the code, for example in facerec_from_webcam_faster.py (lines 16-22), only TWO training images are loaded.
This looks like a contradiction to me, since the theory described in the article doesn't seem to fit the code. Could you explain this, please?
What I Did
Lines 16-22:

```python
# Load a sample picture and learn how to recognize it.
obama_image = face_recognition.load_image_file("obama.jpg")
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]

# Load a second sample picture and learn how to recognize it.
biden_image = face_recognition.load_image_file("biden.jpg")
biden_face_encoding = face_recognition.face_encodings(biden_image)[0]
```