ageitgey / face_recognition

The world's simplest facial recognition api for Python and the command line
MIT License

How can I give Aligned face to encoding? #1075

Open pasa13142 opened 4 years ago

pasa13142 commented 4 years ago

Description

I detect face locations and then want to pass the result to face_encodings, but the problem is that I no longer have coordinates, only the whole aligned face picture. Your encoding function looks for faces again when known_face_locations is None. How can I give it whole aligned faces to get encodings?

pasa13142 commented 4 years ago

@ageitgey First of all, thanks for this amazing project. I have a fully aligned face image and I want to encode it with face_recognition. How should I do that? face_encodings works well when given the face locations from face detection, or with no preprocessing at all, but when I give it the aligned face crop from face detection (no face_locations, just the aligned face image) it does not work well. :/

pasa13142 commented 4 years ago

Is there any way I could give the whole image to face_encodings? It looks for a face again, and even when I give it an aligned face picture it returns no face. Please help.

ageitgey commented 4 years ago

Use the known_face_locations parameter of https://face-recognition.readthedocs.io/en/latest/face_recognition.html#face_recognition.api.face_encodings and pass in the whole image size as the location of the face. That will force it to use the entire image directly.

Example: You have a pre-cropped 100x100 image of a face.

encodings = face_encodings(my_cropped_image, known_face_locations=[[0, 100, 100, 0]])

Something like that should work.
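For instance, a minimal self-contained sketch along those lines (the file name aligned_face.jpg is just a placeholder for your pre-cropped, aligned face image):

    import face_recognition

    # Load the pre-cropped, aligned face image (placeholder path).
    my_cropped_image = face_recognition.load_image_file("aligned_face.jpg")
    height, width = my_cropped_image.shape[:2]

    # A location is (top, right, bottom, left) in pixels, so this box covers
    # the whole crop and skips re-detection inside face_encodings.
    encodings = face_recognition.face_encodings(
        my_cropped_image,
        known_face_locations=[(0, width, height, 0)]
    )
    print(len(encodings), encodings[0].shape)  # 1 encoding, a 128-d vector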

pasa13142 commented 4 years ago

Actually I tried that before, but the result is never consistent. If I do it with 0s at the edges, the face picture gets shifted a bit and the whole aligned image is not used.
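If the concern is that the landmark step inside face_encodings is what re-positions the chip, one way to check (assuming a library version where face_landmarks accepts a model parameter) is to look at the landmarks found for the full-crop box, since face_encodings aligns on the same 5-point predictor by default:

    import face_recognition

    crop = face_recognition.load_image_file("aligned_face.jpg")  # placeholder path
    h, w = crop.shape[:2]

    # Landmarks from the 5-point predictor, computed on the whole-crop box;
    # these points are what the encoder re-aligns the face chip from.
    landmarks = face_recognition.face_landmarks(
        crop, face_locations=[(0, w, h, 0)], model="small"
    )
    print(landmarks)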

sevenold commented 4 years ago

@ageitgey I use RetinaFace for face detection and then face_recognition.face_encodings to get face features, but when I test with the same face image, the similarity between the two methods is very low.

    import numpy as np
    import face_recognition

    # bbox: scale the (presumably normalized) RetinaFace box `ann` to pixel
    # coordinates (x1, y1, x2, y2) -- fragment from the detect_face helper.
    x1, y1, x2, y2 = int(ann[0] * img_width), int(ann[1] * img_height), \
                     int(ann[2] * img_width), int(ann[3] * img_height)

    img = "./asset/2.jpg"

    # Method 1: RetinaFace crop, with its box passed as known_face_locations.
    face_img, locations = detect_face(img)
    print(face_img[0].shape, locations)
    feature1 = face_recognition.face_encodings(face_img[0], known_face_locations=locations)[0]

    # Method 2: let face_recognition find and encode the face on its own.
    face = face_recognition.load_image_file(img)
    feature2 = face_recognition.face_encodings(face)[0]

    # Cosine similarity between the two 128-d encodings.
    dist = np.dot(feature1, feature2) / (np.linalg.norm(feature1) * np.linalg.norm(feature2))
    print(dist)

    res = face_recognition.compare_faces([feature2], feature1)
    print(res)

    res = face_recognition.face_distance([feature2], feature1)
    print(res)

(245, 191, 3) [[309, 122, 500, 367]]
0.8277786241274802
[False]
[0.80969032]

Thanks.
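One possible cause worth ruling out here (an observation about the snippet above, not a confirmed fix): face_img[0] is the RetinaFace crop, but locations appears to hold (x1, y1, x2, y2) values in original-image coordinates, while face_encodings expects (top, right, bottom, left) boxes in the coordinates of the image it is given. A sketch of keeping the two consistent, using the box values printed above:

    import face_recognition

    img_path = "./asset/2.jpg"
    full = face_recognition.load_image_file(img_path)

    # RetinaFace box from the output above, reordered from (x1, y1, x2, y2)
    # to face_recognition's (top, right, bottom, left).
    x1, y1, x2, y2 = 309, 122, 500, 367
    box = (y1, x2, y2, x1)

    # Option A: original image + detector box in original-image coordinates.
    feat_a = face_recognition.face_encodings(full, known_face_locations=[box])[0]

    # Option B: the crop + a box spanning the whole crop.
    crop = full[y1:y2, x1:x2]
    h, w = crop.shape[:2]
    feat_b = face_recognition.face_encodings(crop, known_face_locations=[(0, w, h, 0)])[0]

    print(face_recognition.face_distance([feat_a], feat_b))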