1adrianb / face-alignment

:fire: 2D and 3D Face alignment library built using PyTorch
https://www.adrianbulat.com
BSD 3-Clause "New" or "Revised" License
6.89k stars 1.33k forks

Same output for 2d and 3d detectors? #237

Closed Deccan12 closed 3 years ago

Deccan12 commented 3 years ago

I ran the code on hundreds of images with the following two detectors:

fa2D = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu', flip_input=True)

fa3D = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True)

But both of these configurations gave me exactly the same output, with dimensions (68, 3). Is that normal, or am I missing something? Shouldn't the 2D and 3D landmarks differ, at least in their dimensions?

1adrianb commented 3 years ago

@Deccan12 I assume you are overriding some of the variables somewhere. For example, the following code works as expected (i.e., the landmarks and their shapes are different):

import face_alignment

fa2D = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu', flip_input=True)
fa3D = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True)

preds = fa2D.get_landmarks('face1.jpg')
print(preds)
preds = fa3D.get_landmarks('face1.jpg')
print(preds)
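A quick way to sanity-check which detector actually produced a result is to compare the shapes of the returned arrays. A minimal sketch, using placeholder NumPy arrays in place of real model output (per the maintainer's description, `get_landmarks` returns one (68, k) array per detected face, with k = 2 for `_2D` and k = 3 for `_3D`):

```python
import numpy as np

# Placeholder predictions standing in for fa2D/fa3D.get_landmarks() output:
# a list with one array per detected face.
preds_2d = [np.zeros((68, 2))]  # LandmarksType._2D -> (x, y) per landmark
preds_3d = [np.zeros((68, 3))]  # LandmarksType._3D -> (x, y, z) per landmark

for name, preds in [("2D", preds_2d), ("3D", preds_3d)]:
    for face in preds:
        print(name, "face landmarks shape:", face.shape)
```

If both detectors in your script report a last dimension of 3, one of the two `FaceAlignment` objects is likely being overwritten before prediction.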

Deccan12 commented 3 years ago

It does generate different output now. Does that have anything to do with the flip_input argument?

1adrianb commented 3 years ago

The ._2D type should generate 68x2 landmarks and ._3D 68x3, with generally different locations. The ._2.5D predictions should match the 3D ones. The flip_input argument simply runs the network twice, once on the normal image and once on its flipped version, and then averages the two predictions.
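The flip-and-average idea can be sketched in a few lines. This is an illustration only, not the library's actual code: `average_flipped` is a hypothetical helper, and the real implementation also swaps symmetric landmark indices (left eye with right eye, etc.) when un-flipping, which is omitted here for brevity:

```python
import numpy as np

def average_flipped(landmarks, landmarks_flipped, image_width):
    """Average a normal prediction with one made on a horizontally
    flipped image.

    landmarks:         (68, 2) array predicted on the original image
    landmarks_flipped: (68, 2) array predicted on the mirrored image
    image_width:       width in pixels, used to mirror x back

    NOTE: a real implementation would also reorder the symmetric
    landmark indices after mirroring; this sketch skips that step.
    """
    mirrored = landmarks_flipped.copy()
    # Map x coordinates from the flipped frame back to the original frame.
    mirrored[:, 0] = image_width - 1 - mirrored[:, 0]
    return (landmarks + mirrored) / 2.0
```

Averaging the two passes is a cheap form of test-time augmentation: it roughly doubles inference time but tends to smooth out small asymmetries in the network's predictions.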