Closed Deccan12 closed 3 years ago
@Deccan12 I assume you are overriding some of the variables somewhere. For example, the following code works as expected (i.e. the landmarks and their shapes are different):
```python
import face_alignment

fa2D = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu', flip_input=True)
fa3D = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True)

preds = fa2D.get_landmarks('face1.jpg')
print(preds)
preds = fa3D.get_landmarks('face1.jpg')
print(preds)
```
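To make the expected shape difference concrete, a small helper can classify a prediction array by its shape. This is a hypothetical check written for illustration, not part of the face_alignment API; `preds[0]` would be the landmark array for the first detected face:

```python
import numpy as np

def landmark_kind(pred):
    """Classify a landmark array by its shape.
    2D models return (68, 2); 3D (and 2.5D) models return (68, 3)."""
    if pred.shape == (68, 2):
        return "2D"
    if pred.shape == (68, 3):
        return "3D"
    return "unexpected"

# e.g. landmark_kind(preds[0]) after preds = fa2D.get_landmarks('face1.jpg')
```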
It does generate different output. Does that have anything to do with the flip_input argument?
The ._2D model should generate 68x2 landmarks, and ._3D 68x3, with generally different locations. The ._2.5D predictions should match the 3D ones. The flip argument simply runs the network twice, once on the normal image and once on its horizontally flipped version, and then averages the predictions.
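The flip-and-average step can be sketched in plain NumPy. This is a simplified illustration under my own assumptions, not the library's actual code: the real implementation also swaps symmetric landmark indices (left eye ↔ right eye, etc.) after un-mirroring, which is omitted here, and `predict` is a hypothetical stand-in for the network forward pass:

```python
import numpy as np

def flip_average(predict, image):
    """Run `predict` on the image and on its horizontal mirror,
    map the mirrored x-coordinates back into the original frame,
    and average the two landmark sets."""
    width = image.shape[1]
    pts = predict(image)                # (68, 2) landmark array
    mirrored = predict(image[:, ::-1])  # predict on the flipped image
    mirrored = mirrored.copy()
    mirrored[:, 0] = width - 1 - mirrored[:, 0]  # undo the mirror on x
    return (pts + mirrored) / 2
```

If the network were perfectly symmetric the two passes would agree exactly; in practice they differ slightly, and averaging smooths out that noise.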
I ran the code on hundreds of images with the following two detectors:
```python
fa2D = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu', flip_input=True)
fa3D = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device='cpu', flip_input=True)
```
But for both of these configurations I got exactly the same output, of dimensions (68, 3). Is that normal, or am I missing something? Shouldn't the 2D and 3D landmarks differ at least in dimensionality?