Rassibassi / mediapipeDemos

Real-time Python demos of google mediapipe

Face Mesh pose correction #8

Open samirchar opened 2 years ago

samirchar commented 2 years ago

Hi Rasmus,

Great job with the repo, it has helped me a lot! I have a quick question and was hoping you could point me in the right direction. I'm working on a project in which I need to detect some particular facial expressions, and I have a solution, but it only works with frontal faces. What do you think is the most appropriate way of "correcting head pose"? My guess is that I can do something with the metric landmarks or the rotation vector from your head_posture.py module.

Thank you very much!!

Rassibassi commented 2 years ago

Hi,

It seems like you are looking for face patch normalization depending on the head rotation, as proposed here.

However, I think it is cleaner to use a facial expression recognition model that can handle non-frontal faces. I played around with facial expression recognition (using https://github.com/zengqunzhao/EfficientFace) some time ago and have a working demo for you, see facial_expression.py. I also updated the readme with instructions on how to download the weights for the facial expression recognition DNN. The models are in float32; if required, I could get them down to int8. Let me know and I'll have a look at the quantization.
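
For illustration, a rough sketch of how such a classifier could be run on a face crop; the label set, input size, and preprocessing below are assumptions for the sketch, not the exact code of facial_expression.py or the EfficientFace repo:

```python
# Hedged sketch: run a facial expression classifier (any torch nn.Module, e.g.
# an EfficientFace model with loaded weights) on a BGR face crop.
# The label list and 224x224 input size are assumptions, not the repo's API.
import cv2
import torch
import torch.nn.functional as F

LABELS = ["neutral", "happy", "sad", "surprise", "fear", "disgust", "anger"]  # assumed 7-class head

def classify_expression(model, face_bgr):
    # Preprocess: BGR crop -> RGB, resize, scale to [0, 1], NCHW tensor.
    face_rgb = cv2.cvtColor(cv2.resize(face_bgr, (224, 224)), cv2.COLOR_BGR2RGB)
    x = torch.from_numpy(face_rgb).float().permute(2, 0, 1).unsqueeze(0) / 255.0

    model.eval()
    with torch.no_grad():
        logits = model(x)
    probs = F.softmax(logits, dim=1)[0]
    return LABELS[int(probs.argmax())], probs.max().item()
```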

Best, Rasmus

Rassibassi commented 2 years ago

I also have the face patch normalization algorithm lying around; if you are interested in that, I can dig it out, too :)

samirchar commented 2 years ago

Hi Rasmus,

Thanks for your reply and the useful link! However, I'm not exactly working on facial expressions; I'm analyzing facial asymmetries on specific parts of the face for a medical application.

In the long run you are right, the plan is to use deep learning. But first I want to establish a baseline using only the landmarks. For this it would be ideal to get the person's 3D landmarks and rotate them about the three axes so that the landmarks are frontal.

Thank you very much!

Rassibassi commented 2 years ago

Then the result of cv2.solvePnP or the pose_transform_mat is what you are looking for. They are the same thing: they define a rotation/translation connecting the inferred face mesh points with a canonical, frontal-facing face mesh.

Be careful with the facial landmarks from the MediaPipe face mesh unit, as the algorithm relies heavily on the canonical face mesh and can overwrite user-specific facial detail.
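
For reference, a minimal sketch (not the exact code from head_posture.py) of how the rotation recovered by cv2.solvePnP could be undone to frontalize the landmarks; model_points, image_points, landmarks_3d and the camera intrinsics are assumed to come from the face mesh pipeline:

```python
# Minimal sketch: frontalize 3D landmarks using the head rotation from cv2.solvePnP.
# Assumed inputs:
#   model_points  - (N, 3) canonical face model points (frontal reference)
#   image_points  - (N, 2) detected 2D landmark positions in pixels
#   landmarks_3d  - (N, 3) the subject's 3D landmarks (e.g. metric landmarks)
#   camera_matrix - (3, 3) pinhole intrinsics, dist_coeffs - distortion (or zeros)
import cv2
import numpy as np

def frontalize(landmarks_3d, model_points, image_points, camera_matrix, dist_coeffs):
    # Estimate the rotation mapping the canonical (frontal) model onto the
    # observed face: camera_point ~ R @ model_point + t
    ok, rvec, tvec = cv2.solvePnP(
        model_points.astype(np.float64),
        image_points.astype(np.float64),
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("solvePnP failed")

    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix

    # Remove translation (center on the landmark centroid), then undo the head
    # rotation so the landmark cloud faces the camera again.
    centered = landmarks_3d - landmarks_3d.mean(axis=0, keepdims=True)
    frontal = centered @ R      # equivalent to applying R.T to each point
    return frontal
```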

samirchar commented 2 years ago

Perfect, that's what I thought, thank you! I was wondering: since I want a frontal version of the person's landmarks, couldn't I just use the "metric landmarks" in your code and maybe project them to image space somehow?
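
For context, a minimal sketch of that idea, assuming metric_landmarks is the (3, N) array produced by the face geometry helper used in head_posture.py and rendering it with a simple virtual frontal camera; the sign and scale conventions of the metric space may need adjusting:

```python
# Hedged sketch: project the (roughly frontal) metric landmarks onto a 2D image
# plane using a virtual camera with no rotation. All names are illustrative.
import cv2
import numpy as np

def project_frontal(metric_landmarks, image_size=(480, 480), focal=480.0, z_offset=50.0):
    h, w = image_size
    camera_matrix = np.array(
        [[focal, 0.0, w / 2.0],
         [0.0, focal, h / 2.0],
         [0.0, 0.0, 1.0]]
    )
    points_3d = metric_landmarks.T.astype(np.float64)  # (N, 3)

    # Virtual frontal camera: zero rotation, face pushed along +z so it sits in
    # front of the camera. The z sign/offset may need flipping depending on the
    # metric-space convention.
    rvec = np.zeros(3)
    tvec = np.array([0.0, 0.0, z_offset])
    points_2d, _ = cv2.projectPoints(points_3d, rvec, tvec, camera_matrix, np.zeros(4))
    return points_2d.reshape(-1, 2)
```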