st100945 closed this issue 6 years ago
The head pose transformation should be the inverse/transpose of the camera transformation. I suggest you consult a book like Multiple View Geometry by Hartley & Zisserman for help. Actually, the linear algorithm in eos estimates the head pose transform, not the camera. The demo app has an example of how to get the Euler angles, and eos does too. You could also take fit-model-ceres as an example and plug in whichever variables or projection model you'd like.
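For illustration, a minimal sketch of reading the Euler angles off the fitted pose (not verbatim from the demo app; it assumes a glm-based eos version where fitting::RenderingParameters::get_rotation() returns a glm::quat — newer, Eigen-based versions return an Eigen::Quaternionf instead):

```cpp
// Sketch only, assuming a glm-based eos version where
// fitting::RenderingParameters::get_rotation() returns a glm::quat.
#include "eos/fitting/RenderingParameters.hpp"
#include "glm/glm.hpp"
#include "glm/gtc/quaternion.hpp"
#include <iostream>

void print_head_pose_angles(const eos::fitting::RenderingParameters& rendering_params)
{
    const glm::quat rotation = rendering_params.get_rotation();
    // glm::eulerAngles returns (pitch, yaw, roll) in radians:
    const glm::vec3 euler = glm::eulerAngles(rotation);
    std::cout << "pitch: " << glm::degrees(euler.x)
              << " yaw: " << glm::degrees(euler.y)
              << " roll: " << glm::degrees(euler.z) << '\n';
}
```

rendering_params here would be the RenderingParameters obtained from the fitting, e.g. the second element of the pair returned by fitting::fit_shape_and_pose.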
Thanks for helping me out here. As you mentioned, eos estimates the head pose transformation relative to the camera. Just to get this clear (as I didn't find an explicit statement in the papers): does the rotation described by the Euler angles output by the demo app have its centre at the camera's pinhole (in the case where the orthographic camera projection model is applied)?
It's probably either at the camera centre or at the model's centre; you can very easily find out with some experimenting. In case it's the latter, it will obviously depend on where a particular model's centre is.
Is there a way to retrieve the image points corresponding to the landmarks after the model has been fitted? For example, fit-model-ceres uses previously defined landmarks in the image to fit the 3DMM. Is there a solution the other way round, i.e. to recover the landmarks' image points from a fitted model?
I don't really understand what you mean, but you can get the 3D and 2D positions for any vertices in the model. After you've fitted, you can use draw_sample (and, if you wish, subsequently sample_to_mesh) to access the 3D coordinates, and then also project them to 2D if needed.
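A rough sketch of that last step, assuming a glm-based eos version (core::Mesh vertices, get_modelview()/get_projection() on RenderingParameters, and fitting::get_opencv_viewport(), as used in the demo apps):

```cpp
// Sketch only: project one vertex of the fitted mesh back into the image.
// The mesh could come from morphable_model.draw_sample(shape_coefficients, color_coefficients)
// or directly from fitting::fit_shape_and_pose.
#include "eos/core/Mesh.hpp"
#include "eos/fitting/RenderingParameters.hpp"
#include "glm/glm.hpp"
#include "glm/gtc/matrix_transform.hpp" // glm::project

glm::vec2 project_vertex_to_image(const eos::core::Mesh& mesh, int vertex_index,
                                  const eos::fitting::RenderingParameters& rendering_params,
                                  int image_width, int image_height)
{
    // 3D model coordinate of the vertex (e.g. the vertex a landmark maps to):
    const glm::vec3 point(mesh.vertices[vertex_index][0],
                          mesh.vertices[vertex_index][1],
                          mesh.vertices[vertex_index][2]);
    // Apply the estimated pose and an OpenCV-style viewport to get pixel coordinates:
    const glm::vec3 projected =
        glm::project(point, rendering_params.get_modelview(),
                     rendering_params.get_projection(),
                     eos::fitting::get_opencv_viewport(image_width, image_height));
    return glm::vec2(projected.x, projected.y);
}
```

To get the image point of a particular landmark, you would look up the vertex index that landmark maps to (e.g. via the landmark mapper) and pass it as vertex_index.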
The eos library provides a linear scaled orthographic projection estimate of the camera pose. Is there also a possibility to retrieve the current head pose, or alternatively to translate the camera pose into the head pose? To be specific: the head pose in terms of the Euler angles, or a rotation whose centre is, for example, at the nose tip and which is directed towards the "camera".
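For what it's worth, a minimal sketch of the camera-to-head-pose relation mentioned above (head pose as the inverse/transpose of the camera transformation), assuming glm matrices:

```cpp
// Sketch only: if the estimated transform is treated as the camera (view)
// transform, the head pose is its inverse; for a pure rotation the inverse
// equals the transpose.
#include "glm/glm.hpp"

glm::mat4 head_pose_from_camera(const glm::mat4& camera_transform)
{
    return glm::inverse(camera_transform);
}
```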