patrikhuber / superviseddescent

C++11 implementation of the supervised descent optimisation method
http://patrikhuber.github.io/superviseddescent/
Apache License 2.0

Details about pose estimation #25

Closed tpengti closed 8 years ago

tpengti commented 8 years ago

Hello patrikhuber,

I have a question about your code. First, regarding this line:

Mat landmarks = (cv::Mat_<float>(1, 20) << 498.0f, 504.0f, 479.0f, 498.0f, 529.0f, 553.0f, 489.0f, 503.0f, 527.0f, 503.0f, 502.0f, 513.0f, 457.0f, 465.0f, 471.0f, 471.0f, 522.0f, 522.0f, 530.0f, 536.0f);

How were these landmarks obtained? I get landmarks in a different way, but I cannot get the right pose estimation. Second, about Mat facemodel; // The point numbers are from the iBug landmarking scheme — when testing on a different image file, does this need to change? Thanks!

patrikhuber commented 8 years ago

Hi,

Mat landmarks: I just landmarked them by hand on an image and copied the coordinates; the goal is just to provide a simple example. They are the 2D landmarks on a face image.

When testing on a different image file, does this need to change?

I'm sorry, I unfortunately do not understand your question!

tpengti commented 8 years ago

Hi Patrik, thank you for your reply. The coordinates in your "Mat landmarks" are all around 500. I read a .pts file from iBug and got these landmark coordinates, e.g.: 187.844833, 186.082748, 96.007217, 146.086288, 231.066940, 280.085144, 133.310715, 186.358597, 237.496811, 186.515488, 342.358246, 366.692749, 264.464661, 273.385010, 282.680420, 282.681854, 401.905457, 404.223694, 404.245697, 425.630554. But with these, your code outputs: pitch=-510.209, yaw=1159.74, roll=463.321.
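For reference, landmark coordinates like the ones above can be read from an iBug .pts file with a few lines of C++. This is a minimal sketch (read_pts is a hypothetical helper, not part of the library), assuming the standard .pts layout of a version line, an n_points line, and the x/y pairs between braces:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Sketch: read an iBug-style .pts file into flat coordinate lists.
// Assumes the usual "version: ... / n_points: ... / { x y ... }" layout.
bool read_pts(const std::string& filename,
              std::vector<float>& xs, std::vector<float>& ys)
{
    std::ifstream file(filename);
    if (!file) return false;
    std::string line;
    // Skip the header lines up to and including the opening brace.
    while (std::getline(file, line) && line.find('{') == std::string::npos) {}
    // Read "x y" pairs until the closing brace.
    while (std::getline(file, line) && line.find('}') == std::string::npos) {
        std::istringstream iss(line);
        float x, y;
        if (iss >> x >> y) { xs.push_back(x); ys.push_back(y); }
    }
    return !xs.empty();
}
```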

patrikhuber commented 8 years ago

Hi,

You have to keep in mind that my example is a toy example to show how to use the library and how to make it work for your own applications. I think the most likely explanation is the following: in these lines, I just generate 500 random training samples. The values of your landmark coordinates probably do not match the distribution of these training samples, which is why you get an unexpected result. You should think about what training data to use, and you would probably also want some kind of normalisation of the translation; the translation is most likely the main reason why your example doesn't work.
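One common way to remove a translation (and scale) mismatch is to centre the landmarks on their mean and rescale them, applying the same normalisation to the training samples. This is a minimal sketch with a hypothetical helper (not part of the library), using plain std::vector instead of cv::Mat to stay self-contained:

```cpp
#include <cmath>
#include <cstddef>
#include <numeric>
#include <vector>

// Sketch: centre the 2D landmarks on their mean and scale them so their
// root-mean-square distance from the centre is 1. The same transform must
// be applied to the training landmarks for the distributions to match.
void normalise_landmarks(std::vector<float>& xs, std::vector<float>& ys)
{
    const float mx = std::accumulate(xs.begin(), xs.end(), 0.0f) / xs.size();
    const float my = std::accumulate(ys.begin(), ys.end(), 0.0f) / ys.size();
    float sum_sq = 0.0f;
    for (std::size_t i = 0; i < xs.size(); ++i) {
        xs[i] -= mx; // remove translation
        ys[i] -= my;
        sum_sq += xs[i] * xs[i] + ys[i] * ys[i];
    }
    const float scale = std::sqrt(sum_sq / xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i) {
        xs[i] /= scale; // remove scale
        ys[i] /= scale;
    }
}
```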

tpengti commented 8 years ago

Hi Patrik, regarding your reply, "Probably the values of your landmark coordinates do not match the distribution of these training samples": I think so too, but I don't know how to match the distribution of these training samples. How should I transform my landmarks? I hope you can send me some code. Thanks!

patrikhuber commented 8 years ago

You should read the original paper, you can find it on arXiv. Particularly the parts about the pose estimation, since that's what you want to do. It really depends on your application, but I think the first thing I'd think about is how to handle translation. The paper should give you a good idea about that. Also in my example I just generate 500 random samples, you can of course generate more, or even better, learn from a database like AFLW or something like that, they have over 25'000 faces with annotated pose labels.