WIKI2020 / FacePose_pytorch

🔥🔥 The PyTorch implementation of head pose estimation (yaw, roll, pitch) and emotion detection with SOTA performance in real time. Easy to deploy, easy to use, and highly accurate. Solves all face detection problems in one go. (Extreme simplicity, speed, and efficiency are our guiding principles.)
MIT License

Use different landmarks model (original: PFLD) #14

Open · windspirit95 opened this issue 4 years ago

windspirit95 commented 4 years ago

Hi, could I use a different landmark extraction model, such as the 106-landmark model in this repo (https://github.com/deepinsight/insightface)? In your open-source code, I see that you refer to point1, point31, point51, point60, and point72 for calculating the yaw/pitch/roll, which could correspond to point 9, point 25, point 72, point 35, and point 93 in the 106-landmark map. Do I need to change the parameters in the formulas inside your code, such as yaw = int(yaw * 71.58 + 0.7037)? Thank you.
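For reference, the index correspondence described above could be written down roughly as follows. This is only a sketch of the asker's proposed mapping from the repo's 98-point PFLD indices to insightface's 106-point indices; the pairing is a guess, not a verified correspondence, and should be checked by plotting both landmark sets.

```python
# Hypothetical mapping from the PFLD 98-point indices used in this repo to the
# insightface 106-point indices suggested above. Verify visually before use.
PFLD_TO_106 = {1: 9, 31: 25, 51: 72, 60: 35, 72: 93}

def select_pose_points(landmarks_106, mapping=PFLD_TO_106):
    """Pick out the five points the pose formulas expect from a 106-point array."""
    return {pfld_idx: landmarks_106[idx_106] for pfld_idx, idx_106 in mapping.items()}
```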

WIKI2020 commented 4 years ago

You can try!

windspirit95 commented 4 years ago

I have tested it and it works. The performance on masked faces is also nice. I just needed to calibrate the pitch a little based on a bias value.

WIKI2020 commented 4 years ago

Good job! In the meantime, please also keep an eye on our new model, which will be released later.

windspirit95 commented 4 years ago

Thanks, I am still looking forward to your updated model :D

windspirit95 commented 4 years ago

About the crossover calculation, could you explain what the crossover point is? I see there is a division that can return NaN when x2 = x1 in the point_line function, so I want to understand this more clearly ^^ Thanks.

windspirit95 commented 4 years ago

I got it now, so I think you should cover this case in your code :)
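For anyone hitting the same problem: below is a minimal sketch of how the degenerate cases could be guarded. It assumes a point_line helper that, like the one in this repo, returns the foot of the perpendicular from a point onto the line through (x1, y1) and (x2, y2); the exact signature in your copy of the code may differ.

```python
def point_line(point, line):
    """Foot of the perpendicular from `point` onto the line through (x1, y1)-(x2, y2).

    Sketch of the guard discussed above: the original formula divides by
    (x2 - x1) and by the slope k1, which breaks down for vertical or
    horizontal lines, so those cases are handled explicitly.
    """
    x1, y1, x2, y2 = line
    x3, y3 = point

    if x2 == x1:          # vertical line: the foot shares its x coordinate
        return [x1, y3]
    if y2 == y1:          # horizontal line: the foot shares its y coordinate
        return [x3, y1]

    k1 = (y2 - y1) / (x2 - x1)    # slope of the line
    b1 = y1 - k1 * x1             # its intercept
    k2 = -1.0 / k1                # slope of the perpendicular through `point`
    b2 = y3 - k2 * x3
    x = (b2 - b1) / (k1 - k2)     # intersection of the two lines
    y = k1 * x + b1
    return [x, y]
```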

Amoswish commented 3 years ago

I tried the model from https://github.com/deepinsight/insightface and used point 9, point 25, point 72, point 35, and point 93, but I get wrong face pose angles. Why?

windspirit95 commented 3 years ago

I don't know how "wrong" it is in your case, but it worked in mine after I modified the pitch formula to pitch = int(1.497 * pitch_dis + 5.2) to calibrate it. Also, the image needs to be resized to (192, 192) if you want to use that model ^^ Note that for yaw, point 9, point 25, and point 72 are used; for pitch, point 72; for roll, point 35 and point 93.
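To make the adaptation above concrete, here is a rough sketch of how the remapped points and the adjusted pitch calibration could be wired together. The point_point distance helper and the crossover point are assumed to behave like the ones in this repo, the landmark indices follow the commenter's mapping, and the calibration constants are the commenter's empirical values for 192x192 crops, so treat this as illustrative rather than definitive.

```python
import math

def point_point(p1, p2):
    """Euclidean distance between two 2D landmarks."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pose_from_106(lm, crossover):
    """Sketch of the yaw/pitch/roll formulas using 106-point landmarks.

    `lm` is indexable by landmark number; `crossover` is the foot of the
    perpendicular from point 72 onto the line through points 9 and 25
    (e.g. computed with a point_line-style helper).
    """
    p9, p25, p72 = lm[9], lm[25], lm[72]
    p35, p93 = lm[35], lm[93]

    # Yaw: asymmetry ratio of the remapped contour/nose points, with the
    # linear calibration quoted earlier in the thread.
    yaw_mean = point_point(p9, p25) / 2.0
    yaw_right = point_point(p9, p72)
    yaw = (yaw_mean - yaw_right) / yaw_mean
    yaw = int(yaw * 71.58 + 0.7037)

    # Pitch: signed distance from point 72 to the crossover point, using the
    # commenter's first calibration for 192x192 input.
    pitch_dis = point_point(p72, crossover)
    if p72[1] < crossover[1]:
        pitch_dis = -pitch_dis
    pitch = int(1.497 * pitch_dis + 5.2)

    # Roll: angle of the line joining the two eye points.
    roll = math.degrees(math.atan2(abs(p35[1] - p93[1]), abs(p35[0] - p93[0])))
    if p35[1] > p93[1]:
        roll = -roll
    return yaw, pitch, int(roll)
```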

Amoswish commented 3 years ago

I have tried the method. In my case the yaw and roll do not need calibration, but the 5.2 in pitch = int(1.497 * pitch_dis + 5.2) is different from yours. I need to do more experiments to calibrate the pitch.

windspirit95 commented 3 years ago

Yeah, actually the pitch will be different when you change the image size from 112 to 192, as the distance between the two points is scaled up. I haven't carried out many experiments on it, so my numbers are still not exactly right ^^ P/S: After rechecking my final code, I have changed the pitch formula to pitch = int(3.055 * pitch_dis + 13.895). Sorry ^^
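Since these constants are purely empirical, one way to redo the calibration after changing the crop size is a quick linear fit of measured distances against ground-truth pitch angles. The sketch below uses made-up placeholder data purely to illustrate the fitting step; it is not the procedure the commenters actually used.

```python
import numpy as np

# Placeholder measurements: signed nose-to-crossover distances at 192x192 and
# the corresponding ground-truth pitch angles in degrees (fabricated values,
# chosen only to illustrate the fit).
pitch_dis = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
pitch_gt = np.array([-47.2, -16.7, 13.9, 44.4, 75.0])

# Least-squares fit of pitch = a * pitch_dis + b.
a, b = np.polyfit(pitch_dis, pitch_gt, 1)
print(f"pitch = int({a:.3f} * pitch_dis + {b:.3f})")
```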

WIKI2020 commented 3 years ago

@windspirit95 Now you can use the new keypoint algorithm.

windspirit95 commented 3 years ago

@WIKI2020 Thanks for your update, I appreciate it ^^ Could you suggest how to update the pose calculations using the new keypoint algorithm, since I see it produces 3D landmarks? Thanks.