windspirit95 opened 4 years ago
You can try!
I have tested it and it works. Also, the performance on masked faces is nice. I just need to calibrate the pitch a little based on the bias value.
Good job! At the same time, please also keep an eye on our new model, which will be launched later.
Thanks, I am still looking forward to your updated model :D
About the crossover calculation, could you explain what the crossover point is? I see there is a division that could return NaN when x2 = x1 in the point_line function, so I would like to understand this more clearly ^^ Thanks.
I got it now, so I think you should cover this case in your code :)
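For example, something like this minimal sketch would avoid the NaN; the function name and signature here are my guess at your point_line helper, not your actual code:

```python
import math

def point_line_distance(point, line_p1, line_p2, eps=1e-6):
    # Hypothetical sketch: distance from `point` to the line through
    # `line_p1` and `line_p2`, guarding the vertical-line case (x2 == x1)
    # that would otherwise divide by zero and produce NaN.
    x0, y0 = point
    x1, y1 = line_p1
    x2, y2 = line_p2
    if abs(x2 - x1) < eps:
        # Slope is undefined for a vertical line; the distance is simply
        # the horizontal offset from the line.
        return abs(x0 - x1)
    k = (y2 - y1) / (x2 - x1)          # slope
    b = y1 - k * x1                    # intercept
    # Point-to-line distance for the line k*x - y + b = 0
    return abs(k * x0 - y0 + b) / math.sqrt(k * k + 1)
```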
Why do I get wrong face pose angles when I try the model from (https://github.com/deepinsight/insightface) and use point 9, point 25, point 72, point 35, and point 93?
I don't know how "wrong" it could be in your case, but it worked in my case, and I modified the pitch calibration to pitch = int(1.497 * pitch_dis + 5.2). Also, the image needs to be resized to (192, 192) if you want to use that model ^^ Note that for yaw: point9, point25, point72 are used; for pitch: point72; for roll: point35, point93.
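For anyone else adapting the code, this is roughly how my change looks; it is only a sketch under my assumptions (the index grouping above, an index convention you should verify against the model output, and helper names that are mine):

```python
import cv2

# Sketch: landmark groups for the insightface 106-point model, following the
# mapping in the comment above. Check whether your unpacked landmarks are
# 0- or 1-based before using these indices.
YAW_IDX   = (9, 25, 72)
PITCH_IDX = (72,)
ROLL_IDX  = (35, 93)

def prepare_face_crop(img_bgr):
    # The 106-landmark model expects a 192x192 input instead of 112x112.
    return cv2.resize(img_bgr, (192, 192))

def calibrated_pitch(pitch_dis):
    # Empirical linear calibration I used after switching models; the
    # coefficients will likely differ on your own data.
    return int(1.497 * pitch_dis + 5.2)
```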
I have tried the method. In my case, the yaw and roll do not need calibration, but the 5.2 offset in pitch = int(1.497 * pitch_dis + 5.2) is different for me. I need to do more experiments to calibrate the pitch.
Yeah, actually the pitch will be different when you change the image size from 112 to 192, as the distance between the two points is scaled up. I haven't carried out many experiments on it, so my numbers are still not exactly right ^^ P/s: After rechecking my final code, I have modified the pitch to: pitch = int(3.055 * pitch_dis + 13.895). Sorry ^^
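If it helps, one way to keep a single calibration across crop sizes is to rescale the pixel distance to the size the coefficients were fitted at. This is only a sketch, assuming those coefficients were fitted on the 192x192 crop and that pitch_dis is measured in pixels of that crop:

```python
def calibrated_pitch(pitch_dis, crop_size=192):
    # Rescale the measured distance to the 192-pixel reference that the
    # coefficients below are (assumed to be) fitted at, so the same linear
    # fit can be reused for other crop sizes. Both the fit and the rescaling
    # are empirical and may need re-tuning on your data.
    pitch_dis_192 = pitch_dis * 192.0 / crop_size
    return int(3.055 * pitch_dis_192 + 13.895)
```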
@windspirit95 Now you can use the new key point algorithm.
@WIKI2020 Thanks for your update, I appreciate it ^^ Could you suggest how to update the pose calculations using the new key point algorithm, since I see it produces 3D landmarks? Thanks.
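In the meantime, I am thinking of something generic like this to get yaw/pitch/roll directly from the 3D landmarks, but I am not sure it matches your intended approach: the landmark indices are placeholders and the axis/sign conventions depend on the model, so the angles may need flipping or re-calibration:

```python
import numpy as np

def pose_from_3d_landmarks(lm3d, left_eye_idx, right_eye_idx, chin_idx):
    # Sketch: build a head coordinate frame from three 3D landmarks and read
    # Euler angles off the resulting rotation matrix. Indices are placeholders.
    lm3d = np.asarray(lm3d, dtype=np.float64)
    l_eye, r_eye, chin = lm3d[left_eye_idx], lm3d[right_eye_idx], lm3d[chin_idx]

    x_axis = r_eye - l_eye                        # roughly "right" across the face
    x_axis /= np.linalg.norm(x_axis)

    y_axis = chin - (l_eye + r_eye) / 2.0         # roughly "down" along the face
    y_axis -= x_axis * np.dot(y_axis, x_axis)     # make it orthogonal to x
    y_axis /= np.linalg.norm(y_axis)

    z_axis = np.cross(x_axis, y_axis)             # out of the face plane

    R = np.stack([x_axis, y_axis, z_axis], axis=1)  # head-to-camera rotation

    # Euler angles for the R = Rx(pitch) @ Ry(yaw) @ Rz(roll) convention.
    yaw   = np.degrees(np.arcsin(np.clip(R[0, 2], -1.0, 1.0)))
    pitch = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))
    roll  = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))
    return yaw, pitch, roll
```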
Hi, could I use a different landmark extraction model, such as the 106-landmark model in this repo (https://github.com/deepinsight/insightface)? In your open-source code, I see that you refer to point1, point31, point51, point60, and point72 for calculating the yaw/pitch/roll, which could correspond to point 9, point 25, point 72, point 35, and point 93 in the 106-landmark map. Do I need to change the parameters in the formulas inside your code, such as yaw = int(yaw * 71.58 + 0.7037)? Thank you.
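For constants like 71.58 and 0.7037, they look like a linear calibration to me, so I was thinking I could also just re-fit them against ground-truth angles if needed; a rough sketch with dummy data (the arrays, dataset choice, and fitting approach are my own assumptions, not your procedure):

```python
import numpy as np

# Sketch: re-fit the linear calibration (angle = a * raw + b) for a new
# landmark model. Replace the dummy arrays with the uncalibrated values the
# code computes from the new landmarks and the matching ground-truth angles
# (e.g. from a labelled set such as AFLW2000 or your own photos).
raw_values  = np.array([-0.62, -0.30, 0.01, 0.28, 0.65])   # dummy measurements
true_angles = np.array([-45.0, -20.0, 0.0, 20.0, 45.0])    # dummy ground truth (deg)

a, b = np.polyfit(raw_values, true_angles, deg=1)
print(f"yaw = int(yaw * {a:.2f} + {b:.2f})")  # new constants to replace 71.58 / 0.7037
```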