Closed vadik6666 closed 1 year ago
All the information can be found in this README: https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/head-pose-estimation-adas-0001/README.md. Please take a look.
@eaidova Thank you. I've read that page, and it only describes the validation dataset, not the training dataset. Correct me if I'm wrong.
I would appreciate it if you could share any information about the training dataset for the head pose model.
Hello @vadik6666. We used an internal dataset for this task. We used the following procedure for annotation and training:
1) Generate an artificial 3D head, rotate it on a uniform grid in (yaw, pitch, roll) space, and render images with known annotations.
2) Compute facial landmarks for these artificial images.
3) Compute facial landmarks for the real images.
4) Match the facial landmarks between real and artificial images, and assign each real image the annotation of the artificial image whose landmarks match best.
5) Train a network with two heads: one for facial landmark classification and one for (yaw, pitch, roll) regression. We use the usual L2 loss for artificial images and a truncated loss (L2 if loss < th, else 0) for real images. We used various augmentations: random backgrounds for artificial images, alpha blending for real images, drawing random lines and circles on real images, etc.
6) Remove the facial landmark classification head.
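Steps 1 and 4 above can be sketched roughly as follows. This is a hypothetical illustration, not the actual internal pipeline: the angle ranges, the grid step, and the summed L2 landmark distance are all assumptions.

```python
import numpy as np


def make_angle_grid(yaw_range=(-90, 90), pitch_range=(-70, 70),
                    roll_range=(-70, 70), step=10):
    """Uniform grid of (yaw, pitch, roll) triples in degrees (step 1).

    Ranges and step are placeholder values, not the ones used internally.
    """
    yaws = np.arange(yaw_range[0], yaw_range[1] + 1, step)
    pitches = np.arange(pitch_range[0], pitch_range[1] + 1, step)
    rolls = np.arange(roll_range[0], roll_range[1] + 1, step)
    grid = np.stack(np.meshgrid(yaws, pitches, rolls, indexing="ij"), axis=-1)
    return grid.reshape(-1, 3).astype(np.float32)


def match_annotation(real_landmarks, synth_landmarks, grid):
    """Assign the (yaw, pitch, roll) of the best-matching artificial image (step 4).

    real_landmarks:  (num_points, 2) landmarks of one real image
    synth_landmarks: (num_grid, num_points, 2) landmarks of rendered heads
    grid:            (num_grid, 3) angles used to render each artificial image
    """
    # Sum of per-landmark Euclidean distances; the true matching metric
    # is not stated in the thread, so this is an assumption.
    dists = np.linalg.norm(synth_landmarks - real_landmarks, axis=-1).sum(axis=-1)
    return grid[np.argmin(dists)]
```

Each rendered head then carries its grid angles as a known annotation, and each real image inherits the annotation of its nearest neighbor in landmark space.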
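The two regression losses from step 5 could look like the sketch below. The threshold value `th` is a placeholder; the post only states the rule "L2 if loss < th else 0" for real images.

```python
import numpy as np


def l2_loss(pred, target):
    """Plain L2 loss, used for the artificial images."""
    return np.sum((pred - target) ** 2)


def truncated_l2_loss(pred, target, th=100.0):
    """L2 loss for real images, zeroed above the threshold.

    Very large errors on real images are likely caused by noisy matched
    annotations from the landmark-matching step, so they contribute
    nothing rather than dominating the gradient. `th=100.0` is an
    assumed value, not the one used in training.
    """
    loss = l2_loss(pred, target)
    return loss if loss < th else 0.0
```

In an actual training loop these would be implemented as differentiable framework losses; plain NumPy is used here only to show the rule.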
@andreyanufr Thank you very much for the detailed answer.
Hello!
Thank you!