Hi, thanks for your great work! I have some questions about running inference on my own data.

In papers like *Revisiting data normalization for appearance-based gaze estimation* and *Learning-by-synthesis for appearance-based 3D gaze estimation*, the face image needs to be normalized with a warping matrix, but you mentioned in #30 that you do not do any face normalization.

So when I test a new image with a model trained on the face images in Gaze360, do I just need to crop the face using the bounding box and feed it into the model?
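In other words, the inference pipeline I have in mind is roughly the sketch below. The bounding box, the 224x224 input size, and the ImageNet mean/std normalization are my own assumptions (standard torchvision-style preprocessing), not something I verified against your repo, so please correct me if the actual transforms differ:

```python
import numpy as np

# Hypothetical inputs: a 480x640 RGB frame and a face bounding box from any
# face detector (the box values and format here are illustrative only).
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real image
x0, y0, x1, y1 = 50, 40, 210, 200                # (left, top, right, bottom)

# 1. Crop the face with the bounding box -- no warping/normalization matrix.
face = frame[y0:y1, x0:x1]

# 2. Resize to the assumed network input size (224x224). Nearest-neighbor via
#    index arrays keeps this numpy-only; in practice PIL/cv2 bilinear resize
#    would be used instead.
h, w = face.shape[:2]
rows = np.arange(224) * h // 224
cols = np.arange(224) * w // 224
face224 = face[rows][:, cols]

# 3. Scale to [0, 1] and apply ImageNet mean/std -- an assumption based on
#    typical torchvision training pipelines, to be checked against the repo.
x = face224.astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = (x - mean) / std

# 4. HWC -> NCHW with a batch dimension, ready to feed to the trained model.
x = x.transpose(2, 0, 1)[None]
print(x.shape)  # (1, 3, 224, 224)
```

Is this all that is needed, or is there any additional preprocessing step I am missing?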