swook / GazeML

Gaze Estimation using Deep Learning, a Tensorflow-based framework.
MIT License
507 stars 141 forks

Some questions about the 'unityeyes.py' #83

Open err-or opened 3 years ago

err-or commented 3 years ago

Hi @swook! Thank you for your repo. Both your papers and the repo are great assets to the gaze estimation research field.

I am also working with the UnityEyes simulator and I have some questions regarding your 'unityeyes.py' file, where you process the synthetic images. I am an amateur and some of the questions might be very basic, but here is what confuses me:

  1. In the process_coords function, why are you subtracting y coordinates from the height of the image? I have seen the same in visualization code, 'visualize.py', provided by Unityeyes in their repo but did not quite understand the reason behind it.
  2. At line 159 you correct the headpose. Why is this correction needed? (I tried generating images from UnityEyes with the camera parameters set to [0,0,0,0]; however, the headpose angle in this setting always appears as [0, 180, 0] for all images.)
  3. While converting the look vector to polar angles at line 211, why do you negate the first component with look_vec[0] = -look_vec[0]? (In 'visualize.py', they negate the second component instead: look_vec[1] = -look_vec[1].)
  4. What is the reason behind this correction of the yaw angle at line 215?
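For context, here is a minimal sketch of the two conversions I am asking about (questions 1 and 3). This is not the repository's exact code; the function names, the bottom-left-origin assumption for Unity screen coordinates, and the sign conventions in the vector-to-angle conversion are my own guesses about what is going on:

```python
import numpy as np

def process_coords(coords, image_height):
    """Flip the y coordinate: my assumption is that Unity reports screen
    coordinates with the origin at the bottom-left, while image arrays
    index rows from the top-left, so y must be mapped to (height - y)."""
    return [(x, image_height - y) for x, y in coords]

def vector_to_pitchyaw(look_vec):
    """Convert a 3D gaze vector to (pitch, yaw) in radians.
    Sign conventions here are illustrative only; this is where the
    look_vec[0] vs. look_vec[1] negation question arises."""
    v = np.asarray(look_vec, dtype=float)
    v = v / np.linalg.norm(v)
    pitch = np.arcsin(v[1])        # rotation about the camera x-axis
    yaw = np.arctan2(v[0], -v[2])  # rotation about the camera y-axis
    return pitch, yaw
```

For example, with a 480-pixel-high image, a Unity point (100, 30) near the bottom of the screen would map to (100, 450) near the bottom of the image array, and a look vector pointing straight at the camera along -z gives pitch = yaw = 0 under these conventions.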

I have tried to find answers to these questions both in your paper and on the UnityEyes page, but could not find any explanation. I would be really grateful if you could provide some insight. Thank you kindly!