TadasBaltrusaitis / OpenFace

OpenFace – a state-of-the-art tool for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

Gaze analysis heatmap and blink rate #229

Bilabong67 opened 7 years ago

Bilabong67 commented 7 years ago

Hello,

Thanks again for making OF available to us all. It is such a valuable resource!

I have two aims I would like to achieve through gaze detection:

  1. Gaze "heatmap": I would like to estimate how much time a person spends looking at different parts of the screen (e.g. 10% at the top-right corner, 50% in the middle, etc.), assuming I divide the screen into, say, 9 squares. How can I deduce from the OF gaze vectors where the person is looking on the screen (or even whether they are looking off-screen)?

  2. Blink rate: also, is there an easy way to assess a person's blink rate (i.e. the number of blinks per minute) using OF?

Many thanks!

TadasBaltrusaitis commented 7 years ago

Hi,

Thanks for taking an interest in OpenFace!

To answer your question:

  1. This is a bit tricky if you need accurate screen measures, but I believe it is doable. It will depend on how big the screen is and how far away the person is from the screen. OpenFace gaze vectors are best suited to relative measures of direction, but it is possible to map them to screen coordinates. To get such a mapping you will need to perform a calibration that accounts for the placement of the camera relative to the screen (and possibly for personal differences as well).
  2. This should be fairly straightforward. You can use the action unit corresponding to blinks (AU45). You will just need some filtering so that consecutive frames labelled as blinks are not counted as separate blinks; see the sketch below. I haven't tried assessing blink rate using OpenFace before, so I don't know how accurate it is.
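A minimal sketch of what that filtering could look like, assuming pandas and a FeatureExtraction CSV with `timestamp` and `AU45_c` columns (the 0.5 threshold and the 0.1 s debounce gap are untuned guesses):

```python
# Sketch: blinks per minute from OpenFace's AU45 (blink) presence column.
import pandas as pd

def blink_rate(csv_path, min_gap_s=0.1):
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()      # some versions emit ' AU45_c'
    closed = df['AU45_c'] > 0.5              # eyes labelled closed this frame
    # A blink starts on a rising edge: closed now, open on the previous frame.
    onsets = closed & ~closed.shift(fill_value=False)
    times = df.loc[onsets, 'timestamp'].tolist()
    # Merge onsets closer together than min_gap_s; those are usually flicker.
    blinks = [t for i, t in enumerate(times)
              if i == 0 or t - times[i - 1] >= min_gap_s]
    minutes = (df['timestamp'].iloc[-1] - df['timestamp'].iloc[0]) / 60.0
    return len(blinks) / minutes if minutes > 0 else 0.0

print(blink_rate('test.csv'), 'blinks/min')
```

Note that running FeatureExtraction with `-noAUs` strips the AU columns, so AUs have to stay enabled for this to work.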

Thanks, Tadas

Balasem commented 7 years ago

Hi. Regarding question 1, I think we are dealing with three coordinate systems of different dimensionality: the camera coordinates (3D), the screen coordinates (2D), and the eye-gaze coordinates (3D). The problem, in my opinion, is how to project the eye-gaze coordinates onto the screen coordinates. Any suggestions here? Thanks
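One way to sketch that projection, under strong assumptions: the webcam sits at the top centre of the screen looking straight out, the screen lies roughly in the camera's z = 0 plane, the head translation `pose_Tx`/`pose_Ty`/`pose_Tz` (mm, camera coordinates) stands in for the gaze origin, and the screen dimensions below are made up. A real setup would still need the per-user calibration Tadas mentions.

```python
# Sketch: intersect the OpenFace gaze ray with the screen plane, then bin
# the hit point into a 3x3 grid for a dwell-time "heatmap".
# Everything about the physical setup below is an assumption.
import numpy as np
from collections import Counter

SCREEN_W_MM, SCREEN_H_MM = 510.0, 290.0  # hypothetical 23" 16:9 monitor
# Camera at the top centre of the screen; +y points down in camera
# coordinates, so the screen centre sits SCREEN_H_MM / 2 below the camera.
SCREEN_CENTRE_MM = np.array([0.0, SCREEN_H_MM / 2.0])

def gaze_to_screen(origin_mm, direction):
    """Intersect a gaze ray with the z = 0 (screen) plane.
    origin_mm: 3D gaze origin in camera coordinates (e.g. pose_T*).
    direction: OpenFace gaze vector; z < 0 means towards the camera.
    Returns the (x, y) hit point in mm, or None if gazing away."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    if d[2] >= 0:
        return None                      # pointing away from the screen
    t = -origin_mm[2] / d[2]             # solve origin_z + t * d_z = 0
    return (np.asarray(origin_mm) + t * d)[:2]

def screen_cell(hit_mm, rows=3, cols=3):
    """Map a hit point to a (row, col) cell, or None if off-screen."""
    if hit_mm is None:
        return None
    u = (hit_mm[0] - SCREEN_CENTRE_MM[0]) / SCREEN_W_MM + 0.5  # 0..1, left-right
    v = (hit_mm[1] - SCREEN_CENTRE_MM[1]) / SCREEN_H_MM + 0.5  # 0..1, top-bottom
    if not (0.0 <= u < 1.0 and 0.0 <= v < 1.0):
        return None
    return int(v * rows), int(u * cols)

# Hypothetical per-frame data: (head translation in mm, averaged gaze vector).
frames = [(np.array([0.0, 150.0, 600.0]), np.array([-0.05, -0.20, -0.98]))]
heat = Counter(screen_cell(gaze_to_screen(o, g)) for o, g in frames)
total = sum(heat.values())
print({cell: n / total for cell, n in heat.items()})  # None = off-screen share
```

With a short calibration step (having the person fixate known on-screen targets), the fixed geometry above could be replaced by a fitted mapping, which accurate use will likely require.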

navneet1083 commented 7 years ago

Hi, any update on this? Has anyone found a solution or a way to tackle this? At least a confidence score for looking towards the screen (not worrying about the position on the screen where the person is looking), e.g.:

| frame | timestamp | confidence | success | gaze_0_x | gaze_0_y | gaze_0_z | gaze_1_x | gaze_1_y | gaze_1_z |
|-------|-----------|------------|---------|----------|----------|----------|----------|----------|----------|
| 260 | 25.9 | 0.883333 | 1 | -0.374037 | -0.114623 | -0.920303 | -0.530515 | -0.152685 | -0.833811 |
| 263 | 26.2 | 0.85 | 1 | -0.383336 | -0.106057 | -0.917499 | -0.534414 | -0.145568 | -0.832593 |

Here, the subject is not looking at the screen in frame 260 but is in frame 263; yet this is difficult to infer from the gaze data.
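For what it's worth, one rough heuristic (an assumption on my part, not a built-in OpenFace confidence) is to threshold the angle between the averaged gaze vector and the camera's optical axis; the 15-degree cone below is an untuned guess:

```python
# Sketch: crude "looking towards the camera/screen" test via the angle
# between the averaged gaze vector and the towards-camera (-z) axis.
import numpy as np

def gaze_angle_deg(g0, g1):
    g = (np.asarray(g0, float) + np.asarray(g1, float)) / 2.0
    g = g / np.linalg.norm(g)
    # angle between the gaze and the direction pointing back at the camera
    return np.degrees(np.arccos(np.clip(-g[2], -1.0, 1.0)))

def looking_at_screen(g0, g1, max_angle_deg=15.0):
    return gaze_angle_deg(g0, g1) <= max_angle_deg

# The two rows from the table above:
print(gaze_angle_deg([-0.374037, -0.114623, -0.920303],
                     [-0.530515, -0.152685, -0.833811]))  # frame 260: ~28.3
print(gaze_angle_deg([-0.383336, -0.106057, -0.917499],
                     [-0.534414, -0.145568, -0.832593]))  # frame 263: ~28.5
```

Both rows come out around 28 degrees, which illustrates the difficulty: gaze direction alone cannot separate these two frames. The angle to the screen also depends on where the head is, so combining `pose_Tx`/`pose_Ty`/`pose_Tz` (i.e. dropping `-noPose`) with a ray-screen intersection like the sketch further up seems necessary.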

Used the following command: `./bin/FeatureExtraction -f video1.avi -outroot ./output/ -of test.csv -no2Dfp -no3Dfp -noMparams -noAUs -noPose -q`

Any suggestions or approaches are welcome.

Thanks