treyescan / dynamic-aoi-toolkit

A toolkit for widescreen dynamic areas of interest measurements using the Pupil Labs Code eye tracker.
GNU General Public License v3.0

Unable to run the project #3

Open kudurupaka-vamshi-krishna opened 1 year ago

kudurupaka-vamshi-krishna commented 1 year ago

Hello @joriswvanrijn and @YasminFaraji,

  1. With the GitHub repository, we successfully completed the AOI selection and combining-AOI-selector-output tasks. But when overlaying AOIs and gaze positions over a video, we cannot proceed further because the script needs a "gp.csv" file. Unfortunately, we could not figure out from the paper or from the GitHub repository what the "gp.csv" file should contain, or how the gaze data obtained from Pupil Labs should be formatted in the "gp.csv" file. It would be helpful if you could provide us with some information.

  2. However, upon digging further into the project, we found scripts that generate the "gp.csv" file, but we are unable to run them. Could you provide us with the procedure?

  3. Moreover, for making an AOI selection, do we need to use the scene video obtained from the pupil labs eye tracker?

YasminFaraji commented 1 year ago

Hi @kudurupaka-vamshi-krishna,

Thanks for your interest in our dynamic-aoi-toolkit.

  1. & 2. I am glad that you successfully completed AOI selection and combining the files. In the flow chart provided in the readme (https://github.com/treyescan/dynamic-aoi-toolkit#4-aoi-hit-detection), you can see how gp.csv is generated using analyse.py. However, it could be the case that for your setup you need to tweak the code to get to the desired gp.csv. Basically, gp.csv contains the pooled surface files generated by Pupil Labs, with short gaps (<75 ms) filled by linear interpolation, longer gaps (>75 ms ±100 ms) set to NaN values, and linearly interpolated gaze timestamps. The headers of a properly generated gp.csv should be as follows: `,t,x,y,frame` (where the first column has no name but is the row ID as generated by pandas).
  3. For the AOI selection we used the scene video files that we show to the participants in the task on the screens. The video from the Pupil Labs eye tracker will be different for each participant. If you want to use the Pupil Labs video, you will have to do AOI selection on an individual participant level, which requires a different approach that is not suitable to perform with the dynamic-aoi-toolkit.
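To make the expected format concrete, here is a minimal pandas sketch of what a gp.csv with the headers described above might look like, and how a short gap could be filled by linear interpolation. The sample values and the assumed 50 Hz sampling rate are hypothetical, not taken from the toolkit:

```python
import io
import pandas as pd

# Hypothetical gp.csv content: an unnamed pandas index column,
# then t (seconds), x, y (normalized gaze position), and frame.
raw = io.StringIO(
    ",t,x,y,frame\n"
    "0,0.000,0.512,0.430,0\n"
    "1,0.020,0.515,0.428,1\n"
    "2,0.040,,,2\n"  # a short gap in the gaze signal
    "3,0.060,0.520,0.425,3\n"
)

gp = pd.read_csv(raw, index_col=0)

# Fill short gaps (< 75 ms, i.e. at most 3 samples at an assumed 50 Hz)
# by linear interpolation; longer gaps would stay NaN because of `limit`.
gp[["x", "y"]] = gp[["x", "y"]].interpolate(limit=3, limit_area="inside")

print(gp)
```

The one-sample gap at index 2 is filled with the midpoint of its neighbours (x ≈ 0.5175, y ≈ 0.4265), while any run of NaNs longer than `limit` samples would be left untouched, mirroring the >75 ms to-NaN rule described above.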

Please let me know if you have other questions. Yasmin