This toolkit includes tools to analyse Pupil Labs Core eye-tracking gaze data in relation to dynamic areas of interest (AOIs) on a wide screen. The tools included are: (1) an AOI selector (both automatic and manual), (2) an overlay of AOIs and gaze on the task video, and (3) AOI hit detection.
To use the toolkit, make sure Python 3 is installed. To install the latest version of this toolkit, run:

```bash
git clone git@github.com:treyescan/dynamic-aoi-toolkit.git
pip3 install -r requirements.txt
```
After that, make sure to copy `__constants.example.py` to `__constants.py` and change the parameters to your needs. Change the variable `data_folder` to point to the data folder, following the folder structure outlined below.
In order to use this toolkit, a task video must be prepared. Videos can be created in any dimensions, resolution and frame rate; just make sure to set `total_surface_width`, `total_surface_height` and `frame_rate` in `__constants.py` accordingly. The distance from the eyes to the screen (`distance_to_screen`) and the resolution of the screens (`ppi`) should also be entered.
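For reference, here is a minimal sketch of what a filled-in `__constants.py` could look like. The variable names are taken from this README; the values are placeholders, not recommendations:

```python
# __constants.py: example values only; adjust to your own setup

data_folder = '/path/to/data'  # see the folder structure outlined below

# task video properties
total_surface_width = 5760   # px (e.g. three 1920 px screens side by side)
total_surface_height = 1080  # px
frame_rate = 25              # frames per second

# physical setup, used to relate pixels to visual angles
distance_to_screen = 65      # distance from the eyes to the screen
ppi = 94                     # resolution of the screens (pixels per inch)
```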
When preparing the task video, make sure to place apriltags on the borders of the video; `border_apriltags.py` can be used for this purpose (see the apriltags overlay tool below). The first appearance of these apriltags marks the beginning of the task; this should be defined as the dummy surface in Pupil Capture.
Screen surfaces should also be defined in Pupil Capture. The number of surfaces and the x-coordinate bounds of the surfaces can be entered in `__constants.py`. This information is necessary when combining the surface files into one gaze position file in AOI hit detection.
Finally, we decided to put an apriltag in between each scene to track the synchronization. This should be a unique apriltag, not used as one of the border apriltags. The surface on this apriltag can also be defined within Pupil Capture. Make sure to note the beginning and ending frame numbers of its appearances in `data/videos/start_end_frames/synchronization/task1.json`.
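The schema of this JSON file is not described here; purely as an illustration (with hypothetical field names), it could hold one start/end frame pair per appearance of the synchronization apriltag:

```python
import json

# Hypothetical structure: start/end frame numbers for each appearance
# of the synchronization apriltag. Check the toolkit for the real schema.
sync_frames = {
    "start_frames": [1510, 4890],
    "end_frames": [1585, 4965],
}

with open("data/videos/start_end_frames/synchronization/task1.json", "w") as f:
    json.dump(sync_frames, f, indent=2)
```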
The `data_folder` should be structured as follows:

```
data/
├── input-aoi/   – all files related to the AOIs
├── input-gp/    – all files related to input data from Pupil Labs
├── output/      – all output files
└── videos/      – all videos
```

The AOI Selector allows the user to define dynamic AOIs. This can be done semi-automatically or manually. Both methods can be used simultaneously, after which the data files can be combined. We can check the data files by overlaying the CSV files on a video in the AOI overlay tool.
```bash
cd tools/AOI-selection/
python3 aoi_tracking.py --video="video.mp4" --start_frame=100
```

Usage:

- Replace `video.mp4` with the path to your video.
- Hit [enter] to play the video and hit [s] when you want to select an object.
- Hit [q] to quit.
```bash
cd tools/AOI-selection/

# use this to select frames and let the script interpolate the frames in between
python3 aoi_selection.py --video="video.mp4" --start_frame=100

# use this to select each frame manually
python3 aoi_selection.py --video="video.mp4" --start_frame=100 --manual
```

Usage:

- Replace `video.mp4` with the path to your video.
- Hit [enter] to play the video and hit [s] when you want to select an AOI.
- Confirm your selection with [s].
- Hit [q] to quit.
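For intuition, the non-manual mode amounts to linearly interpolating the AOI bounding box between two selected keyframes. A minimal sketch of that idea, not the toolkit's actual implementation:

```python
import numpy as np

def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate (x, y, w, h) boxes between two keyframes.

    Illustration only; assumes frame_b > frame_a.
    """
    box_a, box_b = np.asarray(box_a, float), np.asarray(box_b, float)
    boxes = {}
    for frame in range(frame_a, frame_b + 1):
        t = (frame - frame_a) / (frame_b - frame_a)
        boxes[frame] = tuple((1 - t) * box_a + t * box_b)
    return boxes

# AOI selected at frames 100 and 150; frames in between are interpolated
boxes = interpolate_boxes(100, (200, 300, 80, 120), 150, (260, 310, 90, 125))
```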
```bash
cd tools/AOI-selection/
python3 concat_files.py --folder data/testvideo
```

Usage:

- Replace `data/testvideo` with the path to your output folder.
- The combined file will be saved (in `combined_data/dataset.csv`). The console will show you the path of this file.

In AOI overlay, 3 tools are presented to view the selected AOIs and gaze positions. The scripts overlay each frame of the task with information, depending on the chosen tool. Options include: only AOIs, AOIs + gaze of one participant, and AOIs + gaze data of all available participants.
```bash
cd tools/overlay/
python3 overlay_only_aois.py --video="video.mp4" --aois="aois.csv" --start_frame=1000
```

Usage:

- The script will save `video_with_labels.mp4` in the same folder.

The single participant overlay script generates a video based on the video of the task. The gaze positions (`{participant folder}/gp.csv`) and AOIs will be overlaid, as well as an indicator showing whether or not the hazard button is pressed. For this, the Pupil Labs annotations are used (`{participant folder}/annotations.csv`).
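Conceptually, such an overlay draws each frame's gaze sample(s) on top of the video. A rough OpenCV sketch, where the column names `frame`, `x` and `y` are assumptions rather than the actual `gp.csv` schema:

```python
import cv2
import pandas as pd

gp = pd.read_csv("gp.csv")  # assumed columns: frame, x, y

cap = cv2.VideoCapture("video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
writer, frame_nr = None, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if writer is None:
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter("video_with_gaze.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    # draw every gaze sample belonging to this frame
    for _, row in gp[gp["frame"] == frame_nr].iterrows():
        cv2.circle(frame, (int(row["x"]), int(row["y"])), 10, (0, 0, 255), 2)
    writer.write(frame)
    frame_nr += 1

cap.release()
writer.release()
```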
```bash
# for one participant
cd tools/overlay/
python3 overlay_single_participant.py --video="video.mp4" --aois="aois.csv" --participant="{folder to participant}" --start_frame=1000
```

Usage:

- The script will save `video_with_labels_and_gaze.mp4` in the same folder.

```bash
# for multiple participants
cd tools/overlay/
python3 overlay_multiple_participants.py --video="video.mp4" --aois="aois.csv" --t="{folder of participants}" --m="T1" --groupcolors --ellipse
```
| Optional params | Description |
|---|---|
| `--start_frame=1000` | When set, the video will start exporting from this frame. |
| `--ellipse` | When set, an ellipse will be drawn around the gaze points of all participants. The center x and y are the mean x and y of all gaze points; the axis lengths are the standard deviations. The orientation is determined by calculating the angle of the largest eigenvector (see the sketch below). |
| `--groupcolors` | When set, participants will be color-grouped into glaucoma/control groups. |
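For the `--ellipse` option, the computation described above could look like this (a sketch, not the script's actual code):

```python
import numpy as np

def gaze_ellipse(points):
    """Ellipse over gaze points: mean center, std axes, eigenvector angle.

    points: (N, 2) array of x/y gaze positions of all participants.
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                # mean x and y
    cov = np.cov(points, rowvar=False)          # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues, ascending order
    largest = eigvecs[:, np.argmax(eigvals)]    # largest eigenvector
    angle = np.degrees(np.arctan2(largest[1], largest[0]))
    axes = np.sqrt(eigvals)                     # standard deviation per axis
    return center, axes, angle
```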
Usage:

- The `gp.csv` files in `{folder of participants}` are fetched (the last one per participant).
- The script will save `video_with_multiple_gp.mp4` in the same folder.
AOI hit detection provides a tool to calculate measures such as dwell time and entry time. For every gaze position, the corresponding frame is checked for a hit within the AOIs defined by the AOI selectors. With `merge_outputs.py`, the most recently generated output file of each participant is merged into one output file for statistical analysis purposes.
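In essence, a hit test checks whether a gaze point falls inside an AOI (expanded by a margin) on the matching frame. A simplified sketch, with the margin given in pixels rather than the toolkit's degree-based `error_angle`:

```python
def is_hit(gaze_x, gaze_y, aoi, margin_px=0):
    """Simplified AOI hit test; aoi = (x1, y1, x2, y2) in surface pixels.

    The toolkit derives its margin from error_angle (degrees), converted
    to pixels via ppi and distance_to_screen; margin_px stands in here.
    """
    x1, y1, x2, y2 = aoi
    return (x1 - margin_px <= gaze_x <= x2 + margin_px
            and y1 - margin_px <= gaze_y <= y2 + margin_px)

# gaze sample at (512, 300) against an AOI with a 15 px margin
print(is_hit(512, 300, (480, 260, 600, 340), margin_px=15))  # True
```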
We may manually add a `batch_id` to distinguish between different runs.
```bash
cd hit-detection
python3 analyse.py --p P-006 --mm T1 --t Deel1 --st 1 --id {batch_id}

# to see what arguments we may provide
python3 analyse.py -h

# to run multi analysis on all P-* and all T* and all tasks
# optional: provide the starting task for all analyses
python3 better-multi-analyse.py --st 1

# NB: the multi-analyse.py script (slower, not multi-threaded) can be used when the GUI can't be opened
```
Usage:

- Before running the analysis, make sure the parameters in `__constants.py` are set to your needs: `confidence_threshold`, `minimal_threshold_entry_exit`, `minimal_threshold_dwell`, etc.

Parameters:
| Parameter | Unit | Description |
|---|---|---|
| `confidence_threshold` | – | Pupil Labs provides a quality assessment of the pupil detection for every sample, as a "confidence" value between 0.0 (pupil could not be detected) and 1.0 (pupil was detected with very high certainty). Values below this threshold are marked as gap samples. |
| `valid_gap_threshold` | s | Threshold for gaps to be filled in by linear interpolation. Gaps longer than this threshold remain gap samples. |
| `add_gap_samples` | s | The samples around a gap to be considered as additional gap samples, where the pupil of the eye may be partially occluded. |
| `error_angle` | ° | Margin that is added around AOIs, in degrees. |
| `minimal_angle_of_aoi` | ° | If an AOI is smaller than `minimal_angle_of_aoi`, a margin is added to reach this minimal angle; after that, the `error_angle` margin is added. |
| `minimal_threshold_entry_exit` | s | If the time between an AOI exit and the next AOI entry is shorter than this threshold, these visits are combined into one visit. |
| `minimal_threshold_dwell` | s | When the dwell duration is below this threshold, it is not counted towards `total_dwell_time`. |
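To make the first two parameters concrete, here is a rough sketch of gap marking and interpolation (column names are assumed; this is a simplification of the toolkit's actual pipeline, and the threshold defaults are placeholders):

```python
import numpy as np
import pandas as pd

def fill_gaps(df, confidence_threshold=0.6, valid_gap_threshold=0.075):
    """Mark low-confidence samples as gaps; interpolate only short gaps.

    Assumed columns: 'timestamp' (s), 'x', 'y', 'confidence'.
    """
    df = df.copy()
    is_gap = df["confidence"] < confidence_threshold
    df.loc[is_gap, ["x", "y"]] = np.nan

    # linear interpolation between the valid samples around each gap
    filled = df[["x", "y"]].interpolate(limit_area="inside")

    # restore NaN for gap runs that exceed valid_gap_threshold
    run_id = (is_gap != is_gap.shift()).cumsum()
    for _, run in df[is_gap].groupby(run_id[is_gap]):
        duration = run["timestamp"].iloc[-1] - run["timestamp"].iloc[0]
        if duration > valid_gap_threshold:
            filled.loc[run.index, ["x", "y"]] = np.nan

    df[["x", "y"]] = filled
    return df
```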
```bash
cd hit-detection
python3 merge_outputs.py --id={batch_id}
```
The hit detection outputs accuracy files for each participant. An aggregate merge script is provided to facilitate easier processing in statistical software (e.g. SPSS).
```bash
cd hit-detection
python3 merge_accuracy.py --id={batch_id}
```
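Conceptually, the merge is a concatenation of every participant's accuracy file into one table. The glob pattern below is a hypothetical layout, not the toolkit's actual output structure:

```python
import glob
import pandas as pd

# Hypothetical layout: one accuracy CSV per participant
files = sorted(glob.glob("data/output/*/accuracy.csv"))
merged = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)
merged.to_csv("merged_accuracy.csv", index=False)
```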
```bash
cd screen-regions
python3 analyse.py
```

This part of the TREYESCAN toolkit places apriltags at the borders of the task video.

```bash
cd tools/apriltags
python3 border_apriltags.py --name="../videos/vid.mp4" --cols=8 --rows=2 --default-scale=3
python3 border_apriltags.py --name="../videos/vid.mp4" --cols=8 --rows=2 --default-scale=3 --large-scale=4 --large-scale-indices=0,5,6,11,12,13,14,15
```
Usage:

- `border_apriltags.py` can be run with different arguments, e.g. `--cols`, `--rows`, `--default-scale`, `--large-scale` and `--large-scale-indices` (see the examples above).
- In `/output`, a video with apriltags and a PNG file with the apriltag locations will be provided.

Faraji, Y., & van Rijn, J. W. (2024). Dynamic AOI Toolkit v1.2.0 (v1.2.0). Zenodo. https://doi.org/10.5281/zenodo.10535707
DOIs of all versions:

- 10.5281/zenodo.7019196
- 10.5281/zenodo.8029272
- 10.5281/zenodo.10535707
Issues and other contributions are welcome.
This toolkit is licensed under the GNU General Public License v3.