treyescan / dynamic-aoi-toolkit

A toolkit for widescreen dynamic areas of interest measurements using the Pupil Labs Core eye tracker.
GNU General Public License v3.0


Dynamic AOI Toolkit v1.2.0

This toolkit includes tools to analyse Pupil Labs Core eye tracking gaze data in relation to dynamic areas of interest (AOIs) on a wide screen. The tools included are: (1) an AOI selector (both automatic and manual), (2) an overlay of AOIs and gaze on the task video, and (3) AOI hit detection.

Table of Contents

  1. Installation
  2. Task Preparation
  3. Usage
    1. Data Structure
    2. AOI Selector
      1. Method 1: Tracking objects semi-automatically
      2. Method 2: Selecting AOIs manually
      3. Combining the AOI Selector output
    3. AOI Overlay
      1. Overlaying AOIs over a video
      2. Overlaying AOIs and gaze positions over a video
      3. Overlaying gaze positions of multiple participants and AOIs over a video
    4. AOI Hit detection
      1. Analyse metrics such as dwell time, entry time, etc.
      2. Merge outputs
      3. Merge accuracy
    5. Apriltags on video
    6. Screen analysis
  4. Citation
  5. Contribution
  6. License

Installation

To use the toolkit, make sure python3 is installed. To install the latest version of this toolkit, use:

git clone git@github.com:treyescan/dynamic-aoi-toolkit.git

pip3 install -r requirements.txt

After that, make sure to copy __constants.example.py to __constants.py and change the parameters to your needs. Change the variable data_folder to point to your data folder (see Data Structure below).

Task Preparation

In order to use this toolkit, a task video must be prepared. Videos can be created in any dimensions, resolution and frame rate; just make sure to change the values for total_surface_width, total_surface_height and frame_rate in __constants.py. The distance from the eyes to the screen (distance_to_screen) and the resolution of the screens (ppi) should also be entered.
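
As an illustration, the relevant part of __constants.py might look like the sketch below. All values are placeholders; use the values matching your own setup and the comments in __constants.example.py.

# __constants.py (example values only)
data_folder = '/path/to/data'  # root of the data folder (see Data Structure)

total_surface_width = 3840   # px, width of the task video
total_surface_height = 1080  # px, height of the task video
frame_rate = 25              # frames per second of the task video

distance_to_screen = 65      # distance from eyes to screen (check the expected unit)
ppi = 92                     # resolution of the screens, in pixels per inch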

When preparing the task video, make sure to place apriltags on the borders of the video; border_apriltags.py can be used for this purpose (see 5. Apriltags on video). The appearance of these apriltags marks the beginning of the task; define this as the dummy surface in Pupil Capture.

Screen surfaces should also be defined in Pupil Capture. The number of surfaces and the x-coordinate bounds of the surfaces can be entered in __constants.py. This information is necessary when combining the surface files into one gaze position file in AOI Hit detection.

Finally, we decided to put an apriltag in between each scene to track synchronization. This should be a unique apriltag, not one of the border apriltags. The surface on this apriltag can also be defined within Pupil Capture. Make sure to note the beginning and ending frame numbers of its appearance in data/videos/start_end_frames/synchronization/task1.json.
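
The exact schema of this file is defined by the toolkit; as a minimal sketch (field names assumed for illustration, not taken from the repository), it could look like:

{
    "start_frame": 120,
    "end_frame": 180
}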

Usage

1. Data structure (data folder)

2. AOI Selector

The AOI Selector allows the user to define dynamic AOIs. This can be done semi-automatically or manually. Both methods can be used simultaneously, after which the data files can be combined. We can check the data files by overlaying the csv files over a video in the AOI overlay tool.

Method 1: Tracking objects semi-automatically

cd tools/AOI-selection/
python3 aoi_tracking.py --video="video.mp4" --start_frame=100

Usage:

  1. Run the command above, replacing video.mp4 with the path to your video.
  2. A few questions are asked, such as the label and category of the tracked object. These are easily customizable in the script.
  3. A preview screen of the video will open.
  4. If you want to select an object to track from the first frame, draw a box on the video.
  5. If not: hit [enter] to play the video, and hit [s] when you want to select an object.
  6. The video starts playing and shows the tracked object. In this state, the results are directly saved to your output csv.
  7. When you're done, stop the script by hitting [q].
  8. Each AOI is outputted in a csv file with x and y coordinates of the AOI box in every frame. Additional information is given alongside each AOI, such as label and category.
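
As an illustration, a few rows of such an output csv might look like the sketch below (column names are assumed for illustration; check an actual output file for the exact header):

frame,x1,y1,x2,y2,label,category
100,512,303,640,420,car,vehicle
101,514,303,642,421,car,vehicle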

Method 2: Selecting AOIs manually

cd tools/AOI-selection/
# use this to select frames and let the script interpolate the frames in between
python3 aoi_selection.py --video="video.mp4" --start_frame=100

# use this to select each frame manually
python3 aoi_selection.py --video="video.mp4" --start_frame=100 --manual

Usage:

  1. Run the command above, replacing video.mp4 with the path to your video.
  2. A few questions are asked, such as the label and category of the tracked object. These are easily customizable in the script.
  3. A preview screen of the video will open.
  4. If you want to select an AOI from the first frame, draw a box on the video.
  5. If not: hit [enter] to play the video, and hit [s] when you want to select an AOI.
  6. The video starts playing without showing the AOI. When you want to select a new AOI, hit [s].
  7. When you're done, stop the script by hitting [q].
  8. The script will print the selected bounding boxes to the console and calculate the coordinates of the AOIs in between (see the interpolation sketch after this list).
  9. The script will show you the computed AOIs by playing the video again and will save them to the output file.
  10. Every AOI is outputted in a csv file with x and y coordinates of the AOI box in every frame. Additional information is outputted alongside each AOI, such as label and category.
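
The interpolation in step 8 can be illustrated with a minimal sketch (plain Python, not the toolkit's actual code; it assumes boxes are stored as (x1, y1, x2, y2) tuples):

def interpolate_boxes(frame_a, box_a, frame_b, box_b):
    """Linearly interpolate an AOI box between two selected keyframes.
    Boxes are (x1, y1, x2, y2) tuples; assumes frame_b > frame_a."""
    boxes = {}
    for frame in range(frame_a, frame_b + 1):
        t = (frame - frame_a) / (frame_b - frame_a)
        boxes[frame] = tuple((1 - t) * a + t * b for a, b in zip(box_a, box_b))
    return boxes

# e.g. a box selected at frame 100 and again at frame 110
aoi_frames = interpolate_boxes(100, (50, 60, 150, 160), 110, (70, 80, 170, 180))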

Combining the AOI Selector output

cd tools/AOI-selection/
python3 concat_files.py --folder data/testvideo

Usage:

  1. Make sure all output files from script 1 and 2 are saved in one folder.
  2. Run the command above, replacing data/testvideo with the path to your output folder.
  3. The files will be concatenated to a single file (combined_data/dataset.csv). The console will show you the path of this file.
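
Conceptually, the concatenation amounts to the following sketch (using pandas; a simplified illustration, not the actual implementation):

import glob
import os
import pandas as pd

folder = 'data/testvideo'
files = sorted(glob.glob(os.path.join(folder, '*.csv')))  # all AOI output files
combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

os.makedirs(os.path.join(folder, 'combined_data'), exist_ok=True)
combined.to_csv(os.path.join(folder, 'combined_data', 'dataset.csv'), index=False)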

3. AOI Overlay

In AOI Overlay, three tools are provided to view the selected AOIs and gaze positions. The scripts overlay each frame of the task video with information, depending on the chosen tool. Options include: only AOIs; AOIs plus the gaze of one participant; and AOIs plus the gaze data of all available participants.

Overlaying AOIs over a video

cd tools/overlay/
python3 overlay_only_aois.py --video="video.mp4" --aois="aois.csv" --start_frame=1000

Usage:

  1. Run the command above.
  2. The video will be outputted to video_with_labels.mp4 in the same folder.
  3. Make sure to move this video before creating a new video.
    1. NB: video processing may take a while since every frame has to be processed at full resolution.

Overlaying AOIs and gaze positions over a video

The single participant overlay script generates a video based on the video of the task. The gaze positions ({participant folder}/gp.csv) and AOIs will be overlaid, as well as an indicator of whether the hazard button is pressed. For this, the Pupil Labs annotations are used ({participant folder}/annotations.csv).

# for one participant
cd tools/overlay/
python3 overlay_single_participant.py --video="video.mp4" --aois="aois.csv" --participant="{folder to participant}" --start_frame=1000

Usage:

  1. Run the command above.
  2. The video will be outputted to video_with_labels_and_gaze.mp4 in the same folder.
  3. Make sure to move this video before creating a new video.
    1. NB: video processing may take a while since every frame has to be processed at full resolution.

Overlaying gaze positions of multiple participants and AOIs over a video

# for multiple participants
cd tools/overlay/
python3 overlay_multiple_participants.py --video="video.mp4" --aois="aois.csv" --t="{folder of participants}" --m="T1" --groupcolors --ellipse
Optional params:

  --start_frame=1000  When set, the video will start exporting from this frame.
  --ellipse           When set, an ellipse will be drawn around the gaze points of all participants. The center x and y are the mean x and y of all gaze points, and the axes lengths are the standard deviations. The orientation is determined by calculating the angle of the largest eigenvector (see the sketch below).
  --groupcolors       When set, participants will be color-grouped into glaucoma/control groups.
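
The ellipse computation described for --ellipse can be sketched as follows (NumPy; a simplified illustration, not the toolkit's actual code):

import numpy as np

def gaze_ellipse(points):
    """points: (N, 2) array with one gaze position (x, y) per participant."""
    center = points.mean(axis=0)            # ellipse center: mean x and y
    axes = points.std(axis=0)               # axes lengths: standard deviations
    eigvals, eigvecs = np.linalg.eigh(np.cov(points, rowvar=False))
    major = eigvecs[:, np.argmax(eigvals)]  # largest eigenvector
    angle = np.degrees(np.arctan2(major[1], major[0]))  # orientation in degrees
    return center, axes, angle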

Usage:

  1. All gp.csv files in {folder of participants} are fetched (the newest one per participant).
  2. Output: video_with_multiple_gp.mp4
    1. NB: video processing may take a while since every frame has to be processed at full resolution.

4. AOI Hit detection

AOI hit detection provides a tool to calculate measures such as dwell time and entry time. For every gaze position, the corresponding frame is checked for an AOI hit within the AOIs defined by the AOI Selector. With merge_outputs.py, the most recently generated output file of each participant is merged into one output file for statistical analysis purposes.
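
Conceptually, the per-sample test is simple (a sketch; the actual implementation also applies the angle-based margins and thresholds listed under Parameters):

def aoi_hit(gaze_x, gaze_y, box, margin=0):
    """True when a gaze point falls inside an AOI box (x1, y1, x2, y2),
    optionally enlarged by a margin in pixels."""
    x1, y1, x2, y2 = box
    return (x1 - margin <= gaze_x <= x2 + margin
            and y1 - margin <= gaze_y <= y2 + margin)

# For every gaze sample: find the video frame it belongs to, then test the
# gaze position against all AOI boxes defined for that frame.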

We may manually add a batch_id to distinguish between different runs.

cd hit-detection
python3 analyse.py --p P-006 --mm T1 --t Deel1 --st 1 --id {batch_id}

# to see what arguments we may provide
python3 analyse.py -h

# to run multi analysis on all P-* and all T* and all Tasks
# optional: provide the starting task for all analyses
python3 better-multi-analyse.py --st 1

# NB: multi-analyse.py (slower, as it is not multi-threaded) can be used when the GUI can't be opened

Usage:

  1. Put the data in the appropriate data folder (see Data Structure).
  2. Make sure all other files are in place:
    1. data/videos/synchronization/task.json
  3. Please check __constants.py for variables that can be adjusted to fit your own research needs, such as confidence_threshold, minimal_threshold_entry_exit, minimal_threshold_dwell, etc.

Parameters:

  confidence_threshold [-]
    Pupil Labs provides a quality assessment of the pupil detection for every sample, as a "confidence" value between 0.0 (pupil could not be detected) and 1.0 (pupil was detected with very high certainty). Samples below this threshold are marked as gap samples.
  valid_gap_threshold [s]
    Threshold for gaps to be filled in by linear interpolation. Gaps longer than this threshold remain gap samples.
  add_gap_samples [s]
    The samples around a gap that are treated as additional gap samples, since the pupil may there be partially occluded.
  error_angle [°]
    Margin that is added around AOIs, in degrees.
  minimal_angle_of_aoi [°]
    A margin is added if an AOI is smaller than minimal_angle_of_aoi; after that, the margin of angle_a is added.
  minimal_threshold_entry_exit [s]
    If the time between an AOI exit and the next AOI entry is shorter than this threshold, the two visits are combined into one visit.
  minimal_threshold_dwell [s]
    When a dwell duration is below this threshold, it is not counted towards total_dwell_time.
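
The gap handling that confidence_threshold and valid_gap_threshold describe can be sketched as follows (pandas; the column names 'x', 'y' and 'confidence' are assumptions, and this is a simplification of the toolkit's implementation):

import numpy as np
import pandas as pd

def fill_short_gaps(gp, confidence_threshold=0.6,
                    valid_gap_threshold=0.1, sample_period=1 / 120):
    """Mark low-confidence samples as gaps, then linearly interpolate
    only the gaps shorter than valid_gap_threshold (in seconds)."""
    gp = gp.copy()
    gp.loc[gp['confidence'] < confidence_threshold, ['x', 'y']] = np.nan

    is_gap = gp['x'].isna()
    run_id = (is_gap != is_gap.shift()).cumsum()   # label consecutive runs
    run_len = is_gap.groupby(run_id).transform('sum')
    short_gap = is_gap & (run_len * sample_period <= valid_gap_threshold)

    filled = gp[['x', 'y']].interpolate(limit_area='inside')
    gp.loc[short_gap, ['x', 'y']] = filled.loc[short_gap, ['x', 'y']]
    return gp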

Merge outputs

cd hit-detection
python3 merge_outputs.py --id={batch_id}

Usage:

  1. Make sure the file to be merged is the newest output file in each participant folder.
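
Selecting the newest output file per participant can be sketched as follows (folder layout and file patterns are assumptions for illustration):

import glob
import os
import pandas as pd

frames = []
for participant in sorted(glob.glob('data/P-*')):
    outputs = glob.glob(os.path.join(participant, 'output*.csv'))
    if outputs:
        newest = max(outputs, key=os.path.getmtime)  # most recently written file
        frames.append(pd.read_csv(newest))

merged = pd.concat(frames, ignore_index=True)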

Merge accuracy

The hit detection outputs accuracy files for each participant. An aggregate merge script is provided to facilitate processing in statistical software (e.g. SPSS).

cd hit-detection
python3 merge_accuracy.py --id={batch_id}

Usage:

  1. Make sure the file to be merged is the newest output file in each participant folder.

5. Apriltags on video

This part of the TREYESCAN toolkit places apriltags at the borders of the task video.

cd tools/apriltags
python3 border_apriltags.py --name="../videos/vid.mp4" --cols=8 --rows=2 --default-scale=3

# optionally: enlarge specific apriltags by their indices
python3 border_apriltags.py --name="../videos/vid.mp4" --cols=8 --rows=2 --default-scale=3 --large-scale=4 --large-scale-indices=0,5,6,11,12,13,14,15

6. Screen analysis

cd screen-regions
python3 analyse.py

4. Citation

Faraji, Y., & van Rijn, J. W. (2024). Dynamic AOI Toolkit v1.2.0 (v1.2.0). Zenodo. https://doi.org/10.5281/zenodo.10535707

5. Contribution

Issues and other contributions are welcome.

6. License

This toolkit is licensed under the GNU General Public License v3.0.