jodeleeuw / 219-2021-eyetracking-analysis


Compare replicability to calibration success #2

Open jkhartshorne opened 3 years ago

jkhartshorne commented 3 years ago

From @jodeleeuw

For assessing calibration quality, there are a few ways you could go about it.

As a preliminary step, filtering the data to `trial_type == 'webgazer-validate'` should get you a data frame with just the validation trials. We ran a validation trial immediately after calibration. The trial involves showing a set of dots on the screen for a few seconds each and recording all of the samples from webgazer for each dot. Each experiment defined a different set of validation dots, depending on where the relevant ROIs were.
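A minimal sketch of that filtering step in dplyr, assuming the trial-level data has already been loaded into a data frame called `data`:

```r
library(dplyr)

# `data` is assumed to be the trial-level data frame (one row per jsPsych
# trial) with a `trial_type` column, as described above.
validation_trials <- data %>%
  filter(trial_type == "webgazer-validate")
```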

The data for each of these trials includes columns for:

- `raw_gaze`: all the raw data points recorded from webgazer, organized by which point was shown on the screen.
- `percent_in_roi`: the percentage of samples for each dot that fall within some specified radius, calculated by the jsPsych plugin. I think we used 150px for all of the experiments, but I can verify that.
- `average_offset`: the average x,y offset of the cloud of points measured for each validation point. Also includes a value `r`, the radius of the circle centered at the midpoint of the cloud that contains half the points; basically a measure of variance.
- `validation_points`: coordinates of each of the points on the screen, in the same order as the other columns, so that you could see, e.g., whether calibration tends to be better on more central points.
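As a rough illustration (not the repo's actual code), one way to collapse `percent_in_roi` into a single per-trial quality score, assuming the column comes through as a JSON-encoded array of percentages, one per validation point:

```r
library(dplyr)
library(purrr)
library(jsonlite)

# Hypothetical helper: mean percent-in-ROI across validation points for one
# trial. Assumes `percent_in_roi` is a JSON string like "[85.2, 90.1, 78.4]".
mean_percent_in_roi <- function(x) {
  mean(fromJSON(x))
}

# `subject` is a placeholder for whatever the subject identifier column is;
# `trial_index` is jsPsych's built-in trial counter.
validation_summary <- validation_trials %>%
  mutate(mean_in_roi = map_dbl(percent_in_roi, mean_percent_in_roi)) %>%
  select(subject, trial_index, mean_in_roi)
```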

We also included a step where, if the validation was poor after the first attempt, we repeated the calibration and validation procedure one additional time, so some subjects have more validation trials than others. If you look at my R Notebook for group A (just pushed to GH now), you can see some initial attempts to extract validation info and use it to exclude subjects.
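A sketch of what that exclusion logic could look like, building on the summary above. The threshold and column names are placeholders; the actual criteria are in the R Notebook:

```r
library(dplyr)

# Hypothetical threshold: percent of samples that must fall inside the ROI
# radius for a subject to be kept.
inclusion_threshold <- 50

included_subjects <- validation_summary %>%
  group_by(subject) %>%
  slice_max(trial_index, n = 1) %>%  # keep each subject's final validation attempt
  ungroup() %>%
  filter(mean_in_roi >= inclusion_threshold) %>%
  pull(subject)
```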