dmirman / gazer

Functions for reading and pre-processing eye tracking data.

assign_aoi function for non-corner AOIs #4

Closed: simOne3107 closed this issue 3 years ago

simOne3107 commented 3 years ago

Hi, I am a beginner in R and in eye-tracking data analysis. My stimuli are shown in a triangle formation on the screen. I was wondering how exactly I can add the coordinates to the aoi_loc argument in such cases. Your vignette says that "each AOI location should be a separate row in a data frame that has variables xmin, xmax, ymin, and ymax", but I am not sure how that information should be entered in the code. Also, I am using a webcam-based eye tracker, so the coordinates will be different for each participant.

Here are the coordinates for my stimuli:

top --> x = 653, y = 683, width = 230, height = 172
left --> x = 204, y = 9, width = 230, height = 172
right --> x = 1102, y = 9, width = 230, height = 172

Data from one of my participants in a pilot study yielded the following:

top --> x_normalized = 0.400173611, y_normalized = 0.790509259, width_normalized = 0.199652778, height_normalized = 0.199074074
left --> x_normalized = 0.010416667, y_normalized = 0.010416667, width_normalized = 0.199652778, height_normalized = 0.199074074
right --> x_normalized = 0.789930556, y_normalized = 0.010416667, width_normalized = 0.199652778, height_normalized = 0.199074074

Thank you so much! I would appreciate it if you could post an example.

jgeller112 commented 3 years ago

@dmirman any suggestions?

jgeller112 commented 3 years ago

You can get xmin, xmax, ymin, ymax with the following:

xMin = x
yMin = y
xMax = x + width
yMax = y + height

You can then examine whether gaze falls within that AOI (top, right, left).
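
For example, a quick manual check (a sketch, not gazer code, assuming a hypothetical data frame gaze with pixel columns x and y, and using the top AOI from the original post):

# sketch: flag samples that fall inside the "top" AOI
xMin <- 653
yMin <- 683
xMax <- 653 + 230  # x + width
yMax <- 683 + 172  # y + height
gaze$in_top <- gaze$x >= xMin & gaze$x <= xMax &
  gaze$y >= yMin & gaze$y <= yMax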

dmirman commented 3 years ago

Do those x and y coordinates correspond to a corner or to the center of the stimulus? For simplicity, assuming they correspond to the bottom left corner of the stimulus, the arithmetic would be as @jgeller112 described and something like this should work for the stimulus positions:

aoi_loc <- data.frame(loc = c("top", "left", "right"),
                      xmin = c(653, 204, 1102),
                      xmax = c(653+230, 204+230, 1102+230),
                      ymin = c(683, 9, 9),
                      ymax = c(683+172, 9+172, 9+172))
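
You'd then hand that data frame to assign_aoi through its aoi_loc argument, something like this (a sketch; gaze stands in for your preprocessed gaze data):

# sketch: gaze is a hypothetical data frame of gaze samples in pixel coordinates
gaze_aoi <- assign_aoi(gaze, aoi_loc = aoi_loc)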

I've never analysed data from a webcam eyetracker, so I'm not sure about the peculiarities you might encounter, but you can calculate those values on the fly using the participant-specific normalized coordinates. Again, assuming x and y specify the bottom left corner and using your pilot participant's example:

library(dplyr)  # for the %>% pipe and mutate()

aoi_loc <- data.frame(loc = c("top", "left", "right"),
                      x_normalized = c(0.400173611, 0.010416667, 0.789930556),
                      y_normalized = c(0.790509259, 0.010416667, 0.010416667),
                      width_normalized = rep(0.199652778, 3),
                      height_normalized = rep(0.199074074, 3)) %>% 
  mutate(xmin = x_normalized, ymin = y_normalized,
         xmax = x_normalized + width_normalized,
         ymax = y_normalized + height_normalized)

Then, when you call assign_aoi, you'd need to specify screen_size = c(1, 1) because the coordinates are normalized. And you'd need to specify AOI sizes as a proportion of the screen, for example aoi_size = c(0.2, 0.2).
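
Putting it together, the call might look like this (a sketch: gaze is a hypothetical data frame of normalized gaze samples, and only the arguments named above are assumed):

# sketch: assign AOIs in normalized (0-1) screen coordinates
gaze_aoi <- assign_aoi(gaze,
                       screen_size = c(1, 1),
                       aoi_size = c(0.2, 0.2),
                       aoi_loc = aoi_loc)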

jgeller112 commented 3 years ago

I did some investigating and here is what the Gorilla website says:

We also give the exact same data, but in what we call 'normalised' space. The main issue with the raw data is that you cannot compare two participants who are using differently sized screens, so we also normalise coordinates into a unified space. The way the Gorilla layout engine works is that we lay everything out in a frame which is always in a 4:3 ratio, and then we try and make that frame as big as possible. The normalised coordinates are then relative to this frame, where 0,0 is the bottom left of the frame and 1,1 is the top right of the frame. The normalised coordinates are comparable between different participants - 0.5,0.5 will always be the centre of the screen, regardless of how big the screen is.

jgeller112 commented 3 years ago

I would also note that Gorilla has this warning about detecting fixations:

The current zone does not provide data which you can reliably detect fixations, saccades, scan paths and blinks with. Instead it provides estimates of gaze locations, with an associated confidence -- these can be used to create heatmaps of images, or percentage occupancy of areas of interest.

simOne3107 commented 3 years ago

this is really helpful! thank you guys!!!