ccp-eva / eyewit

👁‍🗨 Bootstrapping common eye-tracking tasks

⭐ Implement First Look Duration #33

Closed · kalaschnik closed this issue 2 years ago

kalaschnik commented 2 years ago

So far we can get the number of gaze shifts and the hit name of the first-look AOI (e.g., "top"), yet we are also interested in the duration of the first look.

Use last_hit_name to group consecutive hits within the same AOI (see the sketch below).
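A minimal sketch of that grouping step, assuming each gaze sample carries an AOI hit name (the sample layout and field names are illustrative, not the eyewit API):

```python
from itertools import groupby

# Gaze samples in recording order; "hit_name" says which AOI was hit
# (field names are illustrative, not the eyewit API).
samples = [
    {"t": 0,  "hit_name": "top"},
    {"t": 17, "hit_name": "top"},
    {"t": 33, "hit_name": "top"},
    {"t": 50, "hit_name": "bottom"},
]

# Collapse consecutive samples sharing a hit name into runs ("looks").
looks = [
    (hit, [s["t"] for s in run])
    for hit, run in groupby(samples, key=lambda s: s["hit_name"])
]

hit, times = looks[0]             # the first look
print(hit, times[-1] - times[0])  # top 33
```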

From the preregistration:

(B) FIRST LOOK: Second, we are planning to repeat our main analysis based on the duration of infants’ first look at the object before any looks away. This is in line with the main analysis by Yoon et al. (2008), who “chose to measure duration of first looks rather than total looking times because when an infant looks away from the screen, he/she has no evidence of the continuing existence of the object.” To increase comparability with the manual coding procedure by Yoon et al., we will define a look at the object as the time interval between the first fixation in the screen AOI and the end of the last fixation within the same AOI, including the duration of saccades between fixations. The first look ends when a gaze sample with coordinates outside the object AOI is detected or when the latency between two consecutive object fixations is more than 3 SDs longer than the median of a child’s gaze shift latency within the object AOI during all object looks over all outcome trials (assuming that the child must have looked away from the screen in this case).
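A sketch of the end-of-look rule quoted above, assuming fixations arrive as dicts with start/end times and an AOI label; the function name and data layout are assumptions, not eyewit's API. For brevity the latency threshold is computed from the fixations passed in, whereas the preregistration pools a child's gaze shift latencies over all outcome trials:

```python
import statistics

def first_look_duration(fixations, aoi="object", sd_factor=3):
    """Duration of the first look at `aoi`, per the rule above.

    Each fixation is assumed to be a dict with "start", "end", and
    "aoi" keys (times in ms). The look runs from the first AOI fixation
    to the end of the last AOI fixation before either (a) a fixation
    outside the AOI or (b) an inter-fixation latency more than
    `sd_factor` SDs above the median latency.
    """
    aoi_fix = [f for f in fixations if f["aoi"] == aoi]
    if not aoi_fix:
        return None

    # Gaze-shift latencies between consecutive fixations within the AOI.
    gaps = [b["start"] - a["end"] for a, b in zip(aoi_fix, aoi_fix[1:])]
    # "More than 3 SDs longer than the median" latency ends the look.
    threshold = (statistics.median(gaps) + sd_factor * statistics.stdev(gaps)
                 if len(gaps) >= 2 else float("inf"))

    # Walk from the first AOI fixation until an end criterion fires.
    start_idx = next(i for i, f in enumerate(fixations) if f["aoi"] == aoi)
    look_start = fixations[start_idx]["start"]
    look_end = fixations[start_idx]["end"]
    prev_end = look_end
    for f in fixations[start_idx + 1:]:
        if f["aoi"] != aoi:                    # sample outside the object AOI
            break
        if f["start"] - prev_end > threshold:  # implausibly long gap
            break
        look_end = f["end"]                    # extend look; saccades included
        prev_end = f["end"]
    return look_end - look_start

fix = [
    {"start": 0,   "end": 200, "aoi": "object"},
    {"start": 230, "end": 400, "aoi": "object"},
    {"start": 420, "end": 500, "aoi": "other"},
]
print(first_look_duration(fix))  # 400
```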

Note: Due to the restrictions of our eye-tracking-based measure of looking times, our first look measure deviates from the measure by Yoon et al. (2008) in that we do not focus on the entire screen but instead on the object AOI. Compared to Yoon et al.’s manual coding procedure, it is difficult to determine the end timepoint of the first look in our automatic eye-tracking approach (i.e., the moment when the child starts moving their gaze away from the screen). Without any video recordings of the infant’s face, capturing that the infant’s gaze has left the screen requires gaze samples outside the target AOI in the eye-tracking data. If we used the entire screen as the target AOI, we would reduce the availability of such no-target gaze samples, as “no-target” would be equivalent to “no-screen” gaze samples, which are rarely recorded and indistinguishable from missing values caused by look-aways or recording errors. With the object AOI as the target area, the remaining screen area outside the object AOI gives us a relatively larger trackable area, increasing the detection of “no-target” gaze samples.

To accommodate the remaining risk that the first look ends with a look-away from the screen whose outward saccade the eye tracker does not detect, we decided on an additional time criterion that accounts for an individual’s saccade speed during their looks at the object. This criterion relies on the median rather than the mean latency, as the median is less affected by outliers (e.g., those caused by missing values) and therefore more robust.
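A tiny numeric illustration of why the median is the more robust location estimate here (the values are made up):

```python
import statistics

# Toy gaze-shift latencies (ms); one spuriously long gap caused by
# missing samples.
latencies = [120, 128, 135, 140, 2500]

print(statistics.mean(latencies))    # 604.6 -> inflated by the outlier
print(statistics.median(latencies))  # 135   -> barely affected
```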

kalaschnik commented 2 years ago

closed by 1cc4ffc