JMankewitz closed this 1 year ago
I think the timing of the computer's loops and the actual eyetracking sampling are different. Right now our look smoothing is supposed to be 150ms over 9 loops, but when I count the total loops, it gets into the 100,000s (which would imply a trial length of about 5 minutes... which this is not).
I instead added some logic that tracks whether the switch is a switch to "off" or a switch to "none". If the switch is to "off", a counter only triggers once the off look has lasted more than 10,000 of these mystery timepoints. If the switch is to "none" or to the other image, it only needs to stay switched for 10 timepoints.
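A minimal sketch of that dual-threshold idea (the function and state names here are mine, not the repo's; only the 10,000 and 10 thresholds come from the actual change):

```python
OFF_THRESHOLD = 10000    # loops an "off" look must persist before it counts
SWITCH_THRESHOLD = 10    # loops a switch to "none"/other image must persist

def update_gaze_state(sample_aoi, state):
    """Return the smoothed AOI, only switching once a new look has persisted."""
    if sample_aoi == state["aoi"]:
        state["pending"], state["count"] = None, 0   # no switch in progress
        return state["aoi"]
    if sample_aoi != state["pending"]:
        state["pending"], state["count"] = sample_aoi, 0  # new candidate AOI
    state["count"] += 1
    threshold = OFF_THRESHOLD if sample_aoi == "off" else SWITCH_THRESHOLD
    if state["count"] >= threshold:
        state["aoi"], state["pending"], state["count"] = sample_aoi, None, 0
    return state["aoi"]

state = {"aoi": "none", "pending": None, "count": 0}
```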
This seems to have worked! It is very slightly less responsive, but not perceptibly so imo, especially with these wiggly babies.
I'm closing for now, but will revisit after the pilot to see if I need to decrease the away threshold.
Update! This is actually much trickier than expected. This code was written under the assumption that each Python loop was limited by the eyetracker sampling rate (every 16.66ms). This seems to be true based on the Tobii output when there is a successful track, but there was still strange behavior during strings of off looks, where the spacing between timebins was more like 0.1ms instead of 16ms (see https://github.com/JMankewitz/baby-info/blob/experiment-dev/BabyInfo_v1/eyetrackingData/testdebug12_TOBII_output.tsv).
The code I inherited from Martin assumes that each loop in the Python script takes 16.66ms and uses this assumption to build the thresholds for triggers. I added code to write a row to a csv on each pass through the loop, and it turns out the script actually loops every ~0.1ms! So the loop rate is tied to the resolution of the computer's clock, not to the sampling rate of the eyetracker.
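For reference, the loop-rate check was roughly this (illustrative, not the exact code I committed):

```python
import csv
import time

# Log a timestamp on every pass through the main loop, then inspect the deltas.
with open("loop_timing_debug.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["loop_index", "t_ms", "delta_ms"])
    prev = time.perf_counter()
    for i in range(10000):
        # ... the experiment's main loop body would run here ...
        now = time.perf_counter()
        writer.writerow([i, now * 1000.0, (now - prev) * 1000.0])
        prev = now
```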
Now I'm going to refactor the code so that it uses the amount of time that has passed on the internal clock to determine contingent thresholds. This might introduce some latency, but since I can't reliably estimate how many loops to expect on a given computer, I think this is the better approach.
Refactored the code in 78beba56e2cf30928b9bcd84ee8f746ffc13bec3; it now uses libtime to mark the last "start" time for each AOI. At each timepoint it tracks the difference between curtime and the recorded audio start time.
(actually f3c0e42b756ee83daa9777197e66fbade5afa0e8)
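In outline, the refactor looks something like this (a sketch, assuming PyGaze's libtime, whose get_time() returns milliseconds on the experiment clock; the state dict and threshold names are placeholders, not the repo's):

```python
from pygaze import libtime

LOOK_THRESHOLD_MS = 150  # a look must persist this long to trigger

libtime.expstart()  # start the experiment clock (done once, early in the script)

def aoi_elapsed_ms(sample_aoi, state):
    """Stamp a fresh start time whenever the AOI changes; return ms held."""
    curtime = libtime.get_time()
    if sample_aoi != state["aoi"]:
        state["aoi"] = sample_aoi    # AOI changed: record a new start time
        state["start"] = curtime
    return curtime - state["start"]

state = {"aoi": "none", "start": libtime.get_time()}
# inside the main loop:
if aoi_elapsed_ms("left", state) >= LOOK_THRESHOLD_MS:
    pass  # the look has persisted long enough: fire the contingent trigger
```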
I thought that because we were dropping "off" looks from the average gaze smoothing, blinks and mis-tracks wouldn't break the contingent code: https://github.com/JMankewitz/baby-info/blob/1f590bffd5408e4bb852b9d15a1c211b17b0f2d4/BabyInfo_v1/BabyInfo_v1.py#LL596C13-L596C13
However, when testing with Dasha, the audio restarted whenever she blinked. This is not the expected behavior, so I should double-check the "off" logic during the contingent phase.
I may only need to add some "if off for more than n timepoints, then shift; otherwise keep playing" logic.
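Something like this sketch, where OFF_GRACE_MS is a placeholder value I'd tune after piloting:

```python
OFF_GRACE_MS = 500  # illustrative: off looks shorter than this are treated as blinks

def off_look_action(sample_aoi, state, curtime_ms):
    """Decide whether an 'off' sample should interrupt the contingent audio."""
    if sample_aoi == "off":
        if state["off_start"] is None:
            state["off_start"] = curtime_ms   # start timing the off look
        if curtime_ms - state["off_start"] > OFF_GRACE_MS:
            return "shift"                    # sustained look-away
        return "keep_playing"                 # probably a blink or mis-track
    state["off_start"] = None                 # track recovered: clear the timer
    return "keep_playing"
```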