Hi guys,
I recently decided to use OpenSesame with our Tobii TX-300 eye tracker to run an antisaccade paradigm. So far, everything works fine; I just have one major issue left to resolve: the output file from the Tobii only contains gaze points on the display and the pupil size for the left and right eye. To calculate reaction times from the central cue to the target cue, I would also need each participant's eye position in the room (x, y, z coordinates), but I couldn't find any way to change the output. Is there a solution for this?
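For context on what I'm after: in the Tobii Pro SDK for Python (`tobii_research`), each gaze-data sample is a dict that, as far as I can tell, includes the 3-D gaze origin (the eye position relative to the tracker, in mm) under keys like `left_gaze_origin_in_user_coordinate_system`. A minimal sketch of the extraction I'd want, shown here on a mock sample since I can't assume the hardware is attached (the coordinate values are made up for illustration):

```python
# Hypothetical sketch: pulling the 3-D eye position (gaze origin) out of a
# Tobii Pro SDK gaze-data dict. In a real session this function would be
# called from the callback passed to eyetracker.subscribe_to(...).

def eye_positions(sample):
    """Return (left_xyz, right_xyz) gaze origins, in mm, from a gaze-data dict."""
    return (sample['left_gaze_origin_in_user_coordinate_system'],
            sample['right_gaze_origin_in_user_coordinate_system'])

# Mock sample with made-up coordinates, millimetres relative to the tracker.
mock = {
    'left_gaze_origin_in_user_coordinate_system': (-31.2, 12.5, 612.0),
    'right_gaze_origin_in_user_coordinate_system': (29.8, 11.9, 609.4),
}
left, right = eye_positions(mock)
print(left)   # → (-31.2, 12.5, 612.0)
```

What I don't know is how to get these fields into the file that OpenSesame writes, rather than only the on-screen gaze points and pupil sizes.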
Thanks in advance for your help, Alex