isolver / ioHub

A Python program, running as an independent process, that provides a 'proxy-like' service for experiment runtimes (e.g. PsychoPy) and devices (keyboard, mouse, parallel port, eye tracker, ...).

Implement eye tracker HW independent calibration accuracy feedback graphics #48

Open isolver opened 11 years ago

isolver commented 11 years ago

Shown optionally after a calibration. Should the feedback also be saved as an image and stored with the session data files?
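If saved, a minimal sketch of generating that image could look like the following (matplotlib, the file name, and the per-session directory layout are all assumptions here, not existing ioHub behavior):

```python
import os
import matplotlib.pyplot as plt

def save_calibration_feedback(targets, gaze_points, session_dir):
    """targets, gaze_points: lists of (x, y) screen positions in pixels;
    session_dir: assumed per-session data directory."""
    fig, ax = plt.subplots()
    ax.scatter(*zip(*targets), marker='+', s=80, label='targets')
    ax.scatter(*zip(*gaze_points), s=10, alpha=0.5, label='gaze samples')
    ax.invert_yaxis()                  # screen coordinates: y grows downward
    ax.legend()
    fig.savefig(os.path.join(session_dir, 'calibration_feedback.png'))
    plt.close(fig)
```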

garyfeng commented 11 years ago

YES, please.

We are also trying to implement real-time gaze viewing on a second monitor, as a way to monitor calibration drift over time. Would appreciate any suggestions.
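One possible approach, sketched under assumptions: draw a gaze cursor in a second PsychoPy window opened on screen 1; `get_latest_gaze()` is a hypothetical stand-in for whatever the tracker's sample source turns out to be.

```python
from psychopy import visual, event

def get_latest_gaze():
    """Hypothetical placeholder: return the newest (x, y) gaze sample
    in pixels, or None if no fresh sample is available."""
    raise NotImplementedError

win = visual.Window(screen=1, fullscr=True, units='pix')  # the 2nd monitor
cursor = visual.Circle(win, radius=10, fillColor='red')

while not event.getKeys(['escape']):      # press escape to stop viewing
    pos = get_latest_gaze()
    if pos is not None:
        cursor.pos = pos
        cursor.draw()
    win.flip()
win.close()
```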

isolver commented 10 years ago

Switching this to support any ioHub eye tracker interface.

Perhaps this can be done by supplying a 'validation' procedure that any eye tracker can run: ioHub collects samples during validation and then plots the validation points along with N samples taken from a window of time after each target was displayed.

Also display calculated min / max / average error in pixels and degrees.
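A rough sketch of that error calculation, assuming per-target gaze samples in pixel coordinates, and an assumed monitor distance / pixel pitch for the degree conversion (none of this is existing ioHub API):

```python
import math

# Assumed display geometry for the pixel -> degree conversion.
MONITOR_DISTANCE_MM = 600.0   # assumption: eye-to-screen distance
PIXEL_PITCH_MM = 0.27         # assumption: physical size of one pixel

def pixels_to_degrees(error_px):
    """Convert an on-screen error in pixels to visual degrees."""
    return math.degrees(math.atan2(error_px * PIXEL_PITCH_MM,
                                   MONITOR_DISTANCE_MM))

def validation_error_stats(targets, samples_by_target):
    """targets: {target_id: (x, y)} positions in pixels.
    samples_by_target: {target_id: [(gx, gy), ...]} gaze samples taken
    from a window of time after each target was displayed."""
    errors_px = [math.hypot(gx - tx, gy - ty)
                 for tid, (tx, ty) in targets.items()
                 for gx, gy in samples_by_target[tid]]
    stats_px = {'min': min(errors_px), 'max': max(errors_px),
                'mean': sum(errors_px) / len(errors_px)}
    stats_deg = {k: pixels_to_degrees(v) for k, v in stats_px.items()}
    return stats_px, stats_deg
```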

garyfeng commented 10 years ago

Sounds good. Could there also be a way to accept/reject the calibration based on the validation result? I can probably add a function that uses a webcam and face tracking for head-position feedback.
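A minimal sketch of such an accept/reject check, with an illustrative (assumed) 1.0-degree threshold on the mean validation error:

```python
# Illustrative (assumed) threshold on the mean validation error.
MAX_MEAN_ERROR_DEG = 1.0

def calibration_acceptable(stats_deg):
    """Accept the calibration only if the mean validation error,
    in visual degrees, is at or below the assumed threshold."""
    return stats_deg['mean'] <= MAX_MEAN_ERROR_DEG
```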


isolver commented 10 years ago

Yes, good point. A validation stage can (optionally) be run after the eye tracker calibrates. The user could then press one key to exit the setup routine, or a different key to rerun the calibration / validation process.
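A sketch of that setup loop; `run_calibration`, `run_validation`, `show_feedback`, and `wait_for_key` are all hypothetical placeholders for the tracker-specific and display code:

```python
ACCEPT_KEY = 'return'   # assumed key: keep this calibration and exit
RETRY_KEY = 'c'         # assumed key: rerun calibration + validation

def eye_tracker_setup():
    while True:
        run_calibration()                        # hypothetical helper
        stats_px, stats_deg = run_validation()   # hypothetical helper
        show_feedback(stats_px, stats_deg)       # plot points + error stats
        if wait_for_key([ACCEPT_KEY, RETRY_KEY]) == ACCEPT_KEY:
            return stats_deg                     # accepted; leave setup
        # otherwise loop and calibrate/validate again
```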
