dcnieho / Titta

Matlab and PsychToolbox interface to Tobii eye trackers using Tobii Pro SDK

Calibrating Monkeys + giving rewards #7

Closed iandol closed 5 years ago

iandol commented 5 years ago

Hi, we normally always give a reward when an animal fixates (we trigger an Arduino to send a TTL to a juice pump), and therefore it is good to give a reward when the animal looks at a calibration/fixation point. I wonder what the best way would be to allow Titta's calibrate function to optionally run a function? I could create a function handle to pass in a settings entry, and if that setting is not empty, Titta could run the handle after each calibration point is collected?

dcnieho commented 5 years ago

Hi Ian,

Do you mean you would like a function to run upon each accepted calibration or validation point? (Note that fixation isn't checked for validation.)

I could add optional callbacks that the user can provide, to be called at those moments (cal-point accept and val-point accept). Would that do the trick for you?


iandol commented 5 years ago

Exactly, a callback would be perfect. The general idea for monkeys is that you present each point manually, and when you are sure they are looking, you reward them and move to the next point (autoPace=0 is the mode for that).

Another very nice and useful feature for monkeys/babies (which the EyeLink has in theory, but it is unreliable) is that during manual calibration you can "go back" and recalibrate a point by pressing [delete] or [backspace]. Currently in Titta you can only go forwards one point using [space], or press [esc] to go back and start all over again. Not sure how easy this would be to implement?

dcnieho commented 5 years ago

Hi Ian,

OK, callbacks are upcoming. The backspace point is interesting; it should be easy enough to implement.

Have you seen the two-screen mode of Titta (readmeTwoScreens.m)? See also the documentation of calibrate() in the readme. Feedback on that is very welcome, especially regarding whether the info shown during setup and calibration is sufficient and/or whether there is something I could consider adding.


dcnieho commented 5 years ago

OK, I have implemented the callback. You get whether the Tobii SDK reports success for data collection for the point. To my knowledge, this only fails if the eyes aren't tracked; it'll happily succeed when you are looking at completely the wrong location (how could it know, when it's still calibrating...). So it might still be hard to give feedback/reward for looking at the point; instead you may end up giving feedback for keeping the eyes open.
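For illustration, a minimal sketch of wiring a reward into such a callback. Assumptions: the setting is the pointNotifyFunction that comes up later in this thread, the callback's inputs can simply be ignored for this purpose, and the serial command that triggers the Arduino is hypothetical.

```matlab
rewardPort = serialport('/dev/ttyACM0', 115200);         % Arduino driving the juice pump
settings   = Titta.getDefaults('Tobii Pro Spectrum');
settings.cal.pointNotifyFunction = ...                   % exact field location is an assumption
    @(varargin) write(rewardPort, uint8('R'), 'uint8');  % hypothetical TTL-trigger command
EThndl = Titta(settings);
EThndl.init();
```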

You could try to look into (peek) the gaze collected during the calibration, but that might not be all that meaningful either, uncalibrated as it is. For the Spectrum, also expect to see strong oscillations in it, created by the Spectrum trying out dark- and bright-pupil mode by alternating between them rapidly.

Let's leave this issue open while I look at the redo-last-point functionality. I think I'll implement it only for autoPace=0, the only mode in which it makes sense (i.e., is manageable by the operator).

dcnieho commented 5 years ago

Hi Ian,

How exactly would the backspace functionality work when autoPace==0? Would it be that, instead of pressing spacebar to accept a point, you press backspace to redo it? Thinking about that further, that would not be necessary with Titta: calibration data collection is only triggered upon spacebar when autoPace==0, so you can wait as long as you need until fixation is correct and only then press space to collect gaze data for calibration.

Or would you like the sequence to continue to the next point, and the skipped (by means of backspace) point to appear again later in the sequence? I could see some use for the latter (the visual cue triggering a later refixation), and could implement that very easily if you want.

Cheers, Dee


iandol commented 5 years ago

> So it might still be hard to give feedback/reward for looking at the point; instead you may end up giving feedback for keeping the eyes open.

Yes, this is a challenge for human babies and non-human primates. You cannot verbally communicate the task; you can only positively reinforce the behaviour that you desire. We want to train the animal to look at a point on the screen, but for the eye tracker to know where a subject is looking, it needs a subject who already knows how to look at the screen... So often, as long as the behaviour is approximate, it is "good enough". You then iterate and successively guide towards better performance. Haphazard, but it works!

> You could try to look into (peek) the gaze collected during the calibration, but that might not be all that meaningful either, uncalibrated as it is. For the Spectrum, also expect to see strong oscillations in it, created by the Spectrum trying out dark- and bright-pupil mode by alternating between them rapidly.

Being able to approximately view the gaze during calibration is "good enough". The EyeLink, with its dedicated computer/screen, makes this possible. Your dual-PTB-window calibration interface is a totally brilliant alternative; I can't stress enough how useful this is for more challenging calibration situations. Thank you so much for thinking about it and developing it!!!

> How exactly would the backspace functionality work when autoPace==0? Would it be that, instead of pressing spacebar to accept a point, you press backspace to redo it? Thinking about that further, that would not be necessary with Titta: calibration data collection is only triggered upon spacebar when autoPace==0, so you can wait as long as you need until fixation is correct and only then press space to collect gaze data for calibration.

The problem with babies/monkeys is that they get bored quickly. When a fixation point first appears it is salient, but if it stays onscreen, interest is quickly lost. One simple trick we use is to jiggle the computer mouse cursor at the point to encourage a relook. So to "redo" a point ideally requires turning the fixation spot off and on before retrying: [delete] would ideally blank the screen and then reshow the spot, to try to recapture attention.

The other trick is to use animated fixation spots; you already have a nice "abstract" spot animation built in. I want to see if I can add something similar to Pro Lab, where you can use small colourful animations. There are some performance constraints using Screen('PlayMovie'): I at least get lots of stuttered frames if I try to animate a movie via its dstRect. But I'm sure there is a solution, and perhaps a fixed position rather than "jiggling" will be OK.

> Or would you like the sequence to continue to the next point, and the skipped (by means of backspace) point to appear again later in the sequence? I could see some use for the latter (the visual cue triggering a later refixation), and could implement that very easily if you want.

At least for monkeys, my feeling is that retrying the same location until "success" is better than queuing it up for a later reattempt. But honestly, I think either strategy will work, and I would be happy with either option. The EyeLink in theory has [delete] retry the fixation (and repeated [delete]s move back through previous positions), but often/always it just fails. The callback they use doesn't allow much customisation or configuration. Titta is already much more transparent in its operation.

dcnieho commented 5 years ago

Thanks for explaining in detail how these features are useful. The separate operator screen didn't seem that important to me, just nice to have. Glad it's more than that.

I see a flaw in my suggestion of requeueing a skipped point at the end of the sequence: it won't work if you are already on the last point. Implementing a "re-emphasize current point" is perhaps more useful and works for all points. I will do that by giving the custom calibration drawer an extra input, 'cmd', which can be 'next', 'draw', or 'redo'. That's better than the current solution anyway, where someone writing this function has to suss out from changes in the other inputs that a new point is to be displayed. When you receive 'redo' (issued upon backspace), you can implement custom behavior. In my example, I'll blink the point. Stay tuned.
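For illustration, a custom drawer might branch on 'cmd' roughly as below. The function signature and the blink timing are assumptions for the sake of the sketch; check Titta's readme and the bundled examples for the real interface.

```matlab
function myCalDrawer(wpnt, cmd, pos, tick)
    % keep blink state across frames so 'redo' can blank a few frames
    persistent blinkUntil
    if isempty(blinkUntil), blinkUntil = -inf; end

    switch cmd
        case 'next'                  % a new point is coming up: reset state
            blinkUntil = -inf;
        case 'redo'                  % backspace: blank briefly to recapture attention
            blinkUntil = tick + 30;  % ~0.25 s at 120 Hz; an arbitrary choice
        case 'draw'                  % draw the point for this frame
            if tick > blinkUntil
                Screen('gluDisk', wpnt, 0, pos(1), pos(2), 15);
            end
    end
end
```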

As for playing movies: I have no experience with that. But assuming your video is not very large or long, you could simply load all the frames into textures beforehand and go from there; that should be smooth as long as it all fits in VRAM.
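A sketch of that preloading approach in PsychToolbox; 'fixation.mov' and the 100x100 px destination rect are placeholders, and the screen/point coordinates are arbitrary.

```matlab
win   = Screen('OpenWindow', max(Screen('Screens')), 128);
movie = Screen('OpenMovie', win, fullfile(pwd, 'fixation.mov'));
Screen('PlayMovie', movie, 1);

% decode the whole clip into textures up front
texs = [];
while true
    tex = Screen('GetMovieImage', win, movie, 1);  % block until the next frame
    if tex <= 0, break; end                        % end of movie reached
    texs(end+1) = tex; %#ok<AGROW>
end
Screen('PlayMovie', movie, 0);

% playback: draw each preloaded texture centred on the calibration point
dstRect = CenterRectOnPointd([0 0 100 100], 960, 540);
for i = 1:numel(texs)
    Screen('DrawTexture', win, texs(i), [], dstRect);
    Screen('Flip', win);
end

Screen('CloseMovie', movie);
Screen('Close', texs);
sca;
```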


dcnieho commented 5 years ago

OK, the blink-on-backspace functionality (or whatever you decide to implement in your own calibration display drawer) is now in. You can close this if you have no feedback on it :)

iandol commented 5 years ago

Hm, I can't get the pointNotifyFunction working in my own code; it just doesn't seem to get called (if I set a breakpoint in it, I never drop into the debugger). I've tried using the same function you use in readme.m, with the same calling convention. Note that I first construct Titta with generic settings, then do a titta.init(), and only later use setOptions to update the settings before I call calibrate. I suspect something about this order is not working. I am trying to write a wrapper so that I can use the same API I use with the EyeLink; you can see the calling code here:

https://github.com/iandol/opticka/blob/master/communication/tobiiManager.m#L791
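In short, the call order is roughly the following condensed sketch; the getOptions/setOptions round trip reflects my reading of Titta's interface, and myRewardFcn and wpnt are placeholders.

```matlab
settings = Titta.getDefaults('Tobii Pro Spectrum');
EThndl   = Titta(settings);
EThndl.init();                                % connect to the tracker first

opts = EThndl.getOptions();                   % later: update the runtime settings
opts.cal.pointNotifyFunction = @myRewardFcn;  % the callback that never seems to fire
EThndl.setOptions(opts);
EThndl.calibrate(wpnt);                       % wpnt: the PTB window pointer
```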

iandol commented 5 years ago

Backspace works great by the way 😎😎😎

dcnieho commented 5 years ago

Good to hear about backspace! Hmm, I'll have to look into that issue with the callback. When debugging, is the callback simply undefined (as if it wasn't updated in the settings), or does it not get triggered? Is there a simple demo of your code using the Tobii that I could run, so I can investigate? (Do I understand correctly that it does work in my readme.m?)

I'm off to a conference (ECEM) in about an hour, so it will probably be after I am back.

In another update, I got things to build on Linux (it turned out quite some changes were needed, as MSVC wasn't doing things in a very standard-conformant way and gcc felt differently about things...), but I haven't managed to load that mex file yet (https://askubuntu.com/questions/1166292/version-glibcxx-3-4-26-not-found-even-though-libstdc-so-6-recent-enough). Check out the buildWithMex branch if you want, specifically the file /TobiiMex/makeTobiiMex.m and the note on using GCC 9.1 in the readme.md file of TobiiMex (you need to build it yourself; there is no mex file in the repo yet).

That is also to be continued after ECEM. Once it loads, there are probably more problems around the corner: I believe Tobii system timestamps and PTB timestamps are not from the same clock on Linux, as they are on Windows, which will mean more work, as some basic assumptions of Titta are then broken.


iandol commented 5 years ago

No worries, enjoy the conference. I will try to do more detailed debugging tomorrow and update the issue if necessary.

Now that I can easily peek at the data stream, I notice major problems with my Spectrum Pros. Compared to the EyeLink 1000, the Spectrum's raw signal is so much noisier: my calibration errors are large, and when trying to keep within a fixation window the huge noise stops fixation from being testable. Today I tried adding a moving-median filter over the latest N samples. Slightly better, but as I have the EyeLink and Spectrum in adjacent rooms I can test their behaviour on the same task, and the EyeLink is still better. I'll try with my contact lenses tomorrow; maybe Tobii doesn't handle glasses properly (I just refuse to believe so much noise is normal)... Anyway, I'll probably be firing off lots of grumpy emails to Tobii support tomorrow.

dcnieho commented 5 years ago

OK! I'd appreciate it if you post back anyway; perhaps there is something I can learn and improve from it no matter what. I do see one thing I need to fix: setting these callback functions on a Titta instance (so after calling the constructor) using setOptions would simply drop the callbacks (I am using a whitelisting approach for which options can still be set after init). For now, I'll add the callbacks to the whitelist (done, pull the latest master), and later I will fix this once and for all by switching to a blacklist.

We have had ours for a while (>1 yr), and it may be that you are having some bad luck with the participant. They have been really good for us for most people, with data quality similar to the EyeLink with its heuristic filter switched off. But for some (rarely), it's inexplicably noisy indeed. Calibration is a mixed bag: often really accurate, but also sometimes with weird distortions in the space (we think it may be when their eye model simply doesn't fit the eye in question well--but we don't know). Hope it works with contacts; try some other people too. Else complain loudly, something may be wrong with the unit.


iandol commented 5 years ago

Yes, the whitelist fix solved my problem; I can now drive an Arduino to give rewards. So both the [backspace] and reward triggers are solved, thank you! I'll close this. Thanks also for the Linux work.

I downloaded the Stampe (1993) paper that describes the heuristic filter used by the EyeLink, and will try to compare it to my current use of movmedian(). I also wonder whether the Savitzky-Golay filter (which Tobii uses in its standardised testing reports) would work for quick online smoothing. This filter allows weighting, so one can bias the average towards the latest samples, etc. Optional filtering for peek() may be something we could add to Titta?
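For an offline first pass, the comparison could look like the following sketch on synthetic data; movmedian is built into MATLAB, while sgolayfilt needs the Signal Processing Toolbox.

```matlab
fs = 1200;                         % Spectrum sampling rate (Hz)
x  = 960 + 15*randn(fs,1);         % 1 s of synthetic noisy fixation at x = 960 px
x(300) = x(300) + 200;             % plus one single-sample spike

n  = 13;                           % odd window, roughly the last 12 samples (10 ms)
xm = movmedian(x, [n-1 0]);        % causal moving median (past samples only)
xs = sgolayfilt(x, 3, n);          % 3rd-order Savitzky-Golay (centred, so not causal)

plot([x xm xs]); legend('raw', 'movmedian', 'Savitzky-Golay');
```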

dcnieho commented 5 years ago

Glad to hear it's working!

Regarding Stampe: it does not seem to describe the current EyeLink implementation. Taking EyeLink data recorded with the filter switched off and running the Stampe filter on it creates data that is way more smoothed than the EyeLink's own filtered output. Using instead the separate pupil and CR data, running those through the Stampe filter, and then computing P-CR myself, I also don't get anything like the filtered EyeLink output. It seems they have additional tricks by now. The 1993 filter is too aggressive for my taste.
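For reference, here is my reading of the paper's level-1 heuristic as a sketch; this is an interpretation of the 1993 description, not the EyeLink's current filter.

```matlab
function y = stampeLevel1(x)
    % replace every single-sample excursion (local extremum) with the
    % value of the nearer neighbouring sample
    y = x;
    for i = 2:numel(x)-1
        if (x(i)-x(i-1)) * (x(i)-x(i+1)) > 0         % both neighbours on the same side
            if abs(x(i)-x(i-1)) <= abs(x(i)-x(i+1))
                y(i) = x(i-1);
            else
                y(i) = x(i+1);
            end
        end
    end
end
```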

There is a significant literature on online filters; you could have a look at Špakov, O. (2012). Comparison of eye movement filters used in HCI. In Proceedings of the 2012 Symposium on Eye Tracking Research & Applications (ETRA '12), ACM, pp. 281-284. I have an implementation of one of those if you are interested.

I am, however, not planning to implement such things in Titta, to keep it lean. Starting down this path leads to many different features, each of which is wanted by only a few people. Instead, Titta provides an interface that should make it easy to implement this oneself. Thanks for the suggestion though! As said, keep them coming; I'm always happy to consider them.


iandol commented 5 years ago

Hi, I've tried 4 different smoothing methods for online gaze filtering (movmedian+median, heuristic+median, Savitzky-Golay+median, and median alone). I made a fixation demo that shows the eye position onscreen and allows live switching between filters and the number of samples. To be honest, I can't tell much difference between the online filtering methods; the major difference seems to be that more samples is better. With artificial data, the heuristic filter is good at removing one-sample noise, reducing the SD before averaging.

Delay is not so much of an issue, as the sampling rate of the Spectrum is high enough that even 4 samples is still only half the refresh period of our Display++ (120 Hz). For most of our requirements (confirming subjects are carefully fixating a target), delay is less important than stability. I normally find I have to average over at least the last 12 samples (10 ms) before the jitter starts to calm down. I'm not an eye-tracking specialist, so any other advice is appreciated.
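Concretely, the online check amounts to something like this sketch; the peekN call and the field layout of the returned samples are my understanding of Titta's buffer interface (gaze positions normalized to [0,1]), and the fixation window here is arbitrary.

```matlab
N    = 12;                                         % ~10 ms of samples at 1200 Hz
samp = EThndl.buffer.peekN('gaze', N);             % newest N gaze samples
gx   = median(samp.left.gazePoint.onDisplayArea(1,:), 'omitnan');
gy   = median(samp.left.gazePoint.onDisplayArea(2,:), 'omitnan');
fixOK = abs(gx-0.5) < 0.05 && abs(gy-0.5) < 0.05;  % within a window around screen centre
```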

If you have an implemented method from Špakov, please could you share it (when you get back)? Thanks.

My calibration/gaze noise with glasses was greatly reduced with the contact lenses I tested today. I've also tested 4 other people today; one cannot be calibrated (also wearing glasses, though they calibrate OK on an EyeLink), the others seem OK.

dcnieho commented 5 years ago

What you write makes sense as far as I can judge. There is no one true way, of course; as long as it works for your purposes, it's fine. At what email address can I reach you (for the code)?


dcnieho commented 5 years ago

Filter code is here: https://gist.github.com/dcnieho/b2d5818d667431d7dacd5dda70f98824

Have a good read through and see if it makes sense. I am not aware of good parameters for the Spectrum, but if you find some nice ones, I'd be happy to hear about them!

dcnieho commented 5 years ago

I can't recall in which of these threads we talked about analysis code, so I'll tell you here: it's now included in a subfolder of the demos folder. It's offline analysis, specifically fixation classification. Hope it's of some use! It should at least be very noise-robust; it was developed to be ;)