ctn-waterloo / modelling_ideas

Ideas for models that could be made with Nengo if anyone has time

Working memory and sustained neural activity #3

Open tcstewar opened 9 years ago

tcstewar commented 9 years ago

In pretty much every working memory task, you get neural activity that looks like this:

http://jn.physiology.org/content/jn/91/3/1424/F12.medium.gif

When the stimulus is presented, the firing rate goes up, it stays elevated over the memory period, and then it drops back down when the memory is no longer needed.

It strikes me that this is exactly what you'd expect if you have an integrator and you're feeding it input *without scaling by tau*. That is, this is exactly a `spa.Memory` with an arbitrary semantic pointer being fed in. It's not really an integrator -- while there's input, it fairly quickly goes to that vector value, but at a larger magnitude (wherever the saturation stops it). Then when the input is taken away, it settles back down to a more reasonable vector length.

My intuition is that this saturated recurrence should give exactly the same neural effect as seen in the working memory data. Indeed, it might even predict the size of that drop (the sustained rate should be around half of the firing rate seen while the stimulus is being presented). I think this would end up depending on the sparsity of the representation (i.e., the intercepts distribution).
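Here's a minimal sketch of the idea in Nengo, using a 1-D ensemble as a stand-in for the `spa.Memory` case (the ensemble size, tau, and stimulus timing are illustrative choices, not taken from any existing model):

```python
import nengo

model = nengo.Network()
with model:
    # Stimulus on for one second, then removed for the memory period
    stim = nengo.Node(lambda t: 1.0 if 0.5 < t < 1.5 else 0.0)

    # 1-D stand-in for the spa.Memory ensemble
    mem = nengo.Ensemble(n_neurons=200, dimensions=1)

    tau = 0.1
    # Standard NEF integrator feedback...
    nengo.Connection(mem, mem, synapse=tau)
    # ...but the input is deliberately NOT scaled by tau (no transform=tau),
    # so a sustained input drives the state past the radius, the neurons
    # saturate, and the decoded value overshoots. When the input is
    # removed, the state settles back to a smaller magnitude.
    nengo.Connection(stim, mem, synapse=tau)

    p_spikes = nengo.Probe(mem.neurons)
    p_value = nengo.Probe(mem, synapse=0.03)

with nengo.Simulator(model) as sim:
    sim.run(3.0)
```

Plotting `sim.data[p_spikes]` (or a filtered population rate) should show the onset transient, a sustained plateau during the delay, and the drop back down after the input ends.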

celiasmith commented 9 years ago

Interestingly, I believe the SSN model that Miller discussed when he was here does this as well (even for non-memory circuits, where you see the same effect).

We've also used neural adaptation to get this kind of effect (Singh & Eliasmith, 2006).
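A minimal sketch of the adaptation route, assuming Nengo's built-in `AdaptiveLIF` neuron model (the parameters here are illustrative only, not the values from Singh & Eliasmith, 2006):

```python
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if t > 0.5 else 0.0)

    # Adaptive LIF neurons: an adaptation current builds up with firing,
    # so the rate overshoots at stimulus onset and then relaxes toward
    # a lower sustained rate.
    ens = nengo.Ensemble(
        100, dimensions=1,
        neuron_type=nengo.AdaptiveLIF(tau_n=0.5, inc_n=0.05))

    nengo.Connection(stim, ens)
    p = nengo.Probe(ens.neurons)

with nengo.Simulator(model) as sim:
    sim.run(2.0)
```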

-c


tcstewar commented 8 years ago

I've got a quick example of trying to replicate this effect here:

https://github.com/tcstewar/testing_notebooks/blob/master/Working%20memory%20overshoot.ipynb

The main thing I still need to double-check is the analysis used to generate that graph. At the moment, I'm just averaging together the activity of all the neurons in the ensemble that tend to fire more for the encoded stimulus, but I'm not sure that's equivalent to what the neurophysiology experiments do.
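For reference, a rough, self-contained sketch of that averaging step (the model, stimulus, and encoder-sign selection criterion are illustrative assumptions, not the notebook's exact code):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if 0.5 < t < 1.5 else 0.0)
    ens = nengo.Ensemble(200, dimensions=1)
    nengo.Connection(stim, ens)
    p_spikes = nengo.Probe(ens.neurons)

with nengo.Simulator(model) as sim:
    sim.run(2.0)

# Pick out neurons whose encoders point toward the encoded stimulus,
# then average their filtered spike trains over time.
encoders = sim.data[ens].encoders              # shape (n_neurons, dims)
prefers = (encoders @ np.array([1.0])) > 0     # neurons tuned to +1
rates = nengo.Lowpass(0.05).filt(sim.data[p_spikes])  # smoothed spikes
mean_rate = rates[:, prefers].mean(axis=1)     # population-average rate
```

Whether selecting by encoder sign matches the experimenters' criterion for "stimulus-selective" cells is exactly the open question above.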