ctn-waterloo / modelling_ideas

Ideas for models that could be made with Nengo if anyone has time

log-scale time cells #44

Open tcstewar opened 8 years ago

tcstewar commented 8 years ago

In his invited talk, Zoran Tiganj presented data on time cells in both PFC and hippocampus, showing the phenomenon where individual neurons fire a given amount of time into the trial, hence the name "time cells" (e.g. http://people.bu.edu/marc777/docs/TiganjEtal-PFCtimecells.pdf).

These cells show up in our existing working memory models (if we also represent time). However, our current models represent time on a linear scale. The neural data indicates this should be a log scale.

One way to think about this (thanks to @arvoelke ) is that this is an oscillator that slows down the farther through its cycle it gets. Here's a quick implementation of that:

import nengo
import numpy as np

model = nengo.Network()
with model:

    T = 10.0  # trial length (seconds)

    # Brief impulse at the start of each trial to kick off the oscillator.
    def stim_func(t):
        if 0 < t % T < 0.05:
            return 10
        return 0
    stim = nengo.Node(stim_func)

    # 2D ensemble representing a point on a circle. Identical intercepts
    # of 0.3 mean each neuron fires only over part of the cycle, i.e. at
    # a particular time in the trial.
    a = nengo.Ensemble(n_neurons=500, dimensions=2,
                       intercepts=nengo.dists.Uniform(0.3, 0.3))
    nengo.Connection(stim, a[0])

    # Recurrent function: advance the phase by adding a constant in
    # e^theta space, so the step size shrinks as theta grows and the
    # oscillator slows down logarithmically over its cycle.
    def temporal(x):
        theta = np.arctan2(x[1], x[0])
        r = np.sqrt(x[0]**2 + x[1]**2)

        if theta < 0:
            theta += np.pi * 2

        scale = 0.5
        e_theta = np.exp(scale * theta)
        e_theta += 0.15
        theta = np.log(e_theta) / scale

        return r * np.cos(theta), r * np.sin(theta)
    nengo.Connection(a, a, function=temporal, synapse=0.1)
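
For completeness, here is one way to probe and run the model to get the spike data used below; the probe placement and plotting code are my assumptions rather than taken from the notebook:

with model:
    p_spikes = nengo.Probe(a.neurons)  # raw spike trains

with nengo.Simulator(model) as sim:
    sim.run(T)  # simulate one trial

import matplotlib.pyplot as plt
from nengo.utils.matplotlib import rasterplot

rasterplot(sim.trange(), sim.data[p_spikes])
plt.xlabel("Time (s)")
plt.ylabel("Neuron")
plt.show()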

If we run this model, we get neurons that fire at different times:

[Figure: spike raster showing different neurons firing at different times through the trial]

If we sort these neurons by the time of their peak firing, we get something that looks kinda like the original neural data:

[Figure: neural activity sorted by time of peak firing, resembling the original time-cell data]
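
The sorting itself is straightforward; a minimal sketch, assuming the sim and probe from above (the filter time constant here is an arbitrary choice):

import numpy as np
import matplotlib.pyplot as plt

# Filter the raw spikes so each neuron's peak time is well defined,
# then order the neurons by the time of that peak.
A = nengo.Lowpass(0.05).filt(sim.data[p_spikes], dt=sim.dt)
order = np.argsort(np.argmax(A, axis=0))

plt.imshow(A[:, order].T, aspect='auto', origin='lower',
           extent=(0, T, 0, A.shape[1]))
plt.xlabel("Time (s)")
plt.ylabel("Neuron (sorted by peak time)")
plt.show()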

Here's a notebook doing this analysis:

https://github.com/tcstewar/testing_notebooks/blob/master/Temporal%20log%20scale.ipynb

tcstewar commented 8 years ago

I would also really like to try exploring this using something more like @arvoelke's delay network for actually storing changing information over time, but with this sort of logarithmic temporal scaling applied.

arvoelke commented 8 years ago

Cool! It's always nice when the intuition works out. This is also related to the work I mentioned briefly about packet communication in cortex (Luczak, 2015), so I had been thinking about this for a little bit. Definitely more to explore from here.

A few things from the theory side that I think are worth making explicit:

I believe the last part is a good way to bridge a portion of Zoran's talk with the NEF (at least that's how I took it). Note that I'm also glossing over a number of things (linearity, sharpness of the delta input, radii, and the difference between input current and activity) for the sake of clarity, so the situation is slightly more nuanced.

I think using the delay network as the basis filter would give roughly the same results, but would offer more degrees of freedom to play with (two dimensionality parameters, and an explicit speed parameter).
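
For reference, a rough sketch of what wiring a delay network into Nengo might look like; the state-space matrices follow the published Legendre/Padé construction for a pure delay, and the order, window length, and synapse values here are placeholders:

import numpy as np
import nengo

q = 6        # order / dimensionality of the delay state (placeholder)
theta = 1.0  # length of the delay window in seconds (placeholder)
tau = 0.1    # synaptic time constant (placeholder)

# State-space realization of a theta-second delay:
#   theta * dx/dt = A x + B u
A = np.zeros((q, q))
B = np.zeros((q, 1))
for i in range(q):
    B[i, 0] = (2 * i + 1) * (-1) ** i
    for j in range(q):
        A[i, j] = (2 * i + 1) * (-1 if i < j else (-1) ** (i - j + 1))
A /= theta
B /= theta

with nengo.Network() as delay_model:
    u = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # example input
    x = nengo.Ensemble(n_neurons=100 * q, dimensions=q)
    # NEF principle 3: map the LTI system onto a lowpass synapse.
    nengo.Connection(x, x, transform=tau * A + np.eye(q), synapse=tau)
    nengo.Connection(u, x, transform=tau * B, synapse=tau)

The logarithmic scaling could then come in by warping this system's effective speed over the trial, analogous to the temporal function above.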

arvoelke commented 8 years ago

A little more on the third bullet point while we're at it. We can also visualize this with the same old trick of taking the SVD of the gamma matrix of the activities (the first plot), but it's no longer as nice as just the Legendre polynomials.

import numpy as np
import pylab

# Gamma matrix of the (filtered) activities: gamma = A^T A.
A = sim.data[p_spikes]
gamma = np.dot(A.T, A)

# The left singular vectors of gamma give the orthogonal basis of
# functions this population best represents; project the activities
# onto that basis to plot the basis functions over time.
U, S, V = np.linalg.svd(gamma)
chi = np.dot(A, U)

pylab.figure(figsize=(12, 7))
pylab.title("SVD of Gamma Matrix")
for i in range(4):
    pylab.plot(sim.trange(), chi[:, i] / len(chi), label=r"$\chi_%d = %s$" % (
        i, np.sqrt(S[i]) / len(chi)))
pylab.legend(loc='best')
pylab.show()

[Figure: the first four basis functions (χ_0 through χ_3) of the gamma matrix, plotted over the trial]

This basically tells you that the functions you can best decode out of the above network are linear combinations of these wonky filters, which can at least be characterized at a qualitative level.
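
To make that concrete, one could check how well a particular function of time decodes by solving for decoders with regularized least squares; a quick sketch, where the target function and regularization value are arbitrary choices of mine:

import numpy as np
import pylab

# Decode a target function of time, f(t) = log(1 + t), from the
# activities via regularized least squares.
A = sim.data[p_spikes]
t = sim.trange()
target = np.log1p(t)

sigma = 0.1 * A.max()  # regularization (arbitrary)
gamma = np.dot(A.T, A) + len(A) * sigma**2 * np.eye(A.shape[1])
d = np.linalg.solve(gamma, np.dot(A.T, target))  # decoders

pylab.plot(t, target, label="target")
pylab.plot(t, np.dot(A, d), label="decoded")
pylab.legend(loc='best')
pylab.show()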