ctn-archive / nengo_theano

ABANDONED; see https://github.com/nengo/nengo instead
MIT License

Decoder calculation not as accurate as in Nengo #30

Closed studywolf closed 11 years ago

studywolf commented 11 years ago

This becomes obvious when setting the intercept. Some degradation is expected, since the system does a least-squares fit over parts of the signal range where no neurons are active, but here even the covered areas are represented significantly more poorly. This can be seen by running the code below: increasing the intercept results in increasingly poor representation, unlike in Nengo.

This is a test file I wrote to check for it, which I'll commit as soon as the repository rework is done and we have commit access again.

"""This is a file to test the intercept parameter on ensembles"""

import math
import time

import numpy as np
import matplotlib.pyplot as plt

from .. import nef_theano as nef  # relative import; run from within the package, like the repo's other tests

build_time_start = time.time()

inter = 0.2  # lower bound on ensemble B's intercepts
dx = 0.001   # unused here

net = nef.Network('Intercept Test')
net.make_input('in', math.sin)
net.make('A', 100, 1)  # 100-neuron ensemble with default intercepts
net.make('B', 100, 1, intercept=(inter, 1.0))  # intercepts drawn from [inter, 1.0]

net.connect('in', 'A')
net.connect('A', 'B')

timesteps = 1000
dt_step = 0.01
t = np.linspace(dt_step, timesteps*dt_step, timesteps)
pstc = 0.01
Ip = net.make_probe('in', dt_sample=dt_step, pstc=pstc)
Ap = net.make_probe('A', dt_sample=dt_step, pstc=pstc)
Bp = net.make_probe('B', dt_sample=dt_step, pstc=pstc)

build_time_end = time.time()

print "starting simulation"
net.run(timesteps * dt_step)

sim_time_end = time.time()
print "\nBuild time: %0.10fs" % (build_time_end - build_time_start)
print "Sim time: %0.10fs" % (sim_time_end - build_time_end)

plt.ioff()
plt.close()
plt.hold(1)
plt.plot(t, Ip.get_data())
plt.plot(t, Ap.get_data())
plt.plot(t, Bp.get_data())
plt.legend(['Input', 'A', 'B'])
plt.tight_layout()
plt.show()
xchoo commented 11 years ago

I'm not sure what you're seeing; I get similar results when I run the code in Nengo and with theano. Also, this behaviour is to be expected, which is why we have the ability to specify evaluation points when creating decoders in Nengo.
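
(For illustration, a minimal sketch of what specifying evaluation points could look like; the eval_points keyword here is an assumption about this API, modeled on Nengo's. The idea is to sample the least-squares solve only over the range the ensemble actually represents, i.e. |x| >= intercept:)

import numpy as np

inter = 0.2
# assumed keyword: sample the decoder solve only where |x| >= inter,
# so the fit is not dominated by the dead zone below the intercept
eval_points = np.hstack([np.linspace(-1.0, -inter, 250),
                         np.linspace(inter, 1.0, 250)])
net.make('B', 100, 1, intercept=(inter, 1.0), eval_points=eval_points)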

xchoo commented 11 years ago

[figure_1: https://f.cloud.github.com/assets/1971583/397450/a34a649c-a83e-11e2-9ecd-6486a298ba41.png]
[figure_1_n: https://f.cloud.github.com/assets/1971583/397451/a561be74-a83e-11e2-96ce-7b4322f8c202.png]

celiasmith commented 11 years ago

actually, figure 1 looks significantly worse... -c


studywolf commented 11 years ago

Agreed. The problem doesn't fully resolve even when specifying evaluation points in the Theano code, and the discrepancy gets worse as you increase the lower intercept.

xchoo commented 11 years ago

Hmm. Well, there is definitely a problem somewhere. The following figure (from running the code a few more times) shows a more extreme example of the distortion.

[figure_2: a more extreme example of the distortion]

hunse commented 11 years ago

It might help if somebody bites the bullet and adds the analytic equations for LIF tuning curves. Terry's method of estimating tuning curves is nice and generic, but it makes it difficult to compare Nef-Theano to Nengo.
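
(For reference, the standard analytic form is straightforward; a self-contained sketch, with illustrative parameter names rather than this repo's API:)

import numpy as np

def lif_rate(x, alpha, j_bias, tau_rc=0.02, tau_ref=0.002):
    """Analytic LIF steady-state rate for input current J = alpha*x + j_bias."""
    j = alpha * np.asarray(x, dtype=float) + j_bias
    rate = np.zeros_like(j)
    active = j > 1.0  # below threshold current, the neuron never fires
    # rate = 1 / (tau_ref - tau_rc * ln(1 - 1/J)) for J > 1
    rate[active] = 1.0 / (tau_ref - tau_rc * np.log1p(-1.0 / j[active]))
    return rate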

tcstewar commented 11 years ago

The analytic equations are in there already -- there's a rate mode version of the LIF neuron. So that should be straightforward to make use of.

However, I'm pretty sure that the bigger problem is that we're not adding noise when computing Gamma. In nef-java (formerly "Nengo"), we add Gaussian noise to the A matrix before computing Gamma = A * A.T, and then when inverting we throw out singular values below some threshold. Right now in the Theano version we only do the second part of that.

My original hope was that by actually running the neurons for some period of time, you get a noisy estimate already, so you don't have to add noise onto the A matrix. However, I didn't do anything in terms of figuring out how much time to run them for to gather the A matrix. So it might be that if we just decrease the time, the decoding will improve.

Or we can just add noise onto the matrix like I should have in the first place.

:)
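
(A minimal sketch of the two-part approach described above; function and variable names are illustrative, not the repo's internals. A is the (n_neurons, n_points) activity matrix and targets holds the function values to decode:)

import numpy as np

def solve_decoders(A, targets, noise=0.1, sv_cutoff=1e-7, rng=np.random):
    # 1) add Gaussian noise to A before forming Gamma = A * A.T
    A_noisy = A + rng.normal(scale=noise * A.max(), size=A.shape)
    gamma = np.dot(A_noisy, A_noisy.T)
    upsilon = np.dot(A_noisy, targets)
    # 2) invert Gamma by SVD, throwing out small singular values
    u, s, _ = np.linalg.svd(gamma)  # gamma is symmetric, so V equals U
    s_inv = np.where(s > sv_cutoff * s.max(), 1.0 / s, 0.0)
    return np.dot(np.dot(u * s_inv, u.T), upsilon)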

celiasmith commented 11 years ago

I vote for option 2... option 1 might decrease the amount of noise -c


studywolf commented 11 years ago

I added the code to put the noise on the diagonal, and it seems to have taken care of the problem! I'll upload it as soon as we have commit permissions again.
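
(For the record, "noise on the diagonal" amounts to adding the expected noise covariance directly to Gamma rather than perturbing A itself; a sketch of that, continuing from the A matrix in the earlier sketch, with an assumed noise scale relative to the peak firing rate:)

import numpy as np

sigma = 0.1 * A.max()  # assumed noise scale: 10% of the max rate
n_points = A.shape[1]
# adding i.i.d. noise of std sigma to every entry of A shifts the
# expectation of A * A.T by n_points * sigma**2 on the diagonal:
gamma = np.dot(A, A.T) + n_points * sigma ** 2 * np.eye(A.shape[0])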