Closed studywolf closed 11 years ago
Not sure what you are talking about, but I get similar results when I run the code in Nengo, and using theano. Also, this behaviour is to be expected, which is why we have the ability to specify evaluation points when creating decoders in Nengo.
actually, figure 1 looks significantly worse... -c
xchoo wrote:
figure_1: https://f.cloud.github.com/assets/1971583/397450/a34a649c-a83e-11e2-9ecd-6486a298ba41.png
figure_1_n: https://f.cloud.github.com/assets/1971583/397451/a561be74-a83e-11e2-96ce-7b4322f8c202.png
Agreed. And the problem doesn't fully resolve when specifying evaluation points in the Theano code. Also the discrepancy gets worse as you increase the lower intercept.
Hmm. Well, there is definitely a problem somewhere. The following figure (just ran the code a few more times) shows a more extreme example of the distortion.
It might help if somebody bites the bullet and adds the analytic equations for LIF tuning curves. Terry's method of estimating tuning curves is nice and generic, but it makes it difficult to compare Nef-Theano to Nengo.
The analytic equations are in there already -- there's a rate mode version of the LIF neuron. So that should be straightforward to make use of.
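For reference, the analytic rate-mode LIF curve is just a closed-form function of the input current. A minimal sketch (parameter names `tau_ref` and `tau_rc` are illustrative, not necessarily nef-py's exact identifiers):

```python
import numpy as np

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    """Steady-state LIF firing rate (Hz) for normalized input current J.
    The neuron is silent for J <= 1 (below threshold)."""
    J = np.asarray(J, dtype=float)
    # -log(1 - 1/J) rewritten as log1p(1/(J - 1)) for numerical stability;
    # the maximum() clamp only guards the inactive branch against log of <= 0.
    isi = tau_ref + tau_rc * np.log1p(1.0 / np.maximum(J - 1.0, 1e-12))
    return np.where(J > 1.0, 1.0 / isi, 0.0)
```

Evaluating this directly gives noise-free tuning curves, which would make an apples-to-apples comparison with Nengo's curves straightforward.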
However, I'm pretty sure that the bigger problem is that we're not adding noise when computing Gamma. In nef-java (formerly "Nengo"), we add gaussian noise to the A matrix before doing Gamma=A*A.T, and then when inverting we throw out singular values below some level. Right now in the Theano version we only do the second part of that.
My original hope was that by actually running the neurons for some period of time, you get a noisy estimate already, so you don't have to add noise onto the A matrix. However, I didn't do anything in terms of figuring out how much time to run them for to gather the A matrix. So it might be that if we just decrease the time, the decoding will improve.
Or we can just add noise onto the matrix like I should have in the first place.
:)
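For concreteness, the nef-java recipe described above (Gaussian noise on the A matrix, then a Gamma inverse that throws out small singular values) could be sketched roughly as follows; the `noise` and `rcond` values here are illustrative assumptions, not nef-py's actual defaults:

```python
import numpy as np

def solve_decoders(A, targets, noise=0.1, rcond=1e-8, rng=None):
    """Solve for decoders from an activity matrix A (neurons x eval points).
    Gaussian noise is added to A before forming Gamma = A A^T, and the
    pseudoinverse discards singular values below rcond * sigma_max."""
    rng = np.random.default_rng(0) if rng is None else rng
    A_noisy = A + rng.normal(scale=noise * A.max(), size=A.shape)
    gamma = A_noisy @ A_noisy.T
    upsilon = A_noisy @ targets
    return np.linalg.pinv(gamma, rcond=rcond) @ upsilon
```

Without the noise step, Gamma from rate-based tuning curves can be nearly singular when neurons overlap heavily, and the singular-value cutoff alone doesn't regularize the solution the same way.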
I vote for option 2... option 1 might decrease the amount of noise -c
I added the code to put the noise on the diagonal, and it seems to have taken care of the problem! I'll upload it as soon as we have commit permissions again.
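Putting the noise on the diagonal works because it matches the noisy-A solve in expectation: with per-entry noise of standard deviation sigma over m evaluation points, E[A_noisy A_noisy^T] = A A^T + m sigma^2 I. A minimal sketch of that deterministic (ridge-style) version, with illustrative names rather than the actual committed code:

```python
import numpy as np

def solve_decoders_ridge(A, targets, noise=0.1):
    """Deterministic equivalent of adding noise to A: add the expected
    noise variance (m * sigma^2, for m evaluation points) to the diagonal
    of Gamma, then solve the now well-conditioned system."""
    m = A.shape[1]
    sigma = noise * A.max()
    gamma = A @ A.T + m * sigma ** 2 * np.eye(A.shape[0])
    return np.linalg.solve(gamma, A @ targets)
```

The diagonal term keeps Gamma positive definite even when many tuning curves are silent over part of the range, which is exactly the high-intercept regime where the distortion showed up.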
This becomes obvious when setting the intercept. Some degradation is expected, since the system does least squares over parts of the signal range where no neurons are active, but here even the covered areas are represented significantly more poorly. This can be seen by running the code below: increasing the intercept results in an increasingly poor representation, unlike in Nengo.
This is a test file I wrote to check for this, which I'll commit as soon as the repository rework is done and we have commit access again.