Closed studywolf closed 11 years ago
Good idea, but this constant depends on the function being learned, and the size of the ensemble, in theory right? So maybe the multipliers should not be hard-coded?
Hmm, I don't think so. The time used to be hardcoded at 1 second and 0.05 seconds; I just dropped it to 10 and 200 * dt, which, from running several of the tests, didn't introduce any drop in performance. It definitely shouldn't affect learning, because this decoder computation isn't used with learning. And the function isn't an issue at this point, since we're just sampling the average firing rate of the neurons given different input currents. I believe that regardless of the size of the ensemble the signal should converge in roughly the same amount of time, though I could try running the other tests or examples.
OK. I'm pretty sure that the amount of data you need to fit a function depends on the complexity of the function, but maybe that doesn't really matter enough here to warrant an interface change.
Yeah, this is just simulating a neuron with constant J for a bit, so it shouldn't be affected by the function or ensemble size.
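To make this concrete, here is a minimal sketch (not the actual nengo_theano code; the function names and the neuron parameters `tau_rc` and `tau_ref` are assumptions) of estimating a LIF neuron's steady-state rate by simulating it with a constant input current J and counting spikes. Since J is constant, the estimate converges quickly and does not depend on the function being decoded or the ensemble size:

```python
import numpy as np

def lif_rate_estimate(J, sim_time, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    """Estimate a LIF neuron's steady-state firing rate by simulating
    it with a constant input current J and counting spikes."""
    v, refractory, spikes = 0.0, 0.0, 0
    for _ in range(int(sim_time / dt)):
        if refractory > 0:
            refractory -= dt  # still in the refractory period
            continue
        v += dt * (J - v) / tau_rc  # leaky integration toward J
        if v >= 1.0:  # threshold crossing: spike and reset
            spikes += 1
            v = 0.0
            refractory = tau_ref
    return spikes / sim_time

def lif_rate_analytic(J, tau_rc=0.02, tau_ref=0.002):
    """Closed-form LIF rate for comparison: no spiking below threshold."""
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref - tau_rc * np.log(1.0 - 1.0 / J))
```

Running this with a 1-second and a 0.2-second window gives nearly the same rate, which is the intuition behind shortening the hardcoded simulation time.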
Ah, I see what you're saying, James, but that's related to the number of sample points you have, not the simulation time. In general we use 500, which seems to be enough; this is also why applying learning can get you better results than the least-squares calculation.
edit: this only deals with the accuracy of the data at each of the sample points, and the reduced sim times seem to be just as accurate!
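For reference, the least-squares step being discussed works roughly like this (a minimal sketch, not the nengo_theano implementation; the regularization scaling is an assumption borrowed from common NEF practice): the sampled firing rates at each evaluation point form a matrix A, and the decoders solve a regularized least-squares problem against the target function values.

```python
import numpy as np

def compute_decoders(activities, targets, noise=0.1):
    """Solve for least-squares decoders.

    activities: (n_samples, n_neurons) firing rates at each sample point
    targets:    (n_samples, dims) desired function values at those points
    noise:      regularization level (assumed convention: scaled by the
                maximum firing rate and the number of samples)
    """
    A = activities
    n_samples, n_neurons = A.shape
    G = A.T @ A  # Gram matrix of neuron activities
    if noise > 0:
        G += np.eye(n_neurons) * (noise * np.max(A)) ** 2 * n_samples
    return np.linalg.solve(G, A.T @ targets)
```

The number of sample points (rows of A) is what limits how complex a function can be fit; the simulation time per point only affects how noisy each entry of A is.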
sooo, can someone merge this?
Sure, just a sec.
Merged!
sweeet thanks
Dropped the generation time a fair amount, without any drop in performance.