studywolf opened this issue 8 years ago
For some extra context on this one, in normal nengo, the actual learning rate parameter used in the learning rule is:
learning_rate * dt / n_neurons
The intent is to have a learning system that behaves about the same if you change dt or n_neurons. So if something takes 3 seconds to learn with dt=0.001, it should also take 3 seconds to learn with dt=0.0001. Similarly, as you increase the number of neurons, the absolute magnitude of the decoder values decreases. So, if I double the number of neurons, I should halve the learning rate. So we decided that nengo should take care of those two scaling factors for you, since we seem to want them all the time.
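A tiny illustrative sketch of that scaling (the helper name is made up; only the formula above comes from nengo):

```python
def effective_learning_rate(learning_rate, dt, n_neurons):
    """The scaling described above: the rate actually applied per simulation
    step is learning_rate * dt / n_neurons.  A smaller dt (more steps per
    simulated second) or a larger ensemble gives a proportionally smaller
    per-step update, so time-to-learn in simulated seconds stays roughly
    the same."""
    return learning_rate * dt / n_neurons

# same nominal rate, different dt / ensemble sizes
for dt, n_neurons in [(0.001, 1000), (0.0001, 1000), (0.001, 2000)]:
    print(dt, n_neurons, effective_learning_rate(1e-4, dt, n_neurons))
```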
It's not inconceivable that this is a fixed-point problem; I'm currently playing around with a different fixed-point format in a different branch. I guess this is more reason to get new_ensemble-new_learning merged ASAP.
Hi Travis, just to be clear, which branch of Nengo SpiNNaker are you on?
New ensemble new learning! :)
Ah, also I just realized that there were some changes in handling the transform in nengo in the last week or so, and the most recent master doesn't work with new_ensemble-new_learning. I was actually using this version of nengo: https://github.com/nengo/nengo/commit/5128e51e5133baad68c331ef8333d9e5d7136487; the next commit breaks things.
Grrr... it would have been nice if we'd made that nengo change in a way that was backwards compatible.... At least it's just a small refactoring change.
@mundya and @neworderofjamie , do you think we should sort out some sort of process for identifying when changes to core nengo break things that are dependent on it? There are starting to be a fair number of these (nengo_gui, nengo_spinnaker, nengo_ocl, nengo_mpi), and mostly we've been handling this by trying to keep such changes to a minimum. This last month or so there's been a sudden increase in these breaking changes because we're trying to get v2.1.0 sorted out. That should just be a temporary spike in changes. But I'm not sure how annoying it has been for you to keep things in sync.
I believe @mundya has done some updates to support recent changes, but new_ensemble-new_learning is still awaiting merging with that/master.
I have had a quick look at the issue, and we are dividing the learning rate passed to Nengo by the total number of neurons in the ensemble. I think the problem here is, as Andrew suggested, that we're hitting the bottom of fixed-point accuracy, i.e. 1×10⁻⁴ / 2000 is less than the least-significant bit of our current representation (about 0.00003, which for a 2000-neuron ensemble corresponds to a learning rate of 6×10⁻²). Hopefully @mundya's current work on switching to a more accurate fixed-point format will help a bit, but there is still going to be a limit to how low a learning rate we can handle unless we do something mega-funky.
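To make the arithmetic concrete, here is an illustration only; it assumes the on-chip value uses a signed fixed-point format with 15 fractional bits (LSB = 2⁻¹⁵ ≈ 3.05×10⁻⁵), which is an inference from the "0.00003ish" figure above, not a statement of the actual format:

```python
LSB = 2.0 ** -15   # assumed fixed-point resolution (~3.05e-5)

def to_fixed(value, lsb=LSB):
    """Round-to-nearest quantisation onto the fixed-point grid."""
    return round(value / lsb) * lsb

learning_rate = 1e-4
n_neurons = 2000
scaled = learning_rate / n_neurons   # 5e-8: what would need to be stored

print(to_fixed(scaled))              # 0.0 -- underflows to zero, so no learning
print(LSB * n_neurons)               # ~0.061: smallest usable learning rate at 2000 neurons
```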
One random idea that might help with this: would it make sense to do stochastic rounding (with an LFSR) here? So that instead of values < 0.00003 being rounded to 0, they sometimes get rounded up to 0.00003?
Mind you, the stochastic thing turned out to not work so well for the neuron model.....
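A rough sketch of what that could look like (illustrative Python, not the SpiNNaker implementation; the 16-bit Galois LFSR taps and the 15-fractional-bit grid are assumptions):

```python
# Stochastic rounding onto a fixed-point grid, driven by a 16-bit Galois LFSR.
# Values below one LSB get rounded up with probability value/LSB, so tiny
# updates are applied occasionally instead of always vanishing to zero.
LSB = 2.0 ** -15    # assumed fixed-point resolution
_lfsr = 0xACE1      # any non-zero seed

def lfsr16():
    """16-bit Galois LFSR (taps 16,14,13,11); returns a pseudo-random 16-bit int."""
    global _lfsr
    lsb = _lfsr & 1
    _lfsr >>= 1
    if lsb:
        _lfsr ^= 0xB400
    return _lfsr

def stochastic_round(value, lsb=LSB):
    """Round value to the fixed-point grid, rounding away from zero with
    probability proportional to the remainder rather than always truncating."""
    scaled = value / lsb
    base = int(scaled)                  # truncate towards zero
    frac = scaled - base
    if (lfsr16() / 65536.0) < abs(frac):
        base += 1 if value >= 0 else -1
    return base * lsb

# a 5e-8 update now gets applied (as one LSB) roughly 0.16% of the time,
# which averages out to the right magnitude over many timesteps
updates = [stochastic_round(5e-8) for _ in range(1_000_000)]
print(sum(updates) / len(updates))      # ~5e-8 on average
```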
@studywolf - we've merged new_ensemble-new_learning into master, which should mean the transform issue is fixed. Do you mind looking to see if you still have the same problem? If so, we can think about how to fiddle with the fixed point.
Ah cool! I don't actually have access to a spinnaker board at the moment, @tcstewar could you try out the fix?
Oh wait, neither does he. @hunse, maybe you could get Brent in on it?
@bjkomer?
I can check on Friday when I get back.... :)
I was playing around with some more learning on SpiNNaker, and I'm getting some discrepancies between nengo and nengo_spinnaker. Basically it looks like the number of neurons might not be getting taken into account in nengo_spinnaker. In nengo there's a part of the learning rule (which I belieeeeve is just a 1/n_neurons scaling) that makes sure learning occurs at the same rate no matter how many neurons there are. So maybe that's not implemented?
One place where this came up: when I tried to increase the number of neurons to learn a more complex function, I got oscillatory behaviour in the learning; but when I dropped the learning rate, the change in weights became too small to get picked up by the system, so no learning occurred at all!
Here are some graphs, from nengo:
and then nengo_spinnaker:
I'm not sure what's going on in the second half there; it's possibly completely unrelated to the number of neurons. Actually, as I'm looking at it, it doesn't really look like the 2000-neuron population is just overshooting after that first bump... maybe things are saturating and then going to hell... I'm not sure. Here's the code I used:
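The original script isn't reproduced here, but a minimal sketch of the kind of model being described (a large pre ensemble learning a function via PES; the input signal, target function, and ensemble sizes are made-up choices) might look like this, with the simulator line swapped to run on the board:

```python
import numpy as np
import nengo

n_neurons = 2000
learning_rate = 1e-4

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t / 4.0))
    target = nengo.Node(lambda t, x: x ** 2, size_in=1)   # function to be learned
    nengo.Connection(stim, target)

    pre = nengo.Ensemble(n_neurons, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim, pre)

    # start from a zero function and let PES adapt the decoders
    conn = nengo.Connection(pre, post, function=lambda x: 0,
                            learning_rule_type=nengo.PES(learning_rate=learning_rate))

    # error = actual - target drives the PES rule
    error = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(target, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

    p_post = nengo.Probe(post, synapse=0.02)
    p_target = nengo.Probe(target, synapse=0.02)

# swap in nengo_spinnaker.Simulator(model) to run on the board
sim = nengo.Simulator(model)
sim.run(20.0)
```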
Also, I am on the most recent nengo and the new_ensemble-new_learning branch of nengo_spinnaker.