SpiNNakerManchester / sPyNNaker

The SpiNNaker implementation of the PyNN neural networking language
Apache License 2.0

Supporting parameter changes between runs #181

Closed felix-schneider closed 7 years ago

felix-schneider commented 8 years ago

As a more general idea than #180, it would be nice to have a mechanism by which neuron models can accept changes to their parameters, which are written to SpiNNaker SDRAM before the next run to update the C applications running on the machine. Two possible uses for this (that I see right now) are changing the rate of a SpikeSourcePoisson (as in #180), and changing the rate of a regularly firing neuron (a neuron with a positive I_bias).

From what I understand of the mapping process, one would add an algorithm to the list of "partitioning algorithms" that are run with do_mapping and have vertices track changes to their parameters.

I have two questions about this:

  1. How does this interact with reset? That seems to have been a concern the last time we discussed a similar feature.
  2. How should the SpiNNaker application be notified of possible changes? A flag that is written to SDRAM? SDP?

alan-stokes commented 8 years ago

Hi Felix,

The general idea of changing params is not just about changing the spike rate, but about any of the parameters set by the population, such as I_offset, V_rest etc. These do indeed affect the spike rate, but I wouldn't define the feature in those terms.

I don't think you'd need to do anything that severe. My logic would be that:

  1. When the population runs a set function for neuron params and has already run (which can be found from spinnaker.py), it calls set on the vertex with a transceiver.
  2. The vertex then updates its internal state (and here you would need to store the original value).
  3. It uses the transceiver to read the app pointer table on chip and locate where it needs to write the new param.
  4. You'd also need to add a new SDP message to re-read the neuron region, and modify the C code to understand and act on it.

During reset, all you'd need to do is:

  1. tell each vertex to reset its neuron params, and it would switch the neuron params back to the originals in Python. The C side won't matter, as the originals are in the application data file stored on the host disk, and so will reach the C code during the reset functionality.
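The steps above can be sketched in Python. This is an illustrative toy only: the class and method names (`TransceiverStub`, `NeuronVertex`, `set_param`) are hypothetical, not the real sPyNNaker API, and the real code would locate the write address via the app pointer table and then trigger a re-read with an SDP message.

```python
import struct

class TransceiverStub:
    """Stands in for the real transceiver; records writes in a dict keyed by address."""
    def __init__(self):
        self.sdram = {}

    def write_memory(self, address, data):
        self.sdram[address] = data

class NeuronVertex:
    def __init__(self, params, neuron_region_address):
        self._params = dict(params)
        self._original_params = dict(params)   # step 2: keep the originals for reset
        self._region_address = neuron_region_address

    def set_param(self, name, value, transceiver):
        # steps 1-2: update the vertex's internal state
        self._params[name] = value
        # step 3: write the new value where the C code will re-read it
        # (a real implementation would also send an SDP "reload params" message)
        transceiver.write_memory(self._region_address, struct.pack("<f", value))

    def reset_params(self):
        # on reset, switch back to the originals held in Python
        self._params = dict(self._original_params)
```

On reset, only the Python side needs touching, matching step 1 above: the originals reload from the application data file on the C side anyway.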

Interestingly, what's the original value if you build a pop and then set its params before calling run? Is it the first or second value? Mmmmm, food for thought.

alan-stokes commented 8 years ago

Just to add, having had food for thought in both senses: I looked at the PyNN 0.7 API and found this bit in the definition of run:

"If you wish to reset the simulation state to the initial conditions (time t = 0), use the reset() function."

So I'd interpret that as meaning that changing a neuron param before run changes the original parameter, and is not considered a change in these terms, as t is still 0 until run has been called.

Hopefully @apdavison will correct me if I'm wrong.

Alan

rowleya commented 8 years ago

I would agree that reset should reset to just before run was called the first time. Generally users will set up their network and then change a couple of the parameters (e.g. initialise_v, or setting some of the neural parameters) before calling run for the first time, so you don't want to undo those changes!

apdavison commented 8 years ago

reset() should not change parameter values (i.e. there is no concept of "the original" parameter values), but it should set state variables to their initial values (hence the arguments given to initialize() need to be cached).
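These semantics can be captured in a small toy model (names are illustrative, not the real PyNN implementation): parameters survive reset() untouched, while state variables go back to the cached initialize() values.

```python
class Population:
    """Toy model of the reset semantics described above."""
    def __init__(self, **parameters):
        self.parameters = parameters          # never touched by reset()
        self.state = {"v": -65.0}             # state variables (live values)
        self._initial_values = {"v": -65.0}   # cached initialize() arguments

    def initialize(self, variable, value):
        # cache the initial value so reset() can restore it later
        self._initial_values[variable] = value
        self.state[variable] = value

    def set(self, name, value):
        self.parameters[name] = value

    def reset(self):
        # parameters keep their current values; only state variables
        # return to the cached initial values
        self.state = dict(self._initial_values)
```

Note that because the cache is overwritten on each call, a later initialize() naturally wins over an earlier one.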

felix-schneider commented 8 years ago

Except for membrane voltages, what are other state variables in the currently supported neuron models?

rowleya commented 8 years ago

It depends on the neuron model - LIF neurons only have membrane voltage as a state variable but IZK neurons have v and u. Each model should know its state variables though, so if this is done through an interface, it shouldn't matter. Basically, the model should store the initial values of the state variables, and if reset is called, it should write these values back in to SDRAM (or whatever mechanism is going to be used - basically they should reach the DTCM of the relevant cores).
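A minimal sketch of such an interface, with hypothetical names: each model declares its own state variables, so the snapshot and reset code stays model-agnostic.

```python
class LIFModel:
    # LIF neurons only have membrane voltage as a state variable
    state_variables = ("v",)

class IzhikevichModel:
    # IZK neurons have both v and u
    state_variables = ("v", "u")

def snapshot_initial_state(model, values):
    """Cache the initial value of every state variable the model declares."""
    return {var: values[var] for var in model.state_variables}

def reset_state(model, current, initial):
    """Write the cached initial values back over the live ones
    (in practice: back into SDRAM so they reach the cores' DTCM)."""
    for var in model.state_variables:
        current[var] = initial[var]
```

The reset machinery never needs to know which model it is handling; it just iterates the declared variables.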

felix-schneider commented 8 years ago

If I call initialize_v() several times (maybe with a run in between) and then reset(), which value does the model reset to?

Does the Data Specification need to be updated when a parameter value changes?

apdavison commented 8 years ago

If I call initialize_v() several times (maybe with a run in between) and then reset(), which value does the model reset to?

It should reset to whatever value was set by the last (most recent) call.

(By the way, the PyNN API does not have an initialize_v() function or method. It should be initialize(cells, 'v', value) or population.initialize('v', value))

rowleya commented 8 years ago

We have an interface between initialize and the models - in models with a v state variable it is initialize_v (in izk there is also an initialize_u).

felix-schneider commented 8 years ago

That leaves the other question: What exactly is the purpose of the Data Specification and does it need to be regenerated when we change a parameter?

rowleya commented 8 years ago

The data specification is a mini-program that, in the future, will be executed on chip. This will avoid the need to transfer as much data to the machine, as the machine will then be able to expand things like parameters and connectivity itself. Currently, the spec is executed on the host machine, and the generated data is transferred to the machine.

You don't need to regenerate the spec if you are just going to pass the parameters directly. If you are going to do something more complex, you might use the spec as an intermediate, however the spec executor on chip will be a separate binary, so you would have to stop and start this.

If you are only allowing the alteration of parameters between runs, I wouldn't worry too much about the data transfer speed at this point, as the parameters are usually not too big anyway. In this case, don't worry about the data spec. If you are looking at changes to the synapse parameters, this will become more important.
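To make the "mini-program" idea concrete, here is a heavily simplified sketch: a spec as a list of (command, args) tuples that an executor expands into per-region byte blobs. The command names and encoding here are illustrative; the real DSG/DSE command set is far richer.

```python
import struct

def execute_spec(spec):
    """Expand a spec (list of (command, *args) tuples) into per-region byte blobs."""
    regions, current = {}, None
    for command, *args in spec:
        if command == "reserve_region":
            regions[args[0]] = bytearray()
        elif command == "switch_region":
            current = args[0]
        elif command == "write_word":
            # append one little-endian 32-bit word to the current region
            regions[current] += struct.pack("<I", args[0])
    return regions
```

The point of executing this on chip is that the compact spec, not the expanded data, is what crosses the wire.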

AlexRast commented 8 years ago

On 14/12/15 16:33, Felix Schneider wrote:

> In a more general idea than #180, it would be nice to have a mechanism by which neuron models can accept changes to their parameters, which they write to SpiNNaker SDRAM before the next run to update the C applications running on the machine. Two possible uses for this (that I see right now) are changing the rate of a SpikeSourcePoisson (as in #180), and changing the rate of a regularly firing neuron (a neuron with a positive I_bias).

I just wanted to add a comment that the second of these ought in future to be implemented via the StepCurrentSource. Since that's likely to use buffered in, it could easily support a current injection that was itself drawn from some external utility or program. So I think dynamically tuned rate neurons might best be handled like this.


alan-stokes commented 8 years ago

Howdi all,

So, having caught up with all this: it does produce a wee little issue for reset. The current reset just takes the application data file generated from DSE and reloads that onto the machine; even when DSE on chip happens, this will still be the initial DSG file, which includes the neuron params at t=0, as well as new binary images. As @apdavison stated, the reset should keep the neuron params at the LAST state, so a code run such as -

pop(v_offset = x), set(v_offset = y), run(t), run(t2), set(v_offset, z), run(t3), set(v_offset, a), reset()

The reset should result in v_offset == a, whereas the current reset will make it == y

This would mean reset either -

  1. needs to rerun DSG and DSE for any vertex with neuron param changes, and load the result over, or
  2. load the original application data (which resets weights, delays etc. [all the state variables]) and then load the neuron param changes separately, before the binary is loaded onto the machine. That will need a new interface function to be plugged in, as you'll need to iterate through the vertices and ask if they need to load new changes.

The second does mean there's a mismatch between the DSG output and what's on SDRAM, but for speed, I think we could live with that.
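Option 2 can be sketched as follows. All names here (`reset_machine`, `has_param_changes`, the stubs) are hypothetical, invented to illustrate the proposed interface hook, not existing sPyNNaker functions.

```python
class TransceiverStub:
    """Records memory writes in a dict keyed by address."""
    def __init__(self):
        self.sdram = {}

    def write_memory(self, address, data):
        self.sdram[address] = data

class VertexStub:
    def __init__(self, address, changes=None):
        self.address = address
        self.changes = changes

    def has_param_changes(self):
        return self.changes is not None

    def write_param_changes(self, transceiver):
        transceiver.write_memory(self.address, self.changes)

def reset_machine(vertices, transceiver, original_data):
    # 1. reload the original DSE output wholesale
    #    (this resets weights, delays, and all the state variables)
    for address, data in original_data.items():
        transceiver.write_memory(address, data)
    # 2. new interface hook: any vertex with neuron param changes
    #    overlays them before the binaries are restarted
    for vertex in vertices:
        if vertex.has_param_changes():
            vertex.write_param_changes(transceiver)
```

The overlay runs after the bulk reload, so the latest param values win, at the cost of the DSG/SDRAM mismatch noted above.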

Alan

felix-schneider commented 8 years ago

So I'm writing this and I'm a bit stuck on this problem:

How do I find out the correct memory address in Python? I assume the writing to memory is to be done with Transceiver.write_memory, but it doesn't accept a region and an offset; it needs an actual address. I can find the region with constants.POPULATION_BASED_REGIONS.NEURON_PARAMS.value, and I can probably figure out any offset I need, but how do I get the actual address from that?
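One way this lookup can work, sketched with hypothetical names and a made-up table layout (the real app pointer table format, including its header size, differs and would need to be checked against the sPyNNaker C code): the pointer table for a core holds one 32-bit base pointer per region, so reading the right entry yields the region's address, to which the offset is then added.

```python
import struct

# Hypothetical: number of header bytes before the region pointers start.
POINTER_TABLE_HEADER = 8

def region_base_address(memory, table_address, region_index):
    """Look up one region's base address from the app pointer table.

    `memory` stands in for reading SDRAM via the transceiver.
    """
    entry = table_address + POINTER_TABLE_HEADER + 4 * region_index
    return struct.unpack("<I", memory[entry:entry + 4])[0]
```

The actual write address would then be `region_base_address(...) + offset`, which is what Transceiver.write_memory needs.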

felix-schneider commented 8 years ago

I managed it. I tested for SpikeSourcePoisson rate changes and it seems to work.

I have created a pull request #182 as well as SpiNNakerManchester/SpiNNFrontEndCommon#31 that you can review and test at your convenience.

Edit: I should perhaps note that I tested it for a single-neuron SpikeSourcePoisson. I will test more tomorrow.

Edit 2: It worked for a Population of 999 spike sources, which should be divided into two vertices as far as I understand. I'm frankly amazed that that worked.

alan-stokes commented 7 years ago

This has been superseded by changes that support updating all neuron params between run calls in PyNN. Works in master.