AStupidBear / SpikingNeuralNetworks.jl

Julia Spiking Neural Network Simulator

change from internal integrator to DiffEq.jl (only IZ) #16

Open · Modatu opened this pull request 4 years ago

Modatu commented 4 years ago

This is only a first try and so far introduces breaking changes!

What works: integration of several IZ neurons over time (see examples/iz_neuron.jl) in a broadcasting-friendly manner (thanks a lot to @jpsamaroo).
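For readers following along, here is a minimal self-contained sketch of that pattern. The names IZParameter and izh!, the constant input current, and the solver choice are illustrative assumptions rather than the exact code in this PR; the point is that the population state lives in one ArrayPartition, so the Izhikevich equations broadcast over all neurons at once, and the spike-and-reset rule becomes a DiscreteCallback:

using OrdinaryDiffEq, RecursiveArrayTools

struct IZParameter{T}   # hypothetical immutable parameter set
    a::T; b::T; c::T; d::T
end

# In-place RHS for a population: u.x[1] holds the membrane potentials v,
# u.x[2] the recovery variables, so everything broadcasts over N neurons.
function izh!(du, u, p, t)
    v, w = u.x
    dv, dw = du.x
    @. dv = 0.04f0 * v^2 + 5f0 * v + 140f0 - w + 10f0   # constant input current
    @. dw = p.a * (p.b * v - w)
    return nothing
end

N  = 5
u0 = ArrayPartition(fill(-65f0, N), fill(-13f0, N))   # (v, u) at rest
p  = IZParameter(0.02f0, 0.2f0, -65f0, 8f0)           # regular-spiking preset

# Spike-and-reset: when any v crosses 30 mV, reset it to c and bump u by d.
condition(u, t, integrator) = any(v -> v > 30f0, u.x[1])
function affect!(integrator)
    v, w = integrator.u.x
    spiked = v .> 30f0
    @. v = ifelse(spiked, integrator.p.c, v)
    @. w = ifelse(spiked, w + integrator.p.d, w)
end

prob = ODEProblem(izh!, u0, (0f0, 100f0), p)
sol  = solve(prob, Euler(); dt = 0.1f0, callback = DiscreteCallback(condition, affect!))

The callback plays the role of the fire/affect! machinery discussed below: the continuous dynamics stay in the RHS, and the discontinuous spike-and-reset lives in the callback.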

Questions that arise for me now:

- can parameters be in ArrayPartitions, too? So that we can have different types of neurons in one layer?
- do we need the IZ struct additionally to the IZParameter? Is that now better handled through the fire and affect! functions?

Next steps (as suggested by @jpsamaroo):

- "thread" the du parameter through the functions leading from sim! to integrate! for in-place (IIP) integration

jpsamaroo commented 4 years ago

Great work!

> can parameters be in ArrayPartitions, too? So that we can have different types of neurons in one layer?

Yes, absolutely, that's a great idea!
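A toy illustration of the idea (the layout is an assumption, not code from this PR): each per-neuron parameter becomes an ArrayPartition partitioned exactly like the state, so one broadcast covers a layer that mixes neuron types.

using RecursiveArrayTools

# One layer, two neuron types: 3 regular-spiking and 2 fast-spiking cells.
# Parameters are partitioned like the state, so p.x[i] and u.x[i] always
# refer to the same group of neurons.
a = ArrayPartition(fill(0.02f0, 3), fill(0.1f0, 2))   # RS vs. FS recovery rate
b = ArrayPartition(fill(0.2f0, 3), fill(0.2f0, 2))
v = ArrayPartition(fill(-65f0, 3), fill(-65f0, 2))
w = b .* v                          # recovery variables at their resting values
dw = @. a * (b * v - w)             # a single broadcast over both populations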

> do we need the IZ struct additionally to the IZParameter? Is that now better handled through the fire and affect! functions?

I think we generally want to keep the IZ struct, as we need to keep (at a minimum) the v and u data somewhere.
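Roughly (field names here are hypothetical), the split would look like this: the immutable parameters travel as the ODE's p, while the IZ struct keeps the mutable per-neuron state alive between calls.

struct IZ{VT<:AbstractVector, PT}
    param::PT            # IZParameter (a, b, c, d), immutable
    v::VT                # membrane potentials
    u::VT                # recovery variables
    fire::Vector{Bool}   # spike flags for the current step
end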

One thing that needs doing: you haven't yet added the new dependencies (OrdinaryDiffEq, Parameters, etc.) to the Project.toml, so this package won't necessarily work on a regular user's system until that's done.
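For reference, one way to record the dependencies is from the Pkg REPL with the package environment active, which writes the names and UUIDs into Project.toml automatically (RecursiveArrayTools, assumed here for ArrayPartition, may also be needed):

julia> ]                          # enter Pkg mode from the package root
pkg> activate .
(SpikingNeuralNetworks) pkg> add OrdinaryDiffEq Parameters RecursiveArrayTools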

I'm going to make my own PR or two to get the package ready for this PR, specifically I can handle:

> "thread" du parameter through functions leading from sim! to integrate! for in-place (IIP)
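In other words (hypothetical signatures, assuming the integrate!(du, u, p, t) method from this PR), sim! would allocate the derivative buffer once and hand it down, so no allocation happens per step:

function sim!(u, p; dt = 0.1f0, duration = 100f0)
    du = similar(u)                # allocated once, reused on every step
    for t in 0f0:dt:duration
        integrate!(du, u, p, t)    # writes derivatives into du in place (IIP)
        @. u += dt * du            # explicit-Euler update, for illustration only
    end
    return u
end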

jpsamaroo commented 4 years ago

Sorry, nevermind, this PR hasn't yet been rebased on master. @Modatu can you rebase on SpikingNeuralNetworks master branch?

Modatu commented 4 years ago

Can I rebase? Isn't that only possible with write permissions on this repo?

jpsamaroo commented 4 years ago

You do the rebase locally and then force-push to your branch. Assuming you push your commits to a remote called Modatu, the main SpikingNeuralNetworks repository is origin, and you've already got your feature_diffeq_integrator branch checked out, you can do:

git fetch --all
git rebase origin/master
git push -f Modatu feature_diffeq_integrator

If you'd like practice with git rebase, I recommend trying the tutorial at https://git-rebase.io/

Modatu commented 4 years ago

@jpsamaroo Thanks! I rebased onto master and added the dependencies to the Project.toml.

Modatu commented 4 years ago

Going forward, I will try to incorporate the IZParameter into the ArrayPartition. I have an idea of how this could work.

jpsamaroo commented 4 years ago

Ok @Modatu, can you rebase again? My latest PR has made it so that we can specify what types to use for neuron/synapse internal fields, which is key for fancy things like autodiff or running on GPUs (although it may not have been strictly required, it's very good to have).

What we need to do now is to modify src/main.jl to use a DiffEq interface (like you're using here), and then modify the rest of the neuron/synapse models to use the same interface. I didn't change the actual simulation interface like I said I would, since that would have made it hard to keep my PR non-breaking while also not inserting dead/ugly code.

Let me know if you'd like help with any of this, as I can push commits to your branch.

jpsamaroo commented 4 years ago

Also regarding the ArrayPartition: we should automatically convert our neuron/synapse structs into two ArrayPartitions within the simulator interface functions in src/main.jl: one for the parameters, the other for the rest of the fields, in general. Then we can construct the ODEProblem and call solve for the user with a default solver and the specified timestep. Advanced users should be expected and encouraged to do some of this themselves to exert more control, and I think we can cater to them with conveniences in future PRs.
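A rough sketch of that flow for a single population (names are hypothetical, and the real version would gather every population into the two ArrayPartitions described above):

using OrdinaryDiffEq, RecursiveArrayTools

function simulate(iz, rhs!; dt = 0.1f0, duration = 100f0)
    u0 = ArrayPartition(iz.v, iz.u)             # mutable fields -> ODE state
    p  = iz.param                               # parameters -> the ODE's `p`
    prob = ODEProblem(rhs!, u0, (0f0, duration), p)
    solve(prob, Euler(); dt = dt)               # default solver, fixed timestep
end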

Note that some of the models that use SparseArrays probably won't work well like this (anything that mentions rowptr/colptr is usually sparse array-based); we might need to change those models to use a different implementation. I would worry about those last.

Modatu commented 4 years ago

@jpsamaroo I did the rebase, but it was a little bit confusing and the auto-merge did not really work. I hope it is OK this way.

I looked at integrating the parameters into ArrayPartitions; that was not too straightforward at first glance. What should be fairly easy for me to do is changing the simulator interface in src/main.jl.

jpsamaroo commented 4 years ago

Ok, no worries, I'm fixing it up locally and also getting the interface prepared so that this actually works. I'll ping you once it's pushed.

jpsamaroo commented 3 years ago

I've completed the internal changes to make SNN's sim! API use OrdinaryDiffEq for solving, and done so in a way that is almost 100% type-stable and should be quite performant (although it now requires an extra step, SNN.prepare, to construct an efficient object to be used for solving). I need to convert the rest of the models to this format, and then make sure that most of the pre-existing API still works.

One thing to note is that the old sim!(N, S) call is now quite inefficient, since it constructs a new object from ArrayPartitions internally; however, the tradeoff is that solving of larger models for longer durations will be much more efficient, and is now trivial to parallelize and run on accelerators.
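Based on that description, the intended workflow would look roughly like this (the call shapes are my guess at the API, not the final code):

using SpikingNeuralNetworks
const SNN = SpikingNeuralNetworks

# N: neuron populations, S: synapses, as in the existing API.
SNN.sim!(N, S)                       # old path: still works, but rebuilds the
                                     # ArrayPartitions internally on every call

prepared = SNN.prepare(N, S)         # pay the conversion cost once...
SNN.sim!(prepared; duration = 1000)  # ...then reuse the solver-ready object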

Modatu commented 3 years ago

@jpsamaroo: Sorry for the long delays. Last year and this year are somewhat crazy around here. ;-) I pulled your changes, but now the code in examples/iz_neuron.jl does not work anymore. There are also some odd things I cannot wrap my head around: integrate!(_du, _u, p::IZ, t) no longer uses t, and I also do not really follow your changes in the integrate! function. Likewise,

function condition(iz::IZ, u)
    v, u = u.x
    any(_v -> _v > 30f0, v)
end

does not use its iz argument.

BTW, if I try to run your code in examples/iz_neuron.jl I get:

ERROR: LoadError: MethodError: no method matching integrate!

Otherwise, I really like the progress!

jpsamaroo commented 3 years ago

Due to the complexity of this approach, and my lack of free time, I've mostly shelved this PR. I know @wsphillips is interested in doing this sort of thing, but he's starting from a purely DiffEq/MTK approach, which I think is probably a better way to get started.

Modatu commented 3 years ago

Ok, that's unfortunate. Is @wsphillips working on that from scratch, or does he want to incorporate it into SpikingNeuralNetworks.jl? It would be awesome if we could have something like Nengo in Julia. I would be very interested to simulate/work with Spaun- or Leabra-like models. And I am happy to help, if I can.

wsphillips commented 3 years ago

> Is @wsphillips working on that from scratch or does he want to incorporate it into SpikingNeuralNetworks.jl?

Writing from scratch, using ModelingToolkit.jl + component modeling, so that we can flexibly performance-optimize user models and take advantage of the SciML ecosystem.

Modatu commented 3 years ago

Sounds great! I opened a discussion in your FutureNeuralSimulation repo. Maybe we can move there for further discussion?