leaflabs / WaspNet.jl


Add Liquid State Machine network #13

Closed GuillaumeLam closed 2 years ago

GuillaumeLam commented 3 years ago

A reservoir computing paradigm in which a highly recurrent reservoir is fed inputs and the outputs are observed.

SBuercklin commented 3 years ago

What would need to be added to support a liquid state machine?

You can define recurrent layers within WaspNet already; you just need to specify which Layers feed into any given Layer, which can include itself. From there, a layer of LIF neurons can act as your LSM and you perform linear regression on the outputs to predict. That regression can be handled by any existing regression package within Julia.
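As a rough sketch of what that could look like (the sizes and weights are made up, and the exact constructor arguments should be checked against the docs; this assumes a Layer constructor that takes a vector of weight matrices together with a conns vector naming which layers feed in, with 0 for the network input and the layer's own index for the recurrent connection):

using WaspNet

n_in, n_res = 4, 64

# Reservoir layer: fed by the network input (index 0) and by itself (index 1)
res_neurons = [WaspNet.LIF() for _ in 1:n_res]
W_in  = randn(n_res, n_in)          # input -> reservoir weights
W_rec = 0.1 .* randn(n_res, n_res)  # reservoir -> reservoir (recurrent) weights
reservoir = Layer(res_neurons, [W_in, W_rec], [0, 1])

net = Network([reservoir], n_in)

# After simulating and collecting reservoir states X (T x n_res) and targets y (length T),
# a linear readout is just least squares:
# w_out = X \ y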

This is my understanding of LSMs, but if there are additional features that I am missing, let me know.

GuillaumeLam commented 3 years ago

Your understanding seems to be right on the money! The recurrent layers in WaspNet will definitely simplify the task. I am, however, trying to follow the methodology of this paper, as it seems to have quite good stability (https://www.frontiersin.org/articles/10.3389/fnins.2019.00883/full). To stabilize the LSM, they use both excitatory and inhibitory neurons. Since excitatory neurons are already included, I would simply need to add the second type. Additionally, since there might be quite a few neurons in the LSM, I was thinking of using sparse arrays; any suggestions or concerns?

Eventually, the goal of my project is to add a learning rule for the LSM portion. Some learning rules re-weight connections based on the timing of spikes, which seems feasible since past spike data is accessible. Do you have any design ideas on how to simplify this for the future, or should it be relatively straightforward?
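For concreteness, the kind of rule I have in mind is a pair-based STDP update along these lines (just a generic sketch, not tied to WaspNet internals; the names and constants are placeholders):

# Pair-based STDP: potentiate when the presynaptic spike precedes the postsynaptic one,
# depress otherwise. t_pre and t_post are the most recent spike times of each neuron.
function stdp_dw(t_pre, t_post; A_plus=0.01, A_minus=0.012, tau=20e-3)
    dt = t_post - t_pre
    return dt >= 0 ? A_plus * exp(-dt / tau) : -A_minus * exp(dt / tau)
end

# Re-weight a connection matrix given each neuron's last spike time
function apply_stdp!(W, last_spikes)
    for post in axes(W, 1), pre in axes(W, 2)
        W[post, pre] += stdp_dw(last_spikes[pre], last_spikes[post])
    end
    return W
end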

Thank you for your time!

GuillaumeLam commented 3 years ago

Also, I would like to add the ability to initialize the LSM with a given topology (e.g., by supplying a brain map or something similar). It seems like it should be straightforward to flatten that into a layer. I was also wondering whether the pruning utility function could be applied to the LSM layer? Since this layer will probably contain quite a few neurons, pruning would likely speed up simulation time while giving similar results!
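To give an idea of what I mean by a topology: something like placing neurons on a 3D grid and connecting pairs with a distance-dependent probability (as in Maass-style LSMs), then flattening that into a single weight matrix. The function and parameters below are just placeholders for illustration:

using SparseArrays, LinearAlgebra

# Place neurons on a 3D grid and connect pairs with probability C*exp(-(d/lambda)^2),
# flattening the spatial topology into one (sparse) weight matrix.
function grid_connectivity(dims::NTuple{3,Int}; C=0.3, lambda=2.0)
    coords = vec([(x, y, z) for x in 1:dims[1], y in 1:dims[2], z in 1:dims[3]])
    n = length(coords)
    W = spzeros(n, n)
    for j in 1:n, i in 1:n
        i == j && continue
        d = norm(collect(coords[i]) .- collect(coords[j]))
        if rand() < C * exp(-(d / lambda)^2)
            W[i, j] = randn()
        end
    end
    return W
end

W_res = grid_connectivity((5, 5, 4))   # 100-neuron reservoir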

SBuercklin commented 3 years ago

I'm not sure if he still uses this, but @charleswfreeman was doing LSM work with WaspNet.jl while we developed it (nearly a year ago at this point). He may be able to speak to some of your questions.

Regarding the usage of SparseArrays, Layer{L, N, A, M} has M<:AbstractArray{T,2}, so if you supply a SparseArray it should just work. At the sizes I used, I didn't see much of a speedup, but past a certain point and with appropriate sparsity, you should see improved performance. That will likely have to be determined empirically.
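Concretely, assuming the two-argument Layer(neurons, W) constructor from the README, it should be as simple as handing Layer a sparse weight matrix (the sizes and density here are arbitrary):

using WaspNet, SparseArrays

n_res, n_in = 512, 16

res_neurons = [WaspNet.LIF() for _ in 1:n_res]
W_sparse = sprandn(n_res, n_in, 0.05)  # roughly 5% nonzero entries

reservoir = Layer(res_neurons, W_sparse)  # sparse matrices are AbstractArray{T,2}, so this should drop in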

The pruning utility was developed essentially for that purpose. I don't recall how it was implemented, but if it's working for you for now, that's good to hear.

Generally speaking, for any new neuron types (inhibitory in your case), you should feel free to implement them. If you think they'd be useful to other people, you can write up some test cases (/test/neuron_tests) and submit a PR.

GuillaumeLam commented 3 years ago

Hello, my progress is going well. I have a full model running through the simulation and producing numbers, and I simply need to train the last layer. For the inhibitory neurons, I added a negative spike to mimic hyperpolarization in the postsynaptic neuron. However, I was thinking that it could be a good idea to redesign AbstractNeuron to allow for an excitatory and an inhibitory version of every neuron implementation. My idea would be to have a spike/output method that every AbstractNeuron implements (excitatory -> output = 1, inhibitory -> output = -1). I am not entirely sure whether Julia would work best in this fashion, so please let me know if a simpler implementation comes to mind. Either way, this would allow all the current code, like WaspNet.LIF(), to remain valid while adding the functionality I need.

SBuercklin commented 3 years ago

Oh, this was something I really liked about WaspNet! If your inhibitory neurons are the same as your excitatory neurons, other than the sign of the output spike being different (+1 for excitatory, -1 for inhibitory), you should be able to do something I called a NeuronWrapper. This assumes, say, that the dynamics of an LIF neuron remain the same; you just need to invert the output signal. It works something like this:

import WaspNet: update, AbstractNeuron # import so the new method extends WaspNet's update function

# Wrapper neuron that delegates all dynamics to the neuron it wraps
struct InhibNeuron{N<:AbstractNeuron} <: AbstractNeuron
    inner_neuron::N
end

function update(neuron::InhibNeuron, input_update, dt, t)
    inner_output, return_neuron = update(neuron.inner_neuron, input_update, dt, t) # update the inner neuron

    return (-1*inner_output, InhibNeuron(return_neuron)) # flip the sign of the outgoing spike
end

Now, this is a weird AbstractNeuron which has another AbstractNeuron inside of it. The dynamics of this neuron work exactly the same as those of the inner neuron; the only difference is that the output is now multiplied by -1. If I understand correctly, this should give you your inhibitory neuron, since a +1 from the inner neuron would become a -1, effectively turning it inhibitory.
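Using it would presumably just mean wrapping the constructors when you build your layer, something like this (the 80/20 split is arbitrary):

# A layer population mixing excitatory and inhibitory LIF neurons
neurons = vcat(
    [WaspNet.LIF() for _ in 1:80],             # excitatory
    [InhibNeuron(WaspNet.LIF()) for _ in 1:20] # inhibitory
)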

I haven't tested this specific case, but I did something similar when I was using WaspNet.jl more regularly: I convolved spikes with an exponentially decaying kernel to add a temporal component to the spikes. If this approach sounds interesting, I'd suggest playing around with it and seeing if it works for you. As always, let me know if you have any questions.
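For reference, the kernel idea was roughly the following: replace each binary spike train with its convolution against a causal exponential decay, so recent spikes leave a fading trace. A generic sketch, not the exact code I used:

# Filter a binary spike train with a causal exponentially decaying kernel
# so each spike contributes a decaying trace rather than a single-sample pulse.
function exp_filter(spikes::AbstractVector, dt; tau=20e-3)
    trace = zeros(length(spikes))
    decay = exp(-dt / tau)
    acc = 0.0
    for i in eachindex(spikes)
        acc = acc * decay + spikes[i]
        trace[i] = acc
    end
    return trace
end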

GuillaumeLam commented 3 years ago

Thank you for the code block. This is definitely a more elegant way to achieve the same result! I am still a beginner at Julia, so the best practices are still a bit unknown to me. Thanks!