djsaunde opened this issue 5 years ago
Yes, you are right. We need to discuss how dynamic synapses can interact with minibatching.
We do, however, have a weight decay that also needs to be addressed with minibatching.
Weight decay is not affected by the simulation (it's set as a constant in advance), so nothing needs to change when using minibatching.
Also, I've worked out a way to implement exponentially decaying synaptic currents without needing to duplicate weights across the minibatch dimension. I'll implement it when I get a chance.
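Roughly, what I have in mind (a minimal sketch with hypothetical names, not BindsNET's actual API): only the current state carries a batch dimension, while the weight matrix stays shared across the minibatch.

```python
import math

import torch

# Hypothetical sketch (class name and signature are illustrative, not
# BindsNET's actual API): keep the decaying synaptic current as a stateful
# tensor with a batch dimension, while the weight matrix stays shared.
# Memory then grows with batch_size * n_post, not batch_size * n_pre * n_post.
class DecayingSynapseSketch:
    def __init__(self, n_pre, n_post, batch_size, tau=math.inf, dt=1.0):
        self.w = torch.rand(n_pre, n_post)        # one shared weight matrix
        # Per the proposal, tau = inf is the default sentinel meaning "no
        # synaptic dynamics": the old current is discarded every step.
        self.decay = 0.0 if math.isinf(tau) else math.exp(-dt / tau)
        self.i = torch.zeros(batch_size, n_post)  # per-sample current state

    def compute(self, s: torch.Tensor) -> torch.Tensor:
        # s: pre-synaptic spikes, shape (batch_size, n_pre), float tensor.
        # Decay the existing current, then add the fresh contribution s @ w.
        self.i = self.decay * self.i + s @ self.w
        return self.i
```

For example, with `tau=100.0` and `dt=1.0`, the stored current is scaled by `exp(-0.01) ≈ 0.99` each step before the new input is added; with the default sentinel, `compute` reduces to `s @ w`, matching the existing behavior.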
Hi guys! Has there been any update on this? Is it supposed to simulate the varying concentration of Na+ ions in the synaptic cleft (https://pubmed.ncbi.nlm.nih.gov/42066/)? Also, I'm curious what the initial value of the decaying synaptic current would be.
BindsNET can simulate various changes (linear or non-linear) to connection conductances and to neuron behavior. However, it does not simulate the exact behavior of specific ion channels; to add such behavior, one needs to adapt the code accordingly.
Ahh yes. I was merely giving a possible example of the biological plausibility of a decaying synaptic current, as @djsaunde had described. I was thinking of working on this, as I've gotten familiar with the BindsNET code, if no one's currently taken it up.
Wonderful!
Now that I'm looking deeper into it, I'm having trouble visualizing how this would differ from the decay already implemented in LIF neurons. Even if we make the decay per synapse, wouldn't summing it up during forward prop lead to the same end result as the temporal decay in LIF (see: https://github.com/BindsNET/bindsnet/blob/master/bindsnet/network/nodes.py#L509)?
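Spelling out the algebra (my notation, assuming every synapse shares a single time constant $\tau$, with decay factor $\lambda = e^{-\Delta t / \tau}$):

$$
i_{jk}(t + \Delta t) = \lambda\, i_{jk}(t) + w_{jk}\, s_j(t)
\quad\Longrightarrow\quad
I_k(t + \Delta t) = \sum_j i_{jk}(t + \Delta t) = \lambda\, I_k(t) + \big(s(t)^\top W\big)_k
$$

So with one shared $\tau$, the summed per-synapse currents obey exactly the recurrence of a single post-synaptic decaying current; the two would only differ if $\tau$ (or the dynamics) varied per synapse.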
Yes, it could be similar to the decay in LIF neurons.
Currently, we implement synapses without their own dynamics. Inputs to a layer are computed by left-multiplying a vector of pre-synaptic spikes against a synapse weight matrix. However, synapse currents often follow temporal dynamics, such as exponential decay.
We can think of the current implementation as using temporal decay with positive infinity as the time constant (`tau = np.inf`). We can generalize it such that this is kept as the default, but if the user passes in a finite time constant, e.g., 100ms, synaptic currents can be computed as the sum of the present, exponentially decaying current and the result of left-multiplying the pre-synaptic spike vector and the synapse weights.

Having synapse parameters with their own dynamics in SNNs is not uncommon. This will be our first foray into it, however. My biggest concern is minibatch processing: having stateful synapses means that we must duplicate them across the minibatch dimension. Duplicating weight matrices would cause us to run out of memory quickly!
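Concretely, for a finite $\tau$, the per-step update I have in mind would look something like this (my notation; $s(t)$ is the pre-synaptic spike vector, $W$ the weight matrix, $i(t)$ the synaptic current):

$$
i(t + \Delta t) = e^{-\Delta t / \tau}\, i(t) + s(t)^\top W
$$

with `tau = np.inf` kept as a sentinel default that bypasses the stateful term and reproduces the present memoryless behavior, $i = s^\top W$.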