@heplesser @jougs @Silmathoron @DimitriPlotnikov
I assume that the buffers you mention here are the RingBuffers for spikes and currents in the models. The natural unit differs between model types: i) for _psc_ models, it is pA; ii) for _psc_delta models, it is mV; iii) for _cond_ models, it is nS. Just in case, I'm also referencing the post that summarizes a long discussion on how we might simplify the interaction between spikes and state variables, just so that the unit discussion does take into account the way the buffer might be used.
Because of this, I would be in favour of dimensionless buffers where the unit is added by the convolution function depending on the target variable that will be affected (this includes the possible existence of a normalization factor).
@heplesser NESTML is a modeling language for NEST and, therefore, I would appreciate the NEST interpretation. Current buffers are at the moment fixed to pA. The idea is to let the modeler state the unit on the buffer, e.g. for an iaf_cond_alpha-style neuron:
neuron iaf_cond_alpha_nestml:
  equations:
    # g_in, g_ex model an alpha shape
    function I_syn_exc pA = convolve(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = convolve(g_in, spikeInh) * ( V_m - E_in )
  end
  input:
    spikeInh nS <- inhibitory spike
    spikeExc nS <- excitatory spike
  end
end
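Note that the units in this example are consistent: convolve(g_ex, spikeExc) carries nS and the driving force (V_m - E_ex) carries mV, and

  nS * mV = 1e-9 S * 1e-3 V = 1e-12 A = pA

which matches the declared type of I_syn_exc.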
@Silmathoron: I contributed in #330
@DimitriPlotnikov why are your spike buffers in nS in the example?
@Silmathoron it is a cond-model. See iii) in @heplesser's comment.
Yeah, but given the method you propose for the synaptic current (using convolve with a shape which should possess a dimension -- cf. #330), I think the spike buffer should be dimensionless, corresponding to HEP's i). I think any other choice would lead to non-general solutions where we would have to deal with special cases.
Therefore, I propose spike buffers as pure spike storage which are coupled to the other variables through the convolve function, which is in charge of normalizing them and giving them the right unit.
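Roughly, what I have in mind would look like the sketch below; the unit annotation on the shape is hypothetical syntax, since I don't think shapes can carry a unit at the moment:

  input:
    spikeExc <- excitatory spike   # pure spike storage, no unit
  end

  equations:
    # hypothetical: the shape carries the unit and the normalization,
    # so convolve(g_ex, spikeExc) already comes out in nS
    shape g_ex nS = nS * (e/tau_syn_ex) * t * exp(-t/tau_syn_ex)
    function I_syn_exc pA = convolve(g_ex, spikeExc) * ( V_m - E_ex )
  end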
@Silmathoron What do you mean by normalization factors? Initial values? @heplesser, what is your opinion?
We had a discussion about this internally on Tuesday and came to the conclusion that the shape should actually be dimensionless and the buffer should have the unit. There are a number of reasons why we think this is the right way to go, among them that shapes can then be re-used across different models and that the models can be auto-documented more easily.
@Silmathoron: I think that this is the most general solution we could have. If we did it the other way around, there would be no straightforward way for the convolve function to know the right unit to add in the context in which it is used.
@DimitriPlotnikov normalization for alpha shapes, for example (so that the synaptic weight is actually the maximum value of the PSC).
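Concretely, for an alpha-shaped PSC the factor e/tau_syn does exactly that:

  I(t) = w * e/tau_syn * t * exp(-t/tau_syn)

has its maximum at t = tau_syn, where I(tau_syn) = w * e/tau_syn * tau_syn * exp(-1) = w, i.e. the synaptic weight w is the peak value of the PSC.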
@jougs You mean that the buffer gathers the weighted inputs from different synapses that arrive on a single neuron, right? But the synaptic weight is just a factor and is interpreted in terms of a dimensioned PSC only once the model is known, so I'm not sure I understand why this would be a problem. I do understand the potential issue with using the shapes across models, though, even if I think we should be able to find a solution to that...
And I didn't get the last auto-documentation part :s
I must say I'm still unsure about having the same object (a spike buffer) get different units depending on the model... are you really sure there is no other way? Could we organize a small meeting to discuss this in detail? It's a little complicated to explain everything on GitHub, so for now I don't really understand what the problems you mentioned are...
So, for example, how would you handle the dimensions in the model @DimitriPlotnikov started to write in #330?
Proposal for the discussion:
The first pair shows the introduction of types on the buffers and the introduction of the initial_values block, which captures variables that are modeled by an ODE:
neuron iaf_psc_alpha_old:
  state:
    V_m mV
  end
  equations:
    shape I_shape_in = (e/tau_syn_in) * t * exp(-1/tau_syn_in*t) # shape is "unitless"
    shape I_shape_ex = (e/tau_syn_ex) * t * exp(-1/tau_syn_ex*t)
    function I pA = curr_sum(I_shape_in, in_spikes) + curr_sum(I_shape_ex, ex_spikes)
    V_m' = -1/Tau * (V_m - E_L) + 1/C_m * I
  end
  input:
    ex_spikes <- excitatory spike
    in_spikes <- inhibitory spike
    currents <- current
  end
  output: spike
  update:
    # spikes are applied automatically. computed initial values are used.
  end
end
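(For reference, curr_sum, like convolve below, stands, if I state it correctly, for the weighted sum of shifted shape kernels over all spikes in the buffer:

  I(t) = sum over spikes k of  w_k * I_shape(t - t_k)

where t_k and w_k are the arrival times and weights stored in the spike buffer.)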
neuron iaf_psc_alpha_new:
  initial_values:
    V_m = 0mV # <- CHANGE
  end
  equations:
    shape I_shape_in = (e/tau_syn_in) * t * exp(-1/tau_syn_in*t) # shape is "unitless"
    shape I_shape_ex = (e/tau_syn_ex) * t * exp(-1/tau_syn_ex*t)
    function I pA = convolve(I_shape_in, in_spikes) + convolve(I_shape_ex, ex_spikes) # <- CHANGE
    V_m' = -1/Tau * (V_m - E_L) + 1/C_m * I
  end
  input:
    ex_spikes pA <- excitatory spike # <- CHANGE
    in_spikes pA <- inhibitory spike # <- CHANGE
    currents <- current
  end
  output: spike
  update:
    # spikes are applied automatically. computed initial values are used.
  end
end
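With the spike buffers typed as pA and the shapes dimensionless, the unit bookkeeping in the new model works out as follows (assuming C_m is declared in pF, which is not shown in this excerpt):

  [convolve(I_shape_ex, ex_spikes)] = 1 * pA = pA
  [1/C_m * I] = pA / pF = mV / ms

which is exactly the unit required for V_m'.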
New syntax for the shapes as ODEs
neuron iaf_cond_alpha_implicit_old:
  state:
    V_m mV = E_L ## membrane potential
    g_in nS = 0nS ## inputs from the inh conductance
    g_ex nS = 0nS ## inputs from the exc conductance
    g_in' nS/ms = 0 ## inputs from the inh conductance
    g_ex' nS/ms = 0 ## inputs from the exc conductance
  end
  equations:
    g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in'
    g_in' = g_in'
    g_ex'' = (-1)/(tau_syn_ex)**(2)*g_ex+(-2)/tau_syn_ex*g_ex'
    g_ex' = g_ex'
    function I_syn_exc pA = cond_sum(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = cond_sum(g_in, spikeInh) * ( V_m - E_in )
    function I_leak pA = g_L * ( V_m - E_L )
    V_m' = ( -I_leak - I_syn_exc - I_syn_inh + currents + I_e ) / C_m
  end
  internals:
    PSConInit_E nS/ms = nS * e / tau_syn_ex
    PSConInit_I nS/ms = nS * e / tau_syn_in
  end
  input:
    spikeInh <- inhibitory spike
    spikeExc <- excitatory spike
    currents <- current
  end
  output: spike
  update:
    # ...
    g_ex' += spikeExc * PSConInit_E
    g_in' += spikeInh * PSConInit_I
  end
end
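(For context, the PSConInit_E and PSConInit_I factors implement the normalization discussed above: an incoming spike of weight w increments g' by w * e/tau_syn, and solving the shape ODE with g(0) = 0 then gives

  g(t) = w * e/tau_syn * t * exp(-t/tau_syn)

which peaks at the value w at t = tau_syn.)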
neuron iaf_cond_alpha_implicit_new:
  initial_values: # <- CHANGE
    V_m mV = E_L
    g_in real = 0 # <- CHANGE
    g_ex real = 0 # <- CHANGE
    g_in' 1/ms = e / tau_syn_in # <- CHANGE
    g_ex' 1/ms = e / tau_syn_ex # <- CHANGE
  end
  equations:
    shape g_in'' = (-1)/(tau_syn_in)**(2)*g_in+(-2)/tau_syn_in*g_in' # <- CHANGE
    shape g_in' = g_in' # <- CHANGE
    shape g_ex'' = (-1)/(tau_syn_ex)**(2)*g_ex+(-2)/tau_syn_ex*g_ex' # <- CHANGE
    shape g_ex' = g_ex' # <- CHANGE
    function I_syn_exc pA = convolve(g_ex, spikeExc) * ( V_m - E_ex )
    function I_syn_inh pA = convolve(g_in, spikeInh) * ( V_m - E_in )
    function I_leak pA = g_L * ( V_m - E_L )
    V_m' = ( -I_leak - I_syn_exc - I_syn_inh + currents + I_e ) / C_m
  end
  input:
    spikeInh nS <- inhibitory spike
    spikeExc nS <- excitatory spike
    currents <- current
  end
  output: spike
  update:
    # variables responsible for modeling of `convolve(g_ex, spikeExc)`
    # are updated automatically here
  end
end
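Note that the new initial value g_ex' = e / tau_syn_ex is just the dimensionless counterpart of PSConInit_E from the old model: the nS has moved into the spike buffer. If I understand the automatic update correctly, a spike of weight w (now carried in nS by spikeExc) still effectively increments the derivative of the convolved conductance by

  w * e / tau_syn_ex

i.e. exactly what the explicit line g_ex' += spikeExc * PSConInit_E did in the old model.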
@PTraeder Any progress?
It is going rather more slowly than I expected. I have to fight the type system at every step.
And sorry for the long response time, the notification was buried in MC spam.
The problems I ran into stemmed from my wanting to adapt the type system to still make a distinction between the BUFFER, PRIMITIVE and UNIT TypeSymbol kinds. This proved to be painful since, among other problems, the type system is often queried with nothing but the name of the type, resulting in an ambiguity between a type such as ("integer", PRIMITIVE) and ("integer", BUFFER).
I have since reworked my approach to completely get rid of the BUFFER kind for TypeSymbol objects.
Since there is one instance where we have to use the TypeSymbol to infer whether it denotes a buffer or a different type (namely when generating NEST code), the TypeSymbol class has been extended with a boolean field denoting its origin in a buffer declaration, if applicable.
Sounds reasonable.
I support @jougs's suggestion above and find the _new examples by @DimitriPlotnikov much cleaner than the _old ones.
Just one question: I am a bit confused by g_ex' = g_ex' and similar, as both sides of the equation are identical.
@heplesser Equations like g_ex' = g_ex' are needed for the integration with GSL. However, these equations can be inferred automatically and will be inferred in the future.
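To spell it out: GSL only integrates first-order systems, so the second-order shape ODE is rewritten with y1 = g_ex and y2 = g_ex':

  y1' = y2
  y2' = (-1)/(tau_syn_ex)**2 * y1 + (-2)/tau_syn_ex * y2

and the seemingly trivial line g_ex' = g_ex' is just the first equation (y1' = y2) written in NESTML notation.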
Done in #385.
We are planning to add typing to buffers. Current buffers are intended to carry the type pA. Concerning spike buffers, we would like to have feedback on the types you consider probable/useful. For example, would you like to see just a specific subset of units? All of the "native" NEST units? Compound units such as mV/ms? Primitive types? Additionally, your ideas about possible syntax for these types would be welcome.
Currently buffers are defined thusly:
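For instance, this is the form used in the models above:

  input:
    spikeInh <- inhibitory spike
    spikeExc <- excitatory spike
    currents <- current
  end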
My proposition, in accordance with suggestions made by @heplesser, would be to simply add the respective type after the buffer name, similarly to variable definitions.
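E.g., with a hypothetical buffer name:

  input:
    myBuffer ms <- spike
  end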
for a buffer of type ms.