Open mstimberg opened 7 years ago
I was about to start working on this, but now I am having second thoughts about whether allowing two `Synapses` objects to have summed variables for the same post-synaptic variable is a good idea. The reason is that I think it is somewhat inconsistent with our syntax: if you declare
```python
S = Synapses(sources, targets,'''
             dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
             g_post = g_syn : 1 (summed)''')
```
then you say that `g_post` equals the sum over `g_syn`, and therefore I feel it is somewhat contradictory to have another `g_post = ...` line in another `Synapses` object saying the same thing, but expecting them to add up.
So, this would rather argue for raising an error in the case of two `Synapses` objects targeting the same post-synaptic variable. However, I think there is a situation where it does make sense: imagine the post-synaptic group has two types of neurons, e.g. excitatory and inhibitory. You might then well have two `Synapses` objects with summed variables that target non-overlapping subsets of the target population. This situation would be difficult to detect in the general case, and I think a user would be rightly annoyed if it did not work.
I also wondered whether some of the problem is not due to our syntax. E.g. if we wrote `g_post += g_syn : 1` instead of using a `(summed)` flag, having more than one synapse targeting a variable would no longer be contradictory. On the other hand, it would give the wrong impression that the value of `g_post` increases with every time step...
Any thoughts/ideas?
Looking into some old issues, I realized that this is actually a special case of #126, i.e. if we had a general check that a variable is set twice without being read in between, then it would cover this situation. So I wonder whether we should have a special-purpose solution for summed variables before fixing #126? Also note that one of the issues I raised above also applies to #126. If we had a check for "overwrites", it would trigger in the following, legitimate, use case:
```python
group = NeuronGroup(10, 'v : 1')
e = group[:5]
i = group[5:]
e.run_regularly('v = 1')
i.run_regularly('v = 2')
```
From Brian's point of view, `v` is the same variable in both cases, just indexed differently. In this simple example, we could potentially figure out that the two groups do not overlap, but for all kinds of dynamic indexing (reset statements, synapses, etc.) we could only do this with runtime checks. Maybe this would rather be part of something like a "debug" mode, i.e. runtime error checking (#499)?
Given that we have quite a few new features and that it is time for a new release (e.g. to publish Python 3.6 packages), I wonder whether we should not downgrade this issue to "add a remark to the Known Issues section"...
How about `g_post = g_syn + h_syn : 1 (summed)` if you want to have two?
But I was never entirely happy with our summed syntax, so maybe if we could think of a more flexible/better one, that would be even better?
In the short term, adding an error message is not a bad idea?
Oh yeah, and also happy to mark this as known issue and not keep it as high priority.
As long as we do make it raise an error in the situation where something actually goes wrong, I mean!
> How about `g_post = g_syn + h_syn : 1 (summed)` if you want to have two?
I don't quite understand what you mean by that, I'm afraid...
> As long as we do make it raise an error in the situation where something actually goes wrong, I mean!
I don't see any nice solution for this now, but I think I'll implement a quick-and-dirty one (during `Network` preparation, go through all objects and check for multiple `SummedVariableUpdater`s that target the same post-synaptic variable).
What I meant was: at the moment we have `g_post = g_syn : 1 (summed)`, but if people want to have two separate pathways summing onto the same postsynaptic neuron, then we could write that as `g_post = g_syn + h_syn` to sum `g` and `h`? Maybe I misunderstood something.
Ah, no, you can already do that. The thing is that people want to use two `Synapses` objects for some reason, something like this:
```python
type1_synapses = Synapses(sources, targets,
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g_post = g_syn : 1 (summed)''')
type1_synapses.connect(0.5)
type2_synapses = Synapses(sources, targets,
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g_post = g_syn : 1 (summed)''')
type2_synapses.connect(0.5)
```
If two of those synapses target the same target cell, the latter assignment will overwrite the first instead of adding to it (but as I commented earlier, I'm not sure it would be semantically correct to simply add, given the use of `=`).
Ah OK, well in that case I think we should just raise an error if people try to do this, and suggest they use an extra intermediate variable in the postsynaptic neuron group, i.e.
```python
eqs = '''
...
g1 : 1
g2 : 1
g = g1 + g2 : 1
'''
targets = NeuronGroup(N, eqs, ...)
type1_synapses = Synapses(sources, targets,
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g1_post = g_syn : 1 (summed)''')
type1_synapses.connect(0.5)
type2_synapses = Synapses(sources, targets,
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g2_post = g_syn : 1 (summed)''')
type2_synapses.connect(0.5)
```
This would work, right?
Yes, this is what I am currently doing. The problem is that there is no way to detect this from within the `Synapses`/`SummedVariableUpdater` code, so I have to add a somewhat hardcoded check into `Network.before_run`. Not ideal, but it should work.
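A minimal sketch of what such a hardcoded `before_run` check might look like (the `SummedVariableUpdater` stand-in class and the `check_summed_targets` helper below are illustrative, not Brian's actual API):

```python
class SummedVariableUpdater:
    """Stand-in for Brian's updater object; only the attributes the
    check needs are modelled here."""
    def __init__(self, target_group, target_var):
        self.target_group = target_group
        self.target_var = target_var

def check_summed_targets(objects):
    """Raise an error if two summed-variable updaters write to the
    same variable of the same target group."""
    seen = set()
    for obj in objects:
        if isinstance(obj, SummedVariableUpdater):
            key = (id(obj.target_group), obj.target_var)
            if key in seen:
                raise ValueError(
                    f"Variable '{obj.target_var}' is targeted by more "
                    "than one summed variable"
                )
            seen.add(key)
```

Since the check only needs `(target group, variable name)` pairs, it can run once during network preparation without touching generated code.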
On the other hand, I'd like to allow:
```python
targets = NeuronGroup(N, eqs, ...)
type1_synapses = Synapses(sources, targets[:N//2],
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g1_post = g_syn : 1 (summed)''')
type1_synapses.connect(0.5)
type2_synapses = Synapses(sources, targets[N//2:],
                          '''dg_syn/dt = -a*g_syn+b*x*(1-g_syn) : 1 (clock-driven)
                             g2_post = g_syn : 1 (summed)''')
type2_synapses.connect(0.5)
```
i.e., targeting non-overlapping subgroups, since there's no ambiguity in that case.
Hmm, there's no ambiguity (I assume you wanted to write `g_post` for both?), but I think it's a clearer rule to explain that you can simply never have more than one source for a summed variable, rather than trying to work out a way of checking for overlaps. Whether two sources overlap can be a runtime property, meaning we'd need to check it in generated code, i.e. we'd have to add code for every target to check this.
Sorry, copy&paste error, it was supposed to be `g_post` in both cases. I'd say explicit subgroups are ok, because we can check them in Python code. I think we should support this, because if you do something like:

```python
exc = group[:N//2]
inh = group[N//2:]
```

then you think of `exc` and `inh` as separate groups from then on (and we want to encourage doing this, instead of creating two separate `NeuronGroup`s).
I agree that we should not bother with runtime checking, i.e. the above example but with something like `S.connect('j > N/2', p=0.5)` should raise the error.
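For explicit, contiguous subgroups, the Python-level part of this check boils down to an interval-overlap test on the subgroups' index ranges. A minimal sketch (the helper names and the `(start, stop)` tuple representation are hypothetical, not Brian's internal representation):

```python
def ranges_overlap(a, b):
    """True if two half-open index ranges (start, stop) intersect."""
    return max(a[0], b[0]) < min(a[1], b[1])

def check_disjoint(ranges):
    """Raise if any two subgroup index ranges overlap."""
    for i, a in enumerate(ranges):
        for b in ranges[i + 1:]:
            if ranges_overlap(a, b):
                raise ValueError(f"Subgroups {a} and {b} overlap")
```

Dynamically constructed connections (e.g. via a string condition in `connect`) cannot be reduced to such static ranges, which is why they would still have to raise the error.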
I see your point but I worry that it's a bit confusing to say to users that it works in the one case and not the other. The difference is logically clear, but requires a bit of understanding of our code generation pipeline.
> I see your point but I worry that it's a bit confusing to say to users that it works in the one case and not the other.
Not sure, I could see it being confusing either way. If you think of `exc.g_post` as not being the "same variable" as `inh.g_post`, then it would be surprising if it did not work. Especially if someone merges two formerly separate `NeuronGroup`s because we said to merge them if the equations are the same.
[The `_post` suffix does not make much sense here, but I'm sure you get what I mean.]
That's a good point. OK, objection withdrawn!
At the moment, updates of the same post-synaptic variable from two different objects using the `(summed)` mechanism will fail: since the post-synaptic variable is set to 0 at the start of each summing operation, only the second update takes effect. This is very confusing for users, since they do not even get a warning. With our current architecture of independent code objects, solving this problem is trickier than it sounds, since each `SummedVariableUpdater` is unaware of the other updaters. I guess the easiest solution is to separate the setting-to-zero from the summed update and perform it as a separate operation (in the `before_start` time slot) -- this operation would have a specific name, so that for a second `(summed)` variable we would detect that the initialization already exists and would not add another one.
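The deduplication idea above can be sketched as follows. This is a minimal illustration, not Brian's actual implementation: the `Network` class, `add_zeroing_operation`, and modelling the `before_start` slot as a plain dict of named operations are all assumptions for the sake of the example.

```python
class Network:
    """Toy network holding named operations for the before_start slot."""
    def __init__(self):
        self.operations = {}  # name -> callable, run once per time step

    def add_zeroing_operation(self, group_values, var):
        """Schedule 'set var to 0' exactly once per target variable:
        a second (summed) updater finds the name already registered
        and does not add another zeroing step."""
        name = f"zero_{var}"
        if name in self.operations:
            return  # initialization already exists
        def zero():
            group_values[var] = 0.0
        self.operations[name] = zero

    def before_start(self):
        for op in self.operations.values():
            op()
```

With the zeroing factored out like this, each summed update only ever *adds* to the accumulator, so two updaters targeting the same variable no longer overwrite each other.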