Closed: drasmuss closed this issue 7 years ago
Pushed my WIP branch, so the proposed changes can be seen in 89b72d8ea24d44c182c8f111614bb09ecc7e89d3. Also, just noticed I snagged #1234 :tada:
The main downside of this is that it requires a copy of the signal being probed, which in the case of connection weights (one of the main places we do this) can take up quite a bit of memory.
Buuut the inconsistency is probably more of an issue than memory usage, so I say go ahead with a PR for this! As for the unit test, switching to `synapse=None` makes total sense, probably should have done that in the test in the first place.
> So, that would be my proposed solution. However, it is kind of unusual to make a change that breaks a formerly passing unit test, so I thought I'd give people a chance to weigh in.
I think this is fine. It's not uncommon for test cases that go beyond what's documented to end up broken and adjusted. It just serves as a reminder of the deeper consequences of this change, and is evidence of great test coverage.
That said, since whatever you were trying to do ran into this issue (that you then wanted fixed), it might be worth documenting this somewhere (if there is a sensible place), or at least writing an explicit test for the desired behaviour. A test would ensure that it is a part of the specification from here on, and that any changes would break your new test.
Here's a minimal example illustrating the issue:
The difference between `synapse=0` and `synapse=None` is that synapses introduce a one-step time delay (under the hood this is because they are `update` operations, so they run "after" the simulation step). So we would expect both `input_p` and `output_p` to be delayed by one timestep above. But if we run it, we see that `output_p` is delayed as we would expect, while `input_p` shows no delay. We can confirm this by setting `synapse=None` on both probes, in which case `input_p` is unchanged (no delay, but that is what we expect this time), and the delay on `output_p` disappears (again as we would expect).
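To make "delayed by one timestep" concrete: for a ramp signal, a `synapse=0` probe should report each value one sample late, while a `synapse=None` probe reports it immediately (toy arrays, not actual simulator output):

```python
import numpy as np

dt = 0.001
x = np.arange(1, 6) * dt  # target signal at steps 1..5

no_synapse = x                                   # synapse=None: raw values
with_synapse = np.concatenate(([0.0], x[:-1]))   # synapse=0: one-step delay

print(no_synapse)    # [0.001 0.002 0.003 0.004 0.005]
print(with_synapse)  # [0.    0.001 0.002 0.003 0.004]
```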
So basically the issue is that `input_p` should show a one-step time delay when `synapse=0`, but it doesn't.

The reason is that `output_p` is built as a "connection probe", meaning that we copy the value of `neurons.output` to a new signal. This copy happens before the filter `update` is applied (at the end of the timestep). `input_p`, on the other hand, is a "signal probe": it just contains a direct reference to the target signal, which in this case is `neurons.input`. The probe signal and the filter signal are aliased, so when the filter `update` runs at the end of the timestep, it also updates the probed value. So for `output_p` we're seeing the value of the target signal before the filter update, and for `input_p` we're seeing the value of the target signal after the filter update.

It's a fairly easy fix: we just change `builder/probe.py:signal_probe` so that the probe records its own copy of the target signal instead of a direct reference. However, this causes a unit test to fail,
`test_learning_rules.py:test_dt_dependence`. That test was somewhat relying on the above behaviour, because signal probes are more dt-independent when you don't have these one-step time delays. But I don't think the intention of the test was to check the dt independence of signal probes; it was to check the dt independence of learning rules. So if we just set `synapse=None` on that signal probe (removing that part of the test), it passes fine.

So, that would be my proposed solution. However, it is kind of unusual to make a change that breaks a formerly passing unit test, so I thought I'd give people a chance to weigh in. You could perhaps make an argument that the current implementation is working as intended, and that we shouldn't expect signal and connection probes to work the same way. But since that distinction is totally hidden from the frontend user, I think this change is for the better.
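For anyone who wants to poke at the timing argument above, here's a toy mock-up of one simulator timestep (purely illustrative names and semantics, not Nengo's actual internals): a "connection probe" effectively reads the probed value before the end-of-step `update`, while an aliased "signal probe" reads it afterwards:

```python
def simulate(n_steps, aliased):
    """Record a ramp signal, reading either before (copied) or after
    (aliased) the end-of-step filter update."""
    probe_sig = 0.0  # value visible to the probe at the start of each step
    recorded = []
    for step in range(n_steps):
        target = float(step)  # regular ops compute the target signal
        if not aliased:
            # connection probe: the copy happens BEFORE the filter update,
            # so we record the value written at the END of the previous
            # step -> one-step delay
            recorded.append(probe_sig)
        probe_sig = target    # the filter `update` runs at end of step
        if aliased:
            # signal probe: the probe aliases the filter signal, so the
            # update has already overwritten the value we record -> no delay
            recorded.append(probe_sig)
    return recorded

print(simulate(4, aliased=False))  # [0.0, 0.0, 1.0, 2.0]  (delayed)
print(simulate(4, aliased=True))   # [0.0, 1.0, 2.0, 3.0]  (no delay)
```

Fixing `signal_probe` to record a copy makes the signal-probe case behave like the first branch, at the cost of the extra memory mentioned above.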