Closed pgleeson closed 7 years ago
@pgleeson - ok, fixed a small bug when distributing cells of cellList and it works ok now -- let me know; and thanks for finding the bug
@salvadord, great, thanks. That works for me locally too.
Now another minor problem: simulations in NML2 using just chemical synapses used to produce identical behaviour when run in serial and parallel mode when I first updated for Random123 usage, but now there are small differences.
Code is here https://github.com/OpenSourceBrain/NetPyNEShowcase/blob/master/NetPyNE/test/LEMS_SpikingNet_netpyne.py.
This is the first trace produced in Sim_SpikingNet.pop_post.v.dat (cell 0 in the postsynaptic population), run with 1, 2 and 4 processors:
close up:
Presynaptic cells seem to be identical
@pgleeson - ok, took a while but I figured out what the issue is, although I'm not sure exactly how to solve it:
The problem is that the poissonFiringSyns are stochastic, so they require an associated h.Random -- otherwise the output differs (which is what was happening here).
However, in cell.py the h.Random is only created if params['originalFormat']=='NeuroML2_stochastic_input'
(https://github.com/Neurosim-lab/netpyne/blob/development/netpyne/cell.py#L1006), which only happens if this condition in neuromlFuncs.py is met: if isinstance(input_comp_obj, neuroml.PoissonFiringSynapse)
(https://github.com/Neurosim-lab/netpyne/blob/development/netpyne/neuromlFuncs.py#L1033) -- and that condition seems to never be met.
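To make the intended chain of events concrete, here is a minimal plain-Python sketch of the dispatch being described (the function names and the h.Random placeholder are illustrative, not the actual netpyne source; only the 'NeuroML2_stochastic_input' tag and key name come from the issue):

```python
# Illustrative sketch of the intended flow: neuromlFuncs.py tags the
# input as stochastic, and cell.py only creates an h.Random when it
# sees that tag. Names other than the tag/key are hypothetical.

def classify_input(input_comp_obj):
    # In netpyne this is an isinstance(input_comp_obj,
    # neuroml.PoissonFiringSynapse) check; a class-name check stands
    # in here so the sketch runs without libNeuroML installed.
    if type(input_comp_obj).__name__ == 'PoissonFiringSynapse':
        return 'NeuroML2_stochastic_input'
    return 'NeuroML2'

def add_stim(params):
    # Mimics cell.py (~L1006): only the stochastic format gets an
    # associated random generator; otherwise serial vs parallel runs
    # can draw different random streams and diverge.
    stim = {'label': params.get('label')}
    if params.get('originalFormat') == 'NeuroML2_stochastic_input':
        stim['hNeuroML2_stochastic_input_rand'] = 'h.Random()'  # placeholder
    return stim
```

If the isinstance condition is never met, classify_input never returns the stochastic tag, so add_stim never attaches the random generator -- which matches the divergence reported above.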
Also, there seem to be 2 small bugs in https://github.com/Neurosim-lab/netpyne/blob/development/netpyne/cell.py#L1013:
1) self.stims[-1] is accessed before the stim is appended (line 1017)
2) the ['NeuroML2_stochastic_input_rand'] key should start with an 'h' so that it is removed before gathering (otherwise it will crash)
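A plain-Python sketch of the two fixes (the key name follows the issue; the class and helper around it are simplified stand-ins, not the real cell.py code):

```python
# Illustrative sketch of the two fixes described above.

class Cell:
    def __init__(self):
        self.stims = []

    def add_stochastic_stim(self, params, rand):
        # Fix 1: append the stim dict *before* indexing self.stims[-1];
        # the buggy version indexed self.stims[-1] prior to the append,
        # touching the wrong stim (or an empty list).
        self.stims.append(dict(params))
        # Fix 2: prefix the key with 'h' so it is stripped along with the
        # other HOC object references before gathering across MPI nodes
        # (un-picklable h.Random objects would otherwise crash the gather).
        self.stims[-1]['hNeuroML2_stochastic_input_rand'] = rand

def strip_hoc_objects(stim):
    # Simplified stand-in for netpyne's pre-gather cleanup: drop every
    # key starting with 'h', since those hold HOC object references.
    return {k: v for k, v in stim.items() if not k.startswith('h')}
```

With the 'h' prefix, the random generator survives locally for the simulation but never reaches the MPI gather step.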
Both of these can easily be fixed by replacing with:
params['hNeuroML2_stochastic_input_rand'] = rand
(I made this change locally to test it worked but didn't commit)
I also enforced format = 'NeuroML2_stochastic_input'
in neuromlFuncs.py to make sure an h.Random was associated with the stim.
With the above changes the output was identical when using different numbers of MPI nodes. However, I didn't commit since I wasn't sure exactly how you wanted to handle the 'NeuroML2_stochastic_input' in neuromlFuncs.py
Let me know if this makes sense and if you need me to make any changes. thx
Thanks for looking into that @salvadord. I've made some updates based on your suggestions in https://github.com/Neurosim-lab/netpyne/commit/c9d00f954c22c150f962ff4616f6f57d2a585631, but I think this wasn't really the source of the error, I'm still seeing the small differences myself...
It shouldn't have fixed the problem anyway, because the Randoms were already being set up/initialised correctly: the presynaptic population (receiving the spiking inputs) was identical on 1, 2 and 4 hosts; it was just the slight difference in the postsynaptic pop (just receiving syn input from the pre pop).
I'll look into it more and see if I can narrow down where the issue is...
Ah sorry, I thought we were still looking at SimpleNet; will check SpikingNet.
However, I think the above still does apply to SimpleNet: I pulled your latest changes from neuroml_export and the h.Random for the syn was still not being created, so I got different output for 1 vs 2 cores -- does it work ok for you?
Just checked LEMS_SpikingNet and the presyn pops in 1 vs 2 hosts are not identical for me
@pgleeson, FYI I was able to reproduce the SpikingNet issue with just 2 stims, 2 presyn cells and 1 postsyn cell; I'm implementing the net directly in netpyne (without neuroml export) to check what is causing the discrepancy.
@salvadord thanks! I'll try to get more time to look into this more from my side too this week.
@pgleeson - I've been able to reproduce the issue with a minimal example in netpyne, without any neuroml, using standard hh cells and Exp2Syn. But after many hours I still can't figure out what is causing it. My next step is implementing the same thing directly in Neuron.
Thanks for spending some time to look into this @salvadord. I'm reasonably sure I was able to run some networks identically in serial and parallel mode before the Random123 refactor, but didn't test it very extensively.
I'm seeing differences with this example https://github.com/OpenSourceBrain/NetPyNEShowcase/blob/master/NeuroML2/times/LEMS_SpikingNet_netpyne.py between serial and np = 2 (the pre pop has 3 cells, the post pop has 2, and there are 2 conns: pre0->post0, pre1->post1)
But when I remove the 3rd cell in pre pop the difference vanishes...
@pgleeson - I tested a bunch of the tutorials and they produce identical output with 1 and 2 cores, even with the new Random123. And yeah, I started from the LEMS_SpikingNet_netpyne.py example, rewrote it in netpyne in a way that involved no NeuroML, and simplified it step by step until I was able to reproduce the issue with a minimal model (just 2 pre cells and 1 post cell) that used standard HH neurons and Exp2Syn. I've pushed the code to the sandbox: https://github.com/Neurosim-lab/netpyne/blob/development/examples/sandbox/sandbox.py Will look into it more this week.
@pgleeson - Finally figured it out!!
So it turns out that gid_connect modifies the threshold of the source cell, but only if the cell is on the same node. This leads to different outputs when running on 1 vs 2 cores.
You can see a minimal example using just Neuron in the sandbox: https://github.com/Neurosim-lab/netpyne/blob/development/examples/sandbox/sandbox.py -- notice the threshold of cell with gid=1 is different if you run on 1 vs 2 cores.
A related issue was discussed in this forum post: https://www.neuron.yale.edu/phpBB/viewtopic.php?f=31&t=2355. It seems a possible solution would be to never use the threshold of the postsyn cell netcon (via pc.gid_connect) and instead use the one of the presyn cell netcon (via pc.cell). I discussed with Robert, and emailed Mike Hines about it.
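The mechanism behind the discrepancy can be modelled with a toy sketch (plain Python, no NEURON; the classes below only mimic the semantics described above, they are not NEURON's API): in NEURON, spike detection happens once per source cell, so a NetCon created locally via pc.gid_connect can rewrite that shared threshold, while on a remote node the source object is untouched.

```python
# Toy model of the shared spike-detection threshold behind the bug.
# Not NEURON code: Source/NetCon here just mimic the semantics.

class Source:
    def __init__(self, gid, threshold=10.0):
        self.gid = gid
        self.threshold = threshold  # one spike detector per source, shared

class NetCon:
    def __init__(self, source, threshold=None):
        self.source = source
        if threshold is not None:
            # Mirrors the reported behaviour: setting the NetCon
            # threshold rewrites the shared detector threshold of a
            # *local* source cell.
            source.threshold = threshold

# Serial run (1 core): source and target are on the same node, so the
# postsyn connection overwrites the source's detection threshold.
src = Source(gid=1, threshold=10.0)
NetCon(src, threshold=0.0)
# src.threshold is now 0.0, so the cell's spikes are detected at a
# different voltage than on 2 cores, where the source is remote and
# its threshold stays 10.0 -- hence the serial-vs-parallel difference.
```

This matches the observation in the sandbox example that the threshold of the cell with gid=1 differs between 1-core and 2-core runs.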
For now I changed netpyne so the threshold in conns is not used; instead you can provide a 'threshold' param in cellParams, e.g. 'secs': {'soma': {'threshold': 5.0}}.
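In a full cellParams entry, the new per-section threshold would sit roughly like this (a sketch: the 'threshold' key and value come from the comment above, while the cell label, geometry and mechanism values are purely illustrative):

```python
# Sketch of a netpyne cellParams dict using the new per-section
# 'threshold' (geom/mechs values are illustrative, not from the issue):
cellParams = {
    'PYR': {
        'secs': {
            'soma': {
                'threshold': 5.0,              # spike-detection threshold (mV)
                'geom': {'L': 18.8, 'diam': 18.8},
                'mechs': {'hh': {}},
            }
        }
    }
}
```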
I also made the previously mentioned minor change in neuromlFuncs.py so that format = 'NeuroML2_stochastic_input'
is enforced and an h.Random is associated with the stim.
After these 2 changes the LEMS_SpikingNet example produces identical output with different numbers of cores (finally!).
I'll wait to see if you want to introduce any changes in neuromlFuncs.py and then will release version Saturday night or Sunday morning.
Great news! Have made some updates to use threshold on the sections, and opened a PR. Please include in the release :-)
I've also put back the check on 'NeuroML2_stochastic_input'; there are two types of NML 'cells', one is a spike source and one is a cell with v and they have to be treated slightly differently. All my tests are passing now. This change shouldn't affect anything outside the NML world...
If I try to run the model here: https://github.com/OpenSourceBrain/NetPyNEShowcase/blob/master/NetPyNE/test/LEMS_SimpleNet_netpyne.py
in serial or with np1:
it works fine. But with
it throws an error (using my branch):
It looks like the cells aren't being properly recorded: cell_1 (in pop1) is going into Volts_file__RS_pop_RS_pop_0_soma_v for pop0, even though the following conditions are specified: