arbor-sim / arbor

The Arbor multi-compartment neural network simulation library.
https://arbor-sim.org
BSD 3-Clause "New" or "Revised" License

Computing time (for network with synapses) scales unexpectedly with timestep and simulation time #1997

Open jlubo opened 2 years ago

jlubo commented 2 years ago

Maybe it's not a bug, but it's definitely unexpected: network simulations with long timesteps and long simulation times take much longer than runs with short timesteps and short simulation times, even though the network does not produce any spikes. I've found this to occur for network models both with and without synaptic plasticity. In the trivial case where all synapses are removed, everything seems to run as expected.

The behavior can be reproduced with the code provided here (without plasticity). The file set_arbor_env will have to be adapted to the specific Arbor installation. The script build_and_run_net_test should run three simulations with short, medium, and long timesteps, respectively, on a single CPU core. On my local machine, I get the following output:

short:

meter                         time(s)      memory(MB)
-------------------------------------------------------------------------------------------
load-balance                    0.002           0.228
simulation-init                 3.925          29.548
simulation-run                  0.145           0.003
meter-total                     4.073          29.778

medium:

meter                         time(s)      memory(MB)
-------------------------------------------------------------------------------------------
load-balance                    0.002           0.227
simulation-init                 3.669          29.561
simulation-run                 65.701           0.003
meter-total                    69.372          29.791

long:

still running for hours...

I've been using Arbor version 0.6.1-dev, state of commit 8af6bd2 (including SDE computation as of commit 5d141aa, but don't know if that makes any difference for the given example).
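
For reference, a self-contained harness of the same shape looks roughly like the sketch below. It is only a sketch: the single-compartment cell, ring connectivity, zero synaptic weight, and all parameters are placeholders rather than the model from the linked repository, and the API names follow the Arbor 0.6.x Python bindings (some, e.g. spike_detector vs. threshold_detector, were renamed in later versions).

import arbor

class net_recipe(arbor.recipe):
    def __init__(self, n, delay):
        arbor.recipe.__init__(self)
        self.n, self.delay = n, delay
        self.props = arbor.neuron_cable_properties()

    def num_cells(self):
        return self.n

    def cell_kind(self, gid):
        return arbor.cell_kind.cable

    def cell_description(self, gid):
        # Placeholder single-segment cell with one synapse and one detector.
        tree = arbor.segment_tree()
        tree.append(arbor.mnpos, arbor.mpoint(-3, 0, 0, 3),
                    arbor.mpoint(3, 0, 0, 3), tag=1)
        labels = arbor.label_dict({'soma': '(tag 1)', 'mid': '(location 0 0.5)'})
        decor = arbor.decor()
        decor.paint('"soma"', arbor.density('hh'))
        decor.place('"mid"', arbor.synapse('expsyn'), 'syn')
        decor.place('"mid"', arbor.spike_detector(-10), 'det')
        return arbor.cable_cell(tree, labels, decor)

    def connections_on(self, gid):
        # Ring connectivity with zero weight, so no spikes are produced;
        # the delay alone is what matters for the timing behavior.
        src = (gid + 1) % self.n
        return [arbor.connection((src, 'det'), 'syn', 0.0, self.delay)]

    def global_properties(self, kind):
        return self.props

ctx = arbor.context()
meters = arbor.meter_manager()
meters.start(ctx)
rec = net_recipe(n=100, delay=3.0)         # 3 ms delay, as in the example
dd = arbor.partition_load_balance(rec, ctx)
meters.checkpoint('load-balance', ctx)
sim = arbor.simulation(rec, dd, ctx)
meters.checkpoint('simulation-init', ctx)
sim.run(1000.0, 0.1)                       # t_end, dt in ms (illustrative)
meters.checkpoint('simulation-run', ctx)
print(arbor.meter_report(meters, ctx))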

schmitts commented 2 years ago

The line at https://github.com/arbor-sim/arbor/blob/master/arbor/simulation.cpp#L304 sets the epoch interval according to the minimal connection delay (which is a constant 3 ms in the example at hand).
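
To see why this matters, a back-of-envelope sketch (the simulated duration and the dt values below are illustrative, not taken from the script): with a constant 3 ms delay the epoch length is pinned at 1.5 ms, so increasing dt stops paying off once dt exceeds the epoch length.

min_delay = 3.0                  # ms, constant in the example at hand
epoch = min_delay / 2            # ms, t_interval_ in simulation.cpp
T = 1.0e6                        # ms of simulated time (illustrative)
for dt in (0.01, 0.1, 10.0):     # ms, requested timesteps (illustrative)
    steps = max(1, round(epoch / dt))  # integration steps per epoch
    print(f"dt = {dt:>6} ms: {T / epoch:.0f} epochs, ~{steps} step(s) each")
# All three runs execute ~666667 epochs; once dt > 1.5 ms the fixed
# per-epoch overhead dominates, so longer timesteps no longer help.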

schmitts commented 2 years ago

All we need to change in the example supplied by @jlubo is to set the synaptic delay to at least the timestep dt:

d0 = max(self.dt, self.syn_config["t_ax_delay"])
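
In recipe terms, the clamp would sit where the connection delay is handed to Arbor, along these lines (the labels, weight, and ring connectivity below are placeholders, not the names from the linked example):

def connections_on(self, gid):
    # Clamp: never let the network's minimum delay drop below dt,
    # since the epoch length is derived from it.
    d0 = max(self.dt, self.syn_config["t_ax_delay"])
    src = (gid + 1) % self.num_cells()  # placeholder connectivity
    return [arbor.connection((src, "det"), "syn", 1.0, d0)]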
jlubo commented 2 years ago

For the record: unfortunately, the suggestion by @schmitts doesn't solve the whole issue. We've found that there might be a general problem with long timesteps:

1. Arbor uses interpolation to compute exact spike times, even for long timesteps;
2. the stochastic solver is based on the Euler-Maruyama method, which does not converge if timesteps become too large.

We will now try to switch mechanisms so that long timesteps are computed with the sparse solver, and at the same time suppress spike computation (which is not necessary in that case).
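
To illustrate point 2 with a generic numerical sketch (not Arbor code): Euler-Maruyama applied to an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW is only stable for dt < 2/theta; beyond that the iterates grow without bound, so the method cannot converge for large timesteps.

import numpy as np

def em_ou(theta, sigma, dt, t_end, x0=1.0, seed=0):
    # Explicit Euler-Maruyama update: X[n+1] = X[n]*(1 - theta*dt)
    # + sigma*sqrt(dt)*xi. Stable in mean only if |1 - theta*dt| < 1.
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(int(t_end / dt)):
        x = x * (1.0 - theta * dt) + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

theta, sigma = 10.0, 0.1                           # stability limit: dt < 0.2
print(em_ou(theta, sigma, dt=0.01, t_end=50.0))    # fine: stays near 0
print(em_ou(theta, sigma, dt=0.5,  t_end=50.0))    # diverges: |1-5|^n blows up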

thorstenhater commented 1 year ago

So, has this changed on the current master w/ fixed-dt?

jlubo commented 1 year ago

I could solve the (additional) problem mentioned in my last post by using the workaround suggested above by @schmitts and by exchanging the mechanism/recipe so that the stochastic solver is used for short timesteps and the sparse solver for long timesteps.
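
Schematically, the exchange amounts to choosing the synapse mechanism, and with it the solver, from the timestep when the recipe is built. The mechanism names and the threshold below are hypothetical, standing in for whatever the actual catalogue provides:

def pick_synapse_mechanism(dt, dt_threshold=1.0):
    # Hypothetical names: "syn_stochastic" stands for the Euler-Maruyama
    # (SDE) variant, "syn_sparse" for the deterministic sparse-solver one.
    # Used when decorating, e.g.:
    #   decor.place('"mid"', arbor.synapse(pick_synapse_mechanism(dt)), 'syn')
    return "syn_stochastic" if dt < dt_threshold else "syn_sparse"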

Regarding the general issue: it hasn't changed with the fixed-dt feature, presumably because the update steps still depend on the synaptic delay via t_interval_ = min_delay()/2;.
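
Concretely (using the 3 ms delay from the example; the durations are illustrative): the epoch count, and hence the fixed per-epoch cost, grows linearly with the simulated time and is untouched by dt.

min_delay = 3.0             # ms, as in the example network
t_interval = min_delay / 2  # ms, mirrors t_interval_ = min_delay()/2
for t_sim in (1e3, 1e5, 1e7):   # ms, illustrative simulated durations
    print(f"t_sim = {t_sim:>10.0f} ms -> {int(t_sim / t_interval):>8} epochs")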