brian-team / brian2genn

Brian 2 frontend to the GeNN simulator
http://brian2genn.readthedocs.io/
GNU General Public License v2.0

Performance drop in `master` and `benchmarking` branch for networks with postsynaptic effects #64

Closed denisalevi closed 6 years ago

denisalevi commented 6 years ago

I have recently updated brian2CUDA to be compatible with brian2 master, and in order to compare performance against brian2GeNN I updated brian2GeNN to the benchmarking branch and GeNN to tag 3.0.0, see brian-team/brian2cuda#144 (before, I used the brian2GeNN 1.0-benchmark branch with GeNN 2.2.2). I am observing a drastic performance decrease (up to a factor of 5) on some benchmarks between the two versions. This seems to apply only to models where presynaptic spikes modify postsynaptic variables. For models which only affect synaptic variables, the performance seems unaffected. And even the standard event-driven STDP example looks ok to me (even though it does apply effects to postsynaptic variables).
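To make the distinction concrete, here is a hypothetical helper (not part of brian2) that flags whether an `on_pre` statement writes to a postsynaptic variable. It is a minimal sketch that only checks Brian2's explicit `_post` suffix; Brian2's unsuffixed shorthand (e.g. `v += w` meaning `v_post += w`) would need the model's namespace to resolve and is ignored here:

```python
# Hypothetical helper (not part of brian2): does an on_pre statement
# modify postsynaptic state? Only the explicit `_post` suffix is
# checked; unsuffixed shorthand names are deliberately ignored.
def modifies_postsynaptic(on_pre):
    targets = []
    for stmt in on_pre.split(';'):
        stmt = stmt.strip()
        if not stmt:
            continue
        # the assignment target is everything left of the (augmented) '='
        targets.append(stmt.split('+=')[0].split('=')[0].strip())
    return any(t.endswith('_post') for t in targets)

print(modifies_postsynaptic('v_post += w'))    # → True (postsynaptic effect)
print(modifies_postsynaptic('Apre += dApre'))  # → False (synaptic-only)
```

Models in the first category exercise GeNN's postsynaptic-update path (atomic adds into `inSyn` arrays), which is where the slowdown shows up.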

Below is a speed test result from our Brunel Hakim benchmark (which is basically the brian2 example). The x axis shows the number of neurons, the y axis the time in s (only for the code objects which are run every time step; no compilation, synapse creation, etc.). The versions used for the two measurements are:

green plot:
- brian2genn: brian-team/brian2genn@8c6da48b3ae (currently tip of benchmarking branch, same results with current master branch)
- genn: genn-team/genn@3b794457b8195 (tag 3.0.0)
- brian2: brian-team/brian2@320a5f6b3 (currently tip of master)

blue plot:
- brian2genn: 0553cafe (currently tip of 1.0-benchmark branch)
- genn: genn-team/genn@e01c85f183 (tag 2.2.2)
- brian2: brian-team/brian2@fadc6a0ae (somewhere after version 2.1)

(figure: Brunel Hakim benchmark timings, green vs. blue plot)

I observe the same performance drop for the CUBA and COBAHH examples.

brian2GeNN doesn't work for me with the newer GeNN versions. With both 3.1.0 and 3.1.1 I get the double-definition-of-`isinf` error:

```
.../GeNNworkspace/magicnetwork_model_CODE/support_code.h(40): error: more than one instance of
overloaded function "isinf" matches the argument list:
            function "isinf(double)"
            function "std::isinf(double)"
            argument types are: (double)
```

The solution you implemented in PR #61 did not work for me in brian2CUDA either and produces the same error (see brian-team/brian2cuda#125).

So currently the only version that works with brian2 master (which brian2CUDA works with now) has this performance problem.

tnowotny commented 6 years ago

I assume @mstimberg will get this anyway, but can I also pull in @neworderofjamie for an opinion on whether anything might have changed on the GeNN side, and if so, what? I am currently traveling and will not be able to do detective work atm.

neworderofjamie commented 6 years ago

GeNN's changed a lot in that time and, I imagine, so has Brian2GeNN, so it's very hard to tell. A diff of the generated code can be a good way to tell whether GeNN has started doing something different, though.


thesamovar commented 6 years ago

I think what you're looking for is https://git-scm.com/docs/git-bisect. :)
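For anyone unfamiliar: `git bisect` is binary search over the commit history. A minimal pure-Python sketch of the idea (the commit list and badness predicate are made up; in practice `git bisect run` drives this with your benchmark script):

```python
# Sketch of what `git bisect` automates: binary search for the first
# "bad" commit, assuming history looks like good..good..bad..bad and
# at least one commit is bad.
def first_bad_commit(commits, is_bad):
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # regression is at mid or earlier
        else:
            lo = mid + 1      # regression came after mid
    return commits[lo]

# Hypothetical history where the slowdown appeared at commit "e":
history = list("abcdefgh")
print(first_bad_commit(history, lambda c: c >= "e"))  # → e
```

This needs only O(log n) benchmark runs, which matters when each "is this commit bad?" check means recompiling and rerunning a speed test.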

denisalevi commented 6 years ago

Alright, I found the problem... I almost couldn't reproduce my results from above... what a pain... looks like it was me after all, nothing in GeNN or brian2GeNN.

I got rid of the `lastupdate` variable when it's not needed, to save memory (see PR brian-team/brian2#979). Since those changes are not merged yet (and might not get merged, as `lastupdate` might be removed entirely for non-event-driven synapses, see the PR discussion), I just add them through a diff file, which I apply when I check out a new brian2 version. But that diff file is tracked by our brian2cuda repo, and for the blue plot above, those changes were not included in the diff file.

I ran the brian2 COBAHH example, modified to N=1e5 neurons, with `set_device('genn')`, no monitors, and `print device._last_run_time` at the end of the script (this needs the changes from my PR #65 to work). With the `lastupdate` variable, `_last_run_time` is ~2 s; without it, it's ~10 s.

Here is a diff of the generated code. My guess would be that it has to do with the missing `convert_dynamic_arrays_2_sparse_synapses` calls?
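The diff below was produced with `diff -ur`; the same comparison can be scripted from Python with the standard library, which is handy when rerunning benchmarks. A sketch (the workspace directory names in the usage comment are the ones from this issue, but any two trees work):

```python
# Sketch: recursively diff two generated-code trees (like `diff -ur`),
# using only the standard library.
import difflib
import os

def diff_trees(dir_a, dir_b, skip_suffixes=('.o',)):
    """Yield unified-diff lines for text files that differ between
    two directory trees (files only present in one tree are skipped)."""
    for root, _dirs, files in os.walk(dir_a):
        for name in sorted(files):
            if name.endswith(skip_suffixes):
                continue
            path_a = os.path.join(root, name)
            path_b = os.path.join(dir_b, os.path.relpath(path_a, dir_a))
            if not os.path.exists(path_b):
                continue
            with open(path_a) as fa, open(path_b) as fb:
                yield from difflib.unified_diff(
                    fa.readlines(), fb.readlines(),
                    fromfile=path_a, tofile=path_b)

# usage, e.g.:
# for line in diff_trees('GeNNworkspace_fast', 'GeNNworkspace_slow'):
#     print(line, end='')
```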

Anyway, if the `lastupdate` variable gets removed, there will need to be some change in brian2genn, I guess. :)

diff of generated code

```diff
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/code_objects/synapses_1_synapses_create_generator_codeobject.cpp GeNNworkspace_slow/code_objects/synapses_1_synapses_create_generator_codeobject.cpp
--- GeNNworkspace_fast/code_objects/synapses_1_synapses_create_generator_codeobject.cpp	2018-08-15 15:31:43.582184617 +0200
+++ GeNNworkspace_slow/code_objects/synapses_1_synapses_create_generator_codeobject.cpp	2018-08-15 15:21:00.006339188 +0200
@@ -386,7 +386,6 @@
     _dynamic_array_synapses_1__synaptic_post.resize(newsize);
     _dynamic_array_synapses_1__synaptic_pre.resize(newsize);
     _dynamic_array_synapses_1_delay.resize(newsize);
-    _dynamic_array_synapses_1_lastupdate.resize(newsize);
     // Also update the total number of synapses
     _ptr_array_synapses_1_N[0] = newsize;
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/code_objects/synapses_synapses_create_generator_codeobject.cpp GeNNworkspace_slow/code_objects/synapses_synapses_create_generator_codeobject.cpp
--- GeNNworkspace_fast/code_objects/synapses_synapses_create_generator_codeobject.cpp	2018-08-15 15:31:43.618185280 +0200
+++ GeNNworkspace_slow/code_objects/synapses_synapses_create_generator_codeobject.cpp	2018-08-15 15:21:00.230343308 +0200
@@ -386,7 +386,6 @@
     _dynamic_array_synapses__synaptic_post.resize(newsize);
     _dynamic_array_synapses__synaptic_pre.resize(newsize);
     _dynamic_array_synapses_delay.resize(newsize);
-    _dynamic_array_synapses_lastupdate.resize(newsize);
     // Also update the total number of synapses
     _ptr_array_synapses_N[0] = newsize;
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/definitions.h GeNNworkspace_slow/magicnetwork_model_CODE/definitions.h
--- GeNNworkspace_fast/magicnetwork_model_CODE/definitions.h	2018-08-15 15:31:53.450366374 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/definitions.h	2018-08-15 15:26:56.020889812 +0200
@@ -87,13 +87,9 @@
 extern double * inSynsynapses;
 extern double * d_inSynsynapses;
 extern SparseProjection Csynapses;
-extern double * lastupdatesynapses;
-extern double * d_lastupdatesynapses;
 extern double * inSynsynapses_1;
 extern double * d_inSynsynapses_1;
 extern SparseProjection Csynapses_1;
-extern double * lastupdatesynapses_1;
-extern double * d_lastupdatesynapses_1;
 #define Conductance SparseProjection
 /*struct Conductance is deprecated.
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/runner.cc GeNNworkspace_slow/magicnetwork_model_CODE/runner.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/runner.cc	2018-08-15 15:31:53.470366742 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/runner.cc	2018-08-15 15:26:56.132891874 +0200
@@ -72,9 +72,6 @@
 __device__ unsigned int *dd_indInGsynapses;
 unsigned int *d_indsynapses;
 __device__ unsigned int *dd_indsynapses;
-double * lastupdatesynapses;
-double * d_lastupdatesynapses;
-__device__ double * dd_lastupdatesynapses;
 double * inSynsynapses_1;
 double * d_inSynsynapses_1;
 __device__ double * dd_inSynsynapses_1;
@@ -83,9 +80,6 @@
 __device__ unsigned int *dd_indInGsynapses_1;
 unsigned int *d_indsynapses_1;
 __device__ unsigned int *dd_indsynapses_1;
-double * lastupdatesynapses_1;
-double * d_lastupdatesynapses_1;
-__device__ double * dd_lastupdatesynapses_1;
 //-------------------------------------------------------------------------
 /*! \brief Function to convert a firing probability (per time step)
@@ -221,8 +215,6 @@
     Csynapses.remap= NULL;
     deviceMemAllocate(&d_indInGsynapses, dd_indInGsynapses, 100001 * sizeof(unsigned int));
     deviceMemAllocate(&d_indsynapses, dd_indsynapses, Csynapses.connN * sizeof(unsigned int));
-    cudaHostAlloc(&lastupdatesynapses, Csynapses.connN * sizeof(double), cudaHostAllocPortable);
-    deviceMemAllocate(&d_lastupdatesynapses, dd_lastupdatesynapses, Csynapses.connN * sizeof(double));
 }
 void createSparseConnectivityFromDensesynapses(int preN,int postN, double *denseMatrix){
@@ -240,8 +232,6 @@
     Csynapses_1.remap= NULL;
     deviceMemAllocate(&d_indInGsynapses_1, dd_indInGsynapses_1, 100001 * sizeof(unsigned int));
     deviceMemAllocate(&d_indsynapses_1, dd_indsynapses_1, Csynapses_1.connN * sizeof(unsigned int));
-    cudaHostAlloc(&lastupdatesynapses_1, Csynapses_1.connN * sizeof(double), cudaHostAllocPortable);
-    deviceMemAllocate(&d_lastupdatesynapses_1, dd_lastupdatesynapses_1, Csynapses_1.connN * sizeof(double));
 }
 void createSparseConnectivityFromDensesynapses_1(int preN,int postN, double *denseMatrix){
@@ -252,10 +242,8 @@
     size_t size;
     size = Csynapses.connN;
     initializeSparseArray(Csynapses, d_indsynapses, d_indInGsynapses,100000);
-CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses, lastupdatesynapses, sizeof(double) * size , cudaMemcpyHostToDevice));
     size = Csynapses_1.connN;
     initializeSparseArray(Csynapses_1, d_indsynapses_1, d_indInGsynapses_1,100000);
-CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses_1, lastupdatesynapses_1, sizeof(double) * size , cudaMemcpyHostToDevice));
 }
 void initmagicnetwork_model()
@@ -295,8 +283,6 @@
     CHECK_CUDA_ERRORS(cudaFree(d_indInGsynapses));
     CHECK_CUDA_ERRORS(cudaFreeHost(Csynapses.ind));
     CHECK_CUDA_ERRORS(cudaFree(d_indsynapses));
-    CHECK_CUDA_ERRORS(cudaFreeHost(lastupdatesynapses));
-    CHECK_CUDA_ERRORS(cudaFree(d_lastupdatesynapses));
     CHECK_CUDA_ERRORS(cudaFreeHost(inSynsynapses_1));
     CHECK_CUDA_ERRORS(cudaFree(d_inSynsynapses_1));
     Csynapses_1.connN= 0;
@@ -304,8 +290,6 @@
     CHECK_CUDA_ERRORS(cudaFree(d_indInGsynapses_1));
     CHECK_CUDA_ERRORS(cudaFreeHost(Csynapses_1.ind));
     CHECK_CUDA_ERRORS(cudaFree(d_indsynapses_1));
-    CHECK_CUDA_ERRORS(cudaFreeHost(lastupdatesynapses_1));
-    CHECK_CUDA_ERRORS(cudaFree(d_lastupdatesynapses_1));
 }
 void exitGeNN(){
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/runnerGPU.cc GeNNworkspace_slow/magicnetwork_model_CODE/runnerGPU.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/runnerGPU.cc	2018-08-15 15:31:53.474366816 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/runnerGPU.cc	2018-08-15 15:26:56.136891947 +0200
@@ -63,14 +63,12 @@
 void pushsynapsesStateToDevice()
 {
     size_t size = Csynapses.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses, lastupdatesynapses, size * sizeof(double), cudaMemcpyHostToDevice));
     CHECK_CUDA_ERRORS(cudaMemcpy(d_inSynsynapses, inSynsynapses, 100000 * sizeof(double), cudaMemcpyHostToDevice));
 }
 void pushsynapses_1StateToDevice()
 {
     size_t size = Csynapses_1.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(d_lastupdatesynapses_1, lastupdatesynapses_1, size * sizeof(double), cudaMemcpyHostToDevice));
     CHECK_CUDA_ERRORS(cudaMemcpy(d_inSynsynapses_1, inSynsynapses_1, 100000 * sizeof(double), cudaMemcpyHostToDevice));
 }
@@ -118,14 +116,12 @@
 void pullsynapsesStateFromDevice()
 {
     size_t size = Csynapses.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(lastupdatesynapses, d_lastupdatesynapses, size * sizeof(double), cudaMemcpyDeviceToHost));
     CHECK_CUDA_ERRORS(cudaMemcpy(inSynsynapses, d_inSynsynapses, 100000 * sizeof(double), cudaMemcpyDeviceToHost));
 }
 void pullsynapses_1StateFromDevice()
 {
     size_t size = Csynapses_1.connN;
-    CHECK_CUDA_ERRORS(cudaMemcpy(lastupdatesynapses_1, d_lastupdatesynapses_1, size * sizeof(double), cudaMemcpyDeviceToHost));
     CHECK_CUDA_ERRORS(cudaMemcpy(inSynsynapses_1, d_inSynsynapses_1, 100000 * sizeof(double), cudaMemcpyDeviceToHost));
 }
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/synapseFnct.cc GeNNworkspace_slow/magicnetwork_model_CODE/synapseFnct.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/synapseFnct.cc	2018-08-15 15:31:53.654370132 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/synapseFnct.cc	2018-08-15 15:26:56.184892831 +0200
@@ -32,7 +32,6 @@
 using namespace synapses_weightupdate_simCode;
 addtoinSyn = (6.00000000000000079e-09);
 inSynsynapses[ipost] += addtoinSyn;
-lastupdatesynapses[Csynapses.indInG[ipre] + j] = t;
 }
 }
 }
@@ -48,7 +47,6 @@
 using namespace synapses_1_weightupdate_simCode;
 addtoinSyn = (6.70000000000000044e-08);
 inSynsynapses_1[ipost] += addtoinSyn;
-lastupdatesynapses_1[Csynapses_1.indInG[ipre] + j] = t;
 }
 }
 }
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model_CODE/synapseKrnl.cc GeNNworkspace_slow/magicnetwork_model_CODE/synapseKrnl.cc
--- GeNNworkspace_fast/magicnetwork_model_CODE/synapseKrnl.cc	2018-08-15 15:31:53.486367037 +0200
+++ GeNNworkspace_slow/magicnetwork_model_CODE/synapseKrnl.cc	2018-08-15 15:26:56.148892168 +0200
@@ -48,7 +48,6 @@
 ipost = dd_indsynapses[prePos];
 addtoinSyn = (6.00000000000000079e-09);
 atomicAddSW(&dd_inSynsynapses[ipost], addtoinSyn);
-dd_lastupdatesynapses[prePos] = t;
 }
 }
@@ -93,7 +92,6 @@
 ipost = dd_indsynapses_1[prePos];
 addtoinSyn = (6.70000000000000044e-08);
 atomicAddSW(&dd_inSynsynapses_1[ipost], addtoinSyn);
-dd_lastupdatesynapses_1[prePos] = t;
 }
 }
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/magicnetwork_model.cpp GeNNworkspace_slow/magicnetwork_model.cpp
--- GeNNworkspace_fast/magicnetwork_model.cpp	2018-08-15 15:31:43.794188522 +0200
+++ GeNNworkspace_slow/magicnetwork_model.cpp	2018-08-15 15:21:00.546349120 +0200
@@ -66,13 +66,11 @@
 // initial variables (synapses)
 // one additional initial variable for hidden_weightmatrix
-double synapses_ini[2]= {
-    0.0,
+double synapses_ini[1]= {
 };
 double *synapses_postsyn_ini= NULL;
-double synapses_1_ini[2]= {
-    0.0,
+double synapses_1_ini[1]= {
 };
 double *synapses_1_postsyn_ini= NULL;
@@ -318,8 +316,6 @@
 s.pNames.clear();
 s.dpNames.clear();
 // step 1: variables
-s.varNames.push_back("lastupdate");
-s.varTypes.push_back("double");
 // step 2: scalar (shared) variables
 s.extraGlobalSynapseKernelParameters.clear();
 s.extraGlobalSynapseKernelParameterTypes.clear();
@@ -327,8 +323,7 @@
 s.pNames.push_back("we");
 // step 4: add simcode
 s.simCode= "$(addtoinSyn) = $(we);\n\
-$(updatelinsyn);\n\
-$(lastupdate) = t;";
+$(updatelinsyn);";
 s.simLearnPost= "";
 s.synapseDynamics= "";
 s.simCode_supportCode= "\n\
@@ -449,8 +444,6 @@
 s.pNames.clear();
 s.dpNames.clear();
 // step 1: variables
-s.varNames.push_back("lastupdate");
-s.varTypes.push_back("double");
 // step 2: scalar (shared) variables
 s.extraGlobalSynapseKernelParameters.clear();
 s.extraGlobalSynapseKernelParameterTypes.clear();
@@ -458,8 +451,7 @@
 s.pNames.push_back("wi");
 // step 4: add simcode
 s.simCode= "$(addtoinSyn) = $(wi);\n\
-$(updatelinsyn);\n\
-$(lastupdate) = t;";
+$(updatelinsyn);";
 s.simLearnPost= "";
 s.synapseDynamics= "";
 s.simCode_supportCode= "\n\
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/main.cpp GeNNworkspace_slow/main.cpp
--- GeNNworkspace_fast/main.cpp	2018-08-15 15:31:43.866189848 +0200
+++ GeNNworkspace_slow/main.cpp	2018-08-15 15:21:00.634350738 +0200
@@ -89,14 +89,8 @@
 // translate to GeNN synaptic arrays
 allocatesynapses(brian::_dynamic_array_synapses__synaptic_pre.size());
 vector > _synapses_bypre;
-convert_dynamic_arrays_2_sparse_synapses(brian::_dynamic_array_synapses__synaptic_pre, brian::_dynamic_array_synapses__synaptic_post,
-    brian::_dynamic_array_synapses_lastupdate, Csynapses, lastupdatesynapses,
-    100000, 100000, _synapses_bypre, b2g::FULL_MONTY);
 allocatesynapses_1(brian::_dynamic_array_synapses_1__synaptic_pre.size());
 vector > _synapses_1_bypre;
-convert_dynamic_arrays_2_sparse_synapses(brian::_dynamic_array_synapses_1__synaptic_pre, brian::_dynamic_array_synapses_1__synaptic_post,
-    brian::_dynamic_array_synapses_1_lastupdate, Csynapses_1, lastupdatesynapses_1,
-    100000, 100000, _synapses_1_bypre, b2g::FULL_MONTY);
 initmagicnetwork_model();
 // copy variable arrays
@@ -146,9 +140,7 @@
 // translate GeNN arrays back to synaptic arrays
-convert_sparse_synapses_2_dynamic_arrays(Csynapses, lastupdatesynapses, 100000, 100000, brian::_dynamic_array_synapses__synaptic_pre, brian::_dynamic_array_synapses__synaptic_post, brian::_dynamic_array_synapses_lastupdate, b2g::FULL_MONTY);
-convert_sparse_synapses_2_dynamic_arrays(Csynapses_1, lastupdatesynapses_1, 100000, 100000, brian::_dynamic_array_synapses_1__synaptic_pre, brian::_dynamic_array_synapses_1__synaptic_post, brian::_dynamic_array_synapses_1_lastupdate, b2g::FULL_MONTY);
 // copy variable arrays
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/objects.cpp GeNNworkspace_slow/objects.cpp
--- GeNNworkspace_fast/objects.cpp	2018-08-15 15:31:43.358180495 +0200
+++ GeNNworkspace_slow/objects.cpp	2018-08-15 15:20:59.618332052 +0200
@@ -56,13 +56,11 @@
 std::vector _dynamic_array_synapses_1__synaptic_post;
 std::vector _dynamic_array_synapses_1__synaptic_pre;
 std::vector _dynamic_array_synapses_1_delay;
-std::vector _dynamic_array_synapses_1_lastupdate;
 std::vector _dynamic_array_synapses_1_N_incoming;
 std::vector _dynamic_array_synapses_1_N_outgoing;
 std::vector _dynamic_array_synapses__synaptic_post;
 std::vector _dynamic_array_synapses__synaptic_pre;
 std::vector _dynamic_array_synapses_delay;
-std::vector _dynamic_array_synapses_lastupdate;
 std::vector _dynamic_array_synapses_N_incoming;
 std::vector _dynamic_array_synapses_N_outgoing;
@@ -400,19 +398,6 @@
 {
     std::cout << "Error writing output file for _dynamic_array_synapses_1_delay." << endl;
 }
-ofstream outfile__dynamic_array_synapses_1_lastupdate;
-outfile__dynamic_array_synapses_1_lastupdate.open("results/_dynamic_array_synapses_1_lastupdate_6875119916677774017", ios::binary | ios::out);
-if(outfile__dynamic_array_synapses_1_lastupdate.is_open())
-{
-    if (! _dynamic_array_synapses_1_lastupdate.empty() )
-    {
-        outfile__dynamic_array_synapses_1_lastupdate.write(reinterpret_cast(&_dynamic_array_synapses_1_lastupdate[0]), _dynamic_array_synapses_1_lastupdate.size()*sizeof(_dynamic_array_synapses_1_lastupdate[0]));
-        outfile__dynamic_array_synapses_1_lastupdate.close();
-    }
-} else
-{
-    std::cout << "Error writing output file for _dynamic_array_synapses_1_lastupdate." << endl;
-}
 ofstream outfile__dynamic_array_synapses_1_N_incoming;
 outfile__dynamic_array_synapses_1_N_incoming.open("results/_dynamic_array_synapses_1_N_incoming_-5364435978754666149", ios::binary | ios::out);
 if(outfile__dynamic_array_synapses_1_N_incoming.is_open())
@@ -478,19 +463,6 @@
 {
     std::cout << "Error writing output file for _dynamic_array_synapses_delay." << endl;
 }
-ofstream outfile__dynamic_array_synapses_lastupdate;
-outfile__dynamic_array_synapses_lastupdate.open("results/_dynamic_array_synapses_lastupdate_562699891839928247", ios::binary | ios::out);
-if(outfile__dynamic_array_synapses_lastupdate.is_open())
-{
-    if (! _dynamic_array_synapses_lastupdate.empty() )
-    {
-        outfile__dynamic_array_synapses_lastupdate.write(reinterpret_cast(&_dynamic_array_synapses_lastupdate[0]), _dynamic_array_synapses_lastupdate.size()*sizeof(_dynamic_array_synapses_lastupdate[0]));
-        outfile__dynamic_array_synapses_lastupdate.close();
-    }
-} else
-{
-    std::cout << "Error writing output file for _dynamic_array_synapses_lastupdate." << endl;
-}
 ofstream outfile__dynamic_array_synapses_N_incoming;
 outfile__dynamic_array_synapses_N_incoming.open("results/_dynamic_array_synapses_N_incoming_6651214916728133133", ios::binary | ios::out);
 if(outfile__dynamic_array_synapses_N_incoming.is_open())
diff -ur '-x*.o' -xresults -xtags -xmain GeNNworkspace_fast/objects.h GeNNworkspace_slow/objects.h
--- GeNNworkspace_fast/objects.h	2018-08-15 15:31:43.362180565 +0200
+++ GeNNworkspace_slow/objects.h	2018-08-15 15:20:59.622332125 +0200
@@ -25,13 +25,11 @@
 extern std::vector _dynamic_array_synapses_1__synaptic_post;
 extern std::vector _dynamic_array_synapses_1__synaptic_pre;
 extern std::vector _dynamic_array_synapses_1_delay;
-extern std::vector _dynamic_array_synapses_1_lastupdate;
 extern std::vector _dynamic_array_synapses_1_N_incoming;
 extern std::vector _dynamic_array_synapses_1_N_outgoing;
 extern std::vector _dynamic_array_synapses__synaptic_post;
 extern std::vector _dynamic_array_synapses__synaptic_pre;
 extern std::vector _dynamic_array_synapses_delay;
-extern std::vector _dynamic_array_synapses_lastupdate;
 extern std::vector _dynamic_array_synapses_N_incoming;
 extern std::vector _dynamic_array_synapses_N_outgoing;
```

denisalevi commented 6 years ago

@mstimberg Since `lastupdate` is now removed in brian2, this issue might arise again.