Open YigitDemirag opened 5 years ago

I am currently trying to implement an MNIST classifier using brian2genn on GPU. My problem is that `TimedArray` is not supported by brian2genn, and I can't come up with another solution that does not use `TimedArray`s to feed a dataset into the network. Any suggestions? Example piece of code that works on CPU but not on GPU:
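*(The snippet below is a minimal sketch of the kind of `TimedArray`-based input described here, not the poster's original code: `img_l`, `nmult`, `n_images`, `stimDuration` and the random `rates` data are illustrative placeholders.)*

```python
from brian2 import *
import numpy as np

# Placeholder parameters standing in for the actual experiment values
img_l = 28             # MNIST image side length
nmult = 1              # neurons per pixel
n_images = 10          # number of presented images
stimDuration = 100*ms  # presentation time per image

# One row of per-pixel firing rates for each presented image (random here)
rates = np.random.rand(n_images, img_l*img_l)*100*Hz
stimulusMNIST = TimedArray(rates, dt=stimDuration)

# Poisson input whose rates follow the TimedArray: this runs in runtime or
# C++ standalone mode, but Brian2GeNN cannot handle the TimedArray lookup
inp = PoissonGroup(img_l*img_l*nmult, rates='stimulusMNIST(t, i % (28*28))')
run(n_images*stimDuration)
```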
Hi. This is indeed a major limitation in Brian2GeNN at the moment. I don't see a convenient solution right now (see #96 for a discussion of what we might do in the future), but if the total number of spikes that the `PoissonGroup` generates is not too big, then you could maybe do the following (a sketch follows the list below):

1. Run a simulation with only the `TimedArray`, the `PoissonGroup` and a `SpikeMonitor`, and generate/record all the input spikes. For this, do not use Brian2GeNN, but instead the C++ standalone device or the default runtime mode.
2. Run the actual simulation with Brian2GeNN, replacing the `PoissonGroup` with a `SpikeGeneratorGroup` into which you plug the spikes recorded by the spike monitor from the previous simulation.

Before you do this, try to estimate how many spikes the `PoissonGroup` will generate. As a rough guideline, each recorded spike takes up 16 bytes of memory, so on a system with 16 GB of RAM you'd want to stay well below one billion spikes (10⁹ spikes × 16 bytes = 16 GB).
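For concreteness, here is a minimal sketch of the two phases, written as two separate scripts (switching devices within a single script is awkward). It reuses `stimulusMNIST`, `img_l`, `nmult` and `stimDuration` from the snippets in this thread; `total_duration` and the `.npy` file names are made up for the example:

```python
# Script 1 -- default runtime mode (or C++ standalone), NOT Brian2GeNN:
# simulate only the input and record all of its spikes
from brian2 import *
import numpy as np

inp = PoissonGroup(img_l*img_l*nmult, rates='stimulusMNIST(t, i % (28*28))')
mon = SpikeMonitor(inp)
run(total_duration)

# Save the spikes so that the Brian2GeNN script can replay them
np.save('spike_indices.npy', mon.i[:])
np.save('spike_times.npy', np.asarray(mon.t))  # times as plain floats, in seconds
```

```python
# Script 2 -- the actual simulation with Brian2GeNN, replaying the spikes
from brian2 import *
import brian2genn
import numpy as np

set_device('genn')
indices = np.load('spike_indices.npy')
times = np.load('spike_times.npy')*second
inp = SpikeGeneratorGroup(img_l*img_l*nmult, indices, times)  # img_l, nmult as before
# ... build the rest of the network with `inp` as its input layer ...
run(total_duration)
```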
A minor point: the simulation of the spikes should be a bit faster if you use:

```python
input = NeuronGroup(img_l*img_l*nmult, 'rate : Hz',
                    threshold='rand() < rate*dt', name='pin')
input.run_regularly('rate = stimulusMNIST(t, i % (28*28))', dt=stimDuration)
```
The `NeuronGroup` is equivalent to a `PoissonGroup`, but by using the `run_regularly` operation you only look up the rate every 100ms (i.e. when it actually changes), instead of on every time step.