brian-team / brian2genn

Brian 2 frontend to the GeNN simulator
http://brian2genn.readthedocs.io/
GNU General Public License v2.0

Working with datasets on GPU #95

Status: Open · YigitDemirag opened this issue 5 years ago

YigitDemirag commented 5 years ago

I am currently trying to implement an MNIST classifier with brian2genn on the GPU. My problem is that TimedArray is not supported by brian2genn, and I cannot come up with another way to feed a dataset into the network that does not use TimedArrays. Any suggestions?

An example piece of code that works on the CPU but not on the GPU:

    # Create input to the network (mnist_tr_in, mnist_te_in, stimRate,
    # stimDuration, img_l and nmult are defined elsewhere)
    train_and_test = np.vstack([mnist_tr_in, mnist_te_in])
    # stimRate = 100*Hz, stimDuration = 100*ms
    stimulusMNIST = TimedArray(train_and_test*stimRate, dt=stimDuration)
    input = PoissonGroup(img_l*img_l*nmult,
                         rates='stimulusMNIST(t, i % (28*28))', name='pin')
    net = Network(input)
mstimberg commented 5 years ago

Hi. This is indeed a major limitation in Brian2GeNN at the moment. I don't see a convenient solution right now (see #96 for a discussion of what we might do in the future), but if the total number of spikes that the PoissonGroup generates is not too big, then you could maybe do the following: run the PoissonGroup on its own with Brian 2's standard runtime device (i.e. on the CPU), record all of its spikes with a SpikeMonitor, and then replay them in the Brian2GeNN simulation with a SpikeGeneratorGroup.

Before you do this, try to estimate how many spikes the PoissonGroup will generate. As a rough guideline, each recorded spike takes up 16 bytes of memory, so on a system with 16 GB of RAM you'd want to stay well below one billion spikes.
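To make this concrete, here is a rough, untested sketch of the two steps; n_inputs, stim_rates and the file names are placeholders rather than code from the model above:

    # Step 1 (separate script, default runtime device on the CPU):
    # generate the Poisson spikes and record them.
    from brian2 import *
    import numpy as np

    n_inputs = 28 * 28                          # placeholder: one unit per pixel
    stim_rates = np.random.rand(10, n_inputs)   # placeholder for the MNIST rate matrix
    stimulusMNIST = TimedArray(stim_rates * 100*Hz, dt=100*ms)
    poisson = PoissonGroup(n_inputs, rates='stimulusMNIST(t, i % (28*28))')
    mon = SpikeMonitor(poisson)
    run(10 * 100*ms)                            # one 100 ms window per stimulus
    np.save('spike_indices.npy', mon.i[:])
    np.save('spike_times_ms.npy', mon.t[:] / ms)

    # Step 2 (separate script): replay the recorded spikes with Brian2GeNN.
    from brian2 import *
    import brian2genn
    import numpy as np

    set_device('genn')
    indices = np.load('spike_indices.npy')
    times = np.load('spike_times_ms.npy') * ms
    inp = SpikeGeneratorGroup(28 * 28, indices, times, name='pin')
    # ... build the rest of the network on top of `inp` and run as before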

A minor point: the CPU simulation that generates the spikes should be a bit faster if you use:

    input = NeuronGroup(img_l*img_l*nmult, 'rate : Hz',
                        threshold='rand() < rate*dt', name='pin')
    input.run_regularly('rate = stimulusMNIST(t,i%(28*28))', dt=stimDuration)

The NeuronGroup is equivalent to a PoissonGroup, but by using the run_regularly operation you only look up the rate every 100 ms (i.e. when it actually changes) instead of on every time step.
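In the sketch above, this variant would simply replace the PoissonGroup line of the spike-generation step (same placeholder names; the SpikeMonitor stays as it is):

    poisson = NeuronGroup(n_inputs, 'rate : Hz',
                          threshold='rand() < rate*dt', name='pin')
    poisson.run_regularly('rate = stimulusMNIST(t, i % (28*28))', dt=100*ms)

The threshold rand() < rate*dt is the same approximation that PoissonGroup uses internally, so the spike statistics do not change; the speed-up comes purely from evaluating the TimedArray lookup once per 100 ms stimulus window instead of on every simulation time step.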