Hi, we have a large model, and I read your paper about running the large Schmidt et al. model. My institution tells me they will provide the corresponding GPUs once I calculate how much memory is needed.
Thanks for your interest in using GeNN! Can you provide some more details about the model (i.e. number of neurons and synapses, what sort of plasticity etc.) and I can help you estimate how much memory it'll require.
Ok, thanks. Our model has 13,839,150 neurons and 100,738,784,518 synapses in total; it's a very large model, and we hope it will run in GeNN. I don't think we need to add plasticity to the model for the time being.
Oh wow that is big! What sort of neuron model does it use, what sort of connectivity and are there any synaptic delays?
Yes, as you know, the Schmidt et al. work is based on real biological data. We now have a larger dataset, so our model is larger. Because our work has only just started, we use a similar method to Schmidt et al. for calculating the synaptic delays, and we also use the LIF neuron model (with exponential postsynaptic currents), although we will improve it in the future. Oh, I also wonder: if a single GPU's memory can't hold my model, could GeNN support distributed processing across multiple GPUs like PyTorch?
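(For illustration, a LIF population with exponential postsynaptic currents might look roughly like the minimal PyGeNN sketch below, assuming the GeNN 4.x Python API; the population size, parameter values, and connectivity are placeholders for illustration, not values from the actual model.)

```python
# Minimal PyGeNN sketch (GeNN 4.x API): LIF neurons with exponential
# postsynaptic currents. All sizes and parameters below are placeholders.
from pygenn import genn_model

model = genn_model.GeNNModel("float", "lif_example")
model.dT = 0.1  # 0.1 ms simulation timestep

# Built-in "LIF" neuron model (parameter values are illustrative only)
lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
              "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 2.0}
lif_init = {"V": -65.0, "RefracTime": 0.0}
exc = model.add_neuron_population("exc", 1000, "LIF", lif_params, lif_init)

# Static synapses with the built-in "ExpCurr" exponential postsynaptic
# current model; delay is given in timesteps (here 10 * 0.1 ms = 1 ms)
model.add_synapse_population(
    "exc_exc", "SPARSE_GLOBALG", 10, exc, exc,
    "StaticPulse", {}, {"g": 0.05}, {}, {},
    "ExpCurr", {"tau": 5.0}, {},
    genn_model.init_connectivity("FixedProbability", {"prob": 0.1}))

model.build()
model.load()
for _ in range(1000):
    model.step_time()
```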
Ok, assuming the model structure is similar, the memory will be dominated by the delay buffers, which use 4-byte entries, so (assuming the max delay is still < 512 0.1 ms timesteps) they will require 13,839,150 × 512 × 4 bytes ≈ 26 GB. The remaining state is around 60 bytes per neuron, meaning it will require 13,839,150 × 60 bytes ≈ 792 MB. Basically, it's not going to fit on any desktop GPU but should be fine running on an HPC machine with GPUs that have at least 32 GB of memory, so a V100 or A100 basically.
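(As a quick sanity check, the arithmetic above can be reproduced with a short back-of-the-envelope script; the 4-byte entries, 512 delay slots, and ~60 bytes of state per neuron are the assumptions stated in this reply.)

```python
# Back-of-the-envelope GPU memory estimate from the figures quoted above:
# 4-byte delay-buffer entries, max delay < 512 timesteps of 0.1 ms,
# and roughly 60 bytes of remaining state per neuron.
NUM_NEURONS = 13_839_150
DELAY_SLOTS = 512   # max delay assumed < 512 * 0.1 ms timesteps
ENTRY_BYTES = 4     # delay buffers use 4-byte entries
STATE_BYTES = 60    # approximate per-neuron state

delay_buffer_bytes = NUM_NEURONS * DELAY_SLOTS * ENTRY_BYTES
state_bytes = NUM_NEURONS * STATE_BYTES

print(f"Delay buffers: {delay_buffer_bytes / 2**30:.1f} GiB")  # ~26.4 GiB
print(f"Neuron state:  {state_bytes / 2**20:.1f} MiB")         # ~791.9 MiB
```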
I'm afraid GeNN doesn't currently support distributed processing but you could try https://github.com/golosio/NeuronGPU
OK, thanks, we have V100s and A100s in our HPC machine. Thanks for your estimate!
Great - please let me know how you get on!