brian-team / brian2cuda

A brian2 extension to simulate spiking neural networks on GPUs
https://brian2cuda.readthedocs.io/
GNU General Public License v3.0

Why is the volatile GPU utilization so low? I ran the same code on the CPU (cpp_standalone) and on the GPU (brian2cuda), expecting it to run faster with brian2cuda. However, it runs much slower... Can someone explain this? #313

Closed LiYuan-0709 closed 8 months ago

LiYuan-0709 commented 8 months ago

Running on CPU, set_device('cpp_standalone'):
[screenshot: timing output of the CPU run]

Running on GPU, set_device("cuda_standalone"):
[screenshot: timing output of the GPU run]
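
For context, a minimal sketch of such a CPU/GPU comparison, using a hypothetical leaky integrate-and-fire network since the actual model is not shown in the screenshots, could look like this:

```python
from brian2 import *
import brian2cuda  # registers the "cuda_standalone" device

# Switch between the CPU and GPU backends here:
set_device('cuda_standalone')   # or: set_device('cpp_standalone')

# Hypothetical small LIF network (the real model is not shown above)
N = 100
tau = 10*ms
eqs = 'dv/dt = (1.1 - v)/tau : 1'
group = NeuronGroup(N, eqs, threshold='v > 1', reset='v = 0',
                    method='exact')

run(10*second, report='text')  # prints progress and wall-clock time
```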

mstimberg commented 8 months ago

Hi @LiYuan-0709, could you please open a discussion thread on https://brian.discourse.group instead of this issue? This does indeed look very slow, but it might simply be due to your model or the way it is set up. Without further details (about the model and your GPU) we cannot really look into this. From the numbers on the CPU, it seems you are running a very small network for a very long time. That is the worst-case scenario for a GPU, which needs many neurons/synapses to simulate in parallel. See our paper for more details: https://www.frontiersin.org/articles/10.3389/fninf.2022.883700/full
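
To make the "small network, long simulation" point concrete: the per-timestep cost of launching GPU kernels is roughly fixed, so it dominates when there is little parallel work per step. A rough back-of-envelope, with assumed (not measured) numbers:

```python
# Illustrative arithmetic only; the simulated duration and the overhead
# are assumptions, not taken from this issue or measured on any GPU.
biological_time = 100.0   # seconds of simulated time (assumed)
dt = 1e-4                 # Brian2's default timestep of 0.1 ms
steps = biological_time / dt          # 1,000,000 timesteps
launch_overhead = 20e-6   # ~20 us of fixed per-step kernel-launch cost (assumed)

# ~20 s of pure overhead, independent of network size; a large network
# amortizes this across many neurons/synapses, a tiny one cannot.
print(f"{steps * launch_overhead:.0f} s spent on launch overhead alone")
```

This is why the linked paper benchmarks across network sizes: the GPU only pays off once the per-step parallel work outweighs these fixed costs.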