idiap / sparch

PyTorch-based toolkit for developing spiking neural networks (SNNs) by training and testing them on speech command recognition tasks

Training is too slow for SNN! #1

Closed: RMalikM closed this issue 1 year ago

RMalikM commented 1 year ago

Hi, thanks for the wonderful work. I am using sparch for a speech emotion recognition task. I observed that during training, backpropagation takes a lot of time even with a 3-layer SNN. I trained on a single NVIDIA RTX A4000 GPU with 16 GB VRAM and also on an NVIDIA RTX 3060. Is there a way to reduce the training time, specifically during backpropagation?

@Kanma @alexbittar

alexbittar commented 1 year ago

Hello, it is normal for SNN training to take longer than ANN training, since standard (non-spiking) RNNs in PyTorch take advantage of CUDA implementations that run faster on GPUs. Such implementations could also be written for SNNs, but that is not yet the case. If you could be more specific about how long you mean by "a lot of time" and about the type of data you are using, I might be able to help you more.
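
As a rough, hypothetical illustration (not the actual sparch layer code): a spiking layer has to loop over time steps in Python, so autograd records one node per step and backprop unrolls the whole sequence, whereas `nn.LSTM` hands the full sequence to a single fused CUDA/cuDNN kernel:

```python
import torch
import torch.nn as nn

class NaiveLIFLayer(nn.Module):
    """Toy leaky integrate-and-fire layer: the time loop runs in Python,
    so backprop has to traverse every step one by one."""
    def __init__(self, input_size, hidden_size, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(input_size, hidden_size)
        self.beta = beta

    def forward(self, x):  # x: (batch, time, input_size)
        batch, time, _ = x.shape
        mem = torch.zeros(batch, self.fc.out_features, device=x.device)
        spikes = []
        for t in range(time):                 # one autograd node per time step
            mem = self.beta * mem + self.fc(x[:, t])
            out = (mem > 1.0).float()         # spike (surrogate gradient omitted)
            mem = mem - out                    # soft reset
            spikes.append(out)
        return torch.stack(spikes, dim=1)

# nn.LSTM, by contrast, runs the whole sequence in one fused call:
lstm = nn.LSTM(input_size=40, hidden_size=128, batch_first=True)
lif = NaiveLIFLayer(40, 128)
x = torch.randn(8, 500, 40)  # batch of 8 clips, 500 frames, 40 features
_ = lstm(x)
_ = lif(x)
```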

RMalikM commented 1 year ago

@alexbittar I am using audio WAV files of 5-10 seconds duration, sampled at 16 kHz. There are around 1600 samples in my training set, and one epoch takes around 30 minutes with a 3-layer SNN. I used the same features as in SpeechCommands. Is there any way to speed up the training?
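
For a rough sense of scale (assuming a standard 10 ms frame shift for the filterbank features; the actual hop used in sparch may differ), a 5-10 second clip already unrolls into 500-1000 time steps that BPTT has to traverse per utterance:

```python
# Back-of-the-envelope frame count (10 ms hop is an assumption, not sparch's exact setting)
duration_s = 10          # worst-case clip length in seconds
hop_s = 0.010            # assumed 10 ms frame shift
num_frames = int(duration_s / hop_s)
print(num_frames)        # -> 1000 time steps to backpropagate through per clip
```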

alexbittar commented 1 year ago

You could potentially use CNN layers first with some time pooling to reduce the number of time steps, and then apply SNN layers. You could also simply try to use larger batch sizes. The real solution would be to write some CUDA code (similar to nn.RNN or nn.LSTM) that would directly accelerate the computations, but that's unfortunately not something I can help you with myself. Hope you can still make use of these networks.
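
A minimal sketch of the first suggestion (hypothetical module and parameter names, not part of sparch): a `Conv1d` front-end with max pooling over time, so the spiking layers only see a quarter of the original frames:

```python
import torch
import torch.nn as nn

class ConvFrontEnd(nn.Module):
    """Reduce the number of time steps before the SNN layers:
    Conv1d over time followed by time pooling (T -> T // pool)."""
    def __init__(self, n_feats, n_channels=64, pool=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feats, n_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(pool),       # time pooling: T -> T // pool
        )

    def forward(self, x):             # x: (batch, time, n_feats)
        x = x.transpose(1, 2)         # -> (batch, n_feats, time) for Conv1d
        x = self.conv(x)
        return x.transpose(1, 2)      # -> (batch, time // pool, n_channels)

frontend = ConvFrontEnd(n_feats=40, n_channels=64, pool=4)
feats = torch.randn(8, 1000, 40)      # 1000-frame filterbank features
reduced = frontend(feats)             # shape: (8, 250, 64)
# `reduced` would then be fed to the SNN layers in place of the raw features.
```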

RMalikM commented 1 year ago

Thanks @alexbittar for the suggestions. I will look into it.

Closing this issue for now.