Python version: tested on Python 3.9, 3.10, and 3.11
torch: 2.3.1, 2.3.1+cu121, and 2.4.0
Operating System: Windows 10, Ubuntu 22.04, Rocky 8.9
Description
I was working through snnTorch Tutorial 3 about LIF neurons in feedforward networks.
In Chapter 2, an snn.Leaky neuron is created and its behaviour under a constant current input is tested.
The membrane potential is then plotted.
The tutorial explains that a soft-reset mechanism is implemented:
in contrast to the neuron implementation in Chapter 1.4, the reset subtracts β * U_threshold.
Therefore, the membrane potential should fall to ~0.2 after a spike, not to ~0.0.
This does not happen for me.
I tested on Windows, Linux, and Google Colab.
My colleague has a Mac, and for him it works as explained and as shown here:
https://raw.githubusercontent.com/jeshraghian/snntorch/master/docs/_static/img/examples/tutorial3/_static/snn.leaky_step.png
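For reference, here is a tiny numeric sketch of the soft reset as I understand it from the tutorial text. The update rule, β, and the threshold value are my assumptions, not copied from the snnTorch source:

```python
# Toy illustration of the reset-by-subtraction of beta * U_threshold
# described in the tutorial text (my paraphrase, not the snnTorch source).
beta = 0.8          # membrane decay rate assumed by the tutorial
u_thr = 1.0         # firing threshold (snn.Leaky default, as far as I know)

u = 1.02            # example membrane value just above threshold
spk = float(u > u_thr)

# Soft reset: subtract beta * U_threshold -> membrane lands around 0.2
u_soft = u - spk * beta * u_thr
print(u_soft)       # ~0.22

# Subtracting the full threshold instead lands around 0.0,
# which is roughly what my plot shows
u_full = u - spk * u_thr
print(u_full)       # ~0.02
```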
What I Did
Ran all cells in snntorch_tutorial_3.ipynb
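For completeness, this is roughly what the Chapter 2 cells do as I ran them. This is a condensed sketch; the input amplitude and number of steps are from memory, not copied verbatim from the notebook:

```python
# Condensed sketch of the Chapter 2 constant-current example
# (parameter values are approximate, not copied from snntorch_tutorial_3.ipynb).
import torch
import snntorch as snn
import matplotlib.pyplot as plt

beta = 0.8
num_steps = 200

lif1 = snn.Leaky(beta=beta)   # threshold left at its default

# Step current: zero for 10 steps, then a constant drive
cur_in = torch.cat((torch.zeros(10), torch.ones(num_steps - 10) * 0.21))

mem = lif1.init_leaky()       # initialise membrane potential
mem_rec = []

for step in range(num_steps):
    spk, mem = lif1(cur_in[step], mem)   # one forward step of the Leaky neuron
    mem_rec.append(mem.item())

plt.plot(mem_rec)
plt.xlabel("Time step")
plt.ylabel("Membrane potential U")
plt.show()                    # post-spike dips land at ~0.0 for me, not ~0.2
```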
I get this:
I expect this: