Closed by cheese-leopard 1 year ago
Can you post a minimal working example of the above code here for us to execute?
The code is as follows:
"""Issue 234."""
import random
import numpy as np
import snntorch as snn
import torch
SEED = 666
np.random.seed(SEED)
random.seed(SEED)
torch.manual_seed(SEED)
layer = snn.Synaptic(alpha=0.9, beta=0.8, threshold=1)
syn, mem = layer.init_synaptic()
x = torch.ones(10, 10) * 0.3
print("\n\nOutput of Synaptic with init_hidden=False\n")
for step in range(10):
spk, syn, mem = layer(x[step], syn, mem)
print(spk)
print("\n\nOutput of Synaptic with init_hidden=True\n")
layer2 = snn.Synaptic(alpha=0.9, beta=0.8, init_hidden=True, threshold=1)
x = torch.ones(10, 10) * 0.3
for step in range(10):
spk = layer2(x[step])
print(spk)
print("\n\nOutput of Leaky\n")
layer3 = snn.Leaky(beta=0.8, init_hidden=True, threshold=1)
x = torch.ones(10, 10) * 0.3
for step in range(10):
spk = layer3(x[step])
print(spk)
I can reproduce the behavior. The output is:
Output of Synaptic with init_hidden=False

tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)

Output of Synaptic with init_hidden=True

tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)

Output of Leaky

tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
I think this is a bug (a quick hand check of the documented update rules, sketched below, points the same way), but I would like to hear @jeshraghian's opinion on that.
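A rough sanity check, iterating the update rules from the snntorch documentation by hand (this is not the library source, and it ignores snntorch's exact reset timing):

# Rough hand check of the documented dynamics (not snntorch internals).
# Synaptic (2nd-order LIF): syn[t] = alpha * syn[t-1] + x[t];  mem[t] = beta * mem[t-1] + syn[t]
# Leaky    (1st-order LIF): mem[t] = beta * mem[t-1] + x[t]
alpha, beta, x, threshold = 0.9, 0.8, 0.3, 1.0

syn = mem_syn = mem_leaky = 0.0
for step in range(10):
    syn = alpha * syn + x
    mem_syn = beta * mem_syn + syn
    mem_leaky = beta * mem_leaky + x
    print(step, mem_syn >= threshold, mem_leaky >= threshold)
    # approximate reset by subtraction after a spike
    if mem_syn >= threshold:
        mem_syn -= threshold
    if mem_leaky >= threshold:
        mem_leaky -= threshold

With these constants the Synaptic membrane first crosses threshold at the third time step and keeps spiking, while the Leaky membrane crosses only once, at the fifth step. That matches the init_hidden=False and init_hidden=True outputs above, i.e. the hidden-state path appears to be running the Leaky dynamics.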
@be-Berserker , could you make a PR from this issue?
I made one in https://github.com/jeshraghian/snntorch/pull/235.
Description
I found what appears to be a problem in the source code of snntorch's Synaptic class. In the following Colab notebook, https://colab.research.google.com/drive/1ntRpP9q-oTeHvkY6TmcfjZ_DUwHzGIR_?usp=sharing, I created Synaptic neurons with init_hidden=False and init_hidden=True respectively, but their outputs are different. With init_hidden=True, the Synaptic neuron's output is identical to the output of the Leaky class. I think the cause is a code error in snntorch/_neurons/synaptic.py, Line 271, which I would rewrite as follows:
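A fix is proposed in the PR linked above; the sketch below is only a hedged illustration with a hypothetical function name, not the snntorch source or the actual patch. The point is that the hidden-state path should drive the membrane with the updated synaptic current; if that term is dropped, the update collapses to the Leaky rule, which matches the output above.

# Illustrative sketch only -- hypothetical name, not snntorch source code or the PR diff.
# Expected hidden-state update for Synaptic (reset by subtraction):
def synaptic_hidden_step(self, input_):
    self.syn = self.alpha * self.syn + input_      # second-order synaptic current
    self.mem = self.beta * self.mem + self.syn     # membrane driven by syn, not by input_
    spk = (self.mem > self.threshold).float()
    self.mem = self.mem - spk * self.threshold     # subtractive reset
    return spk
# If self.syn is ignored and input_ is added to self.mem directly,
# the neuron behaves exactly like snn.Leaky, which is the behavior reported here.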
What I Did