jeshraghian / snntorch

Deep and online learning with spiking neural networks in Python
https://snntorch.readthedocs.io/en/latest/
MIT License

Source code of snntorch.Synaptic seems to have a mistake? #234

Closed cheese-leopard closed 1 year ago

cheese-leopard commented 1 year ago

Description

I found what seems to be a problem in the source code of snntorch's Synaptic class. In the following Colab notebook, https://colab.research.google.com/drive/1ntRpP9q-oTeHvkY6TmcfjZ_DUwHzGIR_?usp=sharing, I created a synaptic neuron with `init_hidden=False` and with `init_hidden=True`, but their outputs differ. When `init_hidden=True`, the Synaptic neuron's output is identical to that of the Leaky class. I think the cause is a bug in snntorch/_neurons/synaptic.py at Line 271, which I would rewrite as follows:

What I Did

**SOURCE CODE:**

```python
    def _base_state_function_hidden(self, input_):
        base_fn_syn = self.alpha.clamp(0, 1) * self.syn + input_
        base_fn_mem = self.beta.clamp(0, 1) * self.mem + input_
        return base_fn_syn, base_fn_mem
```

**CORRECT CODE (I THINK):**

```python
    def _base_state_function_hidden(self, input_):
        base_fn_syn = self.alpha.clamp(0, 1) * self.syn + input_
        base_fn_mem = self.beta.clamp(0, 1) * self.mem + base_fn_syn
        return base_fn_syn, base_fn_mem
```
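
To illustrate why the current hidden-state rule collapses to Leaky dynamics, here is a minimal hand-rolled comparison in plain PyTorch. This is a sketch that ignores thresholding and reset, using example values `alpha=0.9`, `beta=0.8`, and a constant input of 0.3:

```python
"""Hand-rolled comparison of the two membrane update rules (no reset logic)."""
import torch

alpha, beta = 0.9, 0.8
x = torch.ones(10) * 0.3

syn = torch.zeros(1)
mem_fixed = torch.zeros(1)  # membrane under the corrected rule
mem_buggy = torch.zeros(1)  # membrane under the current hidden-state rule
mem_leaky = torch.zeros(1)  # plain Leaky membrane for comparison

for step in range(10):
    syn = alpha * syn + x[step]
    mem_fixed = beta * mem_fixed + syn      # corrected: membrane integrates syn
    mem_buggy = beta * mem_buggy + x[step]  # current code: syn is computed but never used
    mem_leaky = beta * mem_leaky + x[step]  # Leaky dynamics
    print(step, mem_fixed.item(), mem_buggy.item(), mem_leaky.item())

# mem_buggy == mem_leaky at every step, which is why Synaptic with
# init_hidden=True behaves exactly like Leaky.
```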
ahenkes1 commented 1 year ago

Can you post a minimal working example of the above code here for us to execute?

cheese-leopard commented 1 year ago

The code is as follows:

```python
"""Issue 234."""
import random

import numpy as np
import snntorch as snn
import torch

# Fix all seeds for reproducibility.
SEED = 666
np.random.seed(SEED)
random.seed(SEED)
torch.manual_seed(SEED)

# Case 1: Synaptic neuron with explicitly passed hidden state.
layer = snn.Synaptic(alpha=0.9, beta=0.8, threshold=1)
syn, mem = layer.init_synaptic()
x = torch.ones(10, 10) * 0.3
print("\n\nOutput of Synaptic with init_hidden=False\n")
for step in range(10):
    spk, syn, mem = layer(x[step], syn, mem)
    print(spk)

# Case 2: Synaptic neuron with init_hidden=True.
print("\n\nOutput of Synaptic with init_hidden=True\n")
layer2 = snn.Synaptic(alpha=0.9, beta=0.8, init_hidden=True, threshold=1)
x = torch.ones(10, 10) * 0.3
for step in range(10):
    spk = layer2(x[step])
    print(spk)

# Case 3: Leaky neuron for comparison.
print("\n\nOutput of Leaky\n")
layer3 = snn.Leaky(beta=0.8, init_hidden=True, threshold=1)
x = torch.ones(10, 10) * 0.3
for step in range(10):
    spk = layer3(x[step])
    print(spk)
```
ahenkes1 commented 1 year ago

I can reproduce the behavior. The output is:

Output of Synaptic with init_hidden=False

```
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
```

Output of Synaptic with init_hidden=True

```
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
```

Output of Leaky

```
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], grad_fn=<...>)
```
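
The spike timings also match the two candidate update rules. A quick first-spike check, a sketch that ignores the post-spike reset (which only matters after the first spike):

```python
"""First-spike timing under both rules (input 0.3, alpha=0.9, beta=0.8, threshold=1)."""
syn = mem_synaptic = mem_leaky = 0.0
for step in range(10):
    syn = 0.9 * syn + 0.3
    mem_synaptic = 0.8 * mem_synaptic + syn  # intended Synaptic rule
    mem_leaky = 0.8 * mem_leaky + 0.3        # Leaky rule == current hidden-state rule
    print(step, round(mem_synaptic, 3), round(mem_leaky, 3))

# mem_synaptic first crosses threshold=1 at step 2 (third time step),
# mem_leaky at step 4 (fifth time step) -- exactly where the spikes
# appear in the outputs above.
```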

I think this is a bug, but I would like to hear @jeshraghian's opinion on that.

ahenkes1 commented 1 year ago

@be-Berserker, could you make a PR from this issue?

cheese-leopard commented 1 year ago

I opened one: https://github.com/jeshraghian/snntorch/pull/235.