fangwei123456 / spikingjelly

SpikingJelly is an open-source deep learning framework for Spiking Neural Networks (SNNs) based on PyTorch.
https://spikingjelly.readthedocs.io

Dynamic Threshold #564

Open · KaiSUN1 opened 4 months ago

KaiSUN1 commented 4 months ago

I want to get an adaptive threshold, but after backpropagation the threshold parameter has no gradient. Can anyone help me?

class ALIFNode(neuron.BaseNode):
    def __init__(self, tau: torch.Tensor, v_threshold: float = 1.0, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.v_threshold = torch.nn.Parameter(torch.tensor(v_threshold, dtype=torch.float32, requires_grad=True))
        if not isinstance(tau, torch.Tensor):
            tau = torch.tensor(tau, dtype=torch.float32)
        self.register_buffer('tau', tau)
        self.v_reset = torch.tensor(0.0)  # Assuming a reset value, you might need to adjust this

    def neuronal_charge(self, x: torch.Tensor):
        self.v = self.v + (x - (self.v - self.v_reset)) / self.tau

    def neuronal_fire(self):
        # print(self.v_threshold)
        return self.surrogate_function(self.v - self.v_threshold)
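
Note: since the training loop isn't shown, one common cause of a "missing" gradient is simply reading .grad before backward() has run, or running the forward pass under torch.no_grad(). A minimal illustration, independent of the neuron class (theta is a hypothetical stand-in for v_threshold):

import torch

# theta stands in for the learnable v_threshold
theta = torch.nn.Parameter(torch.tensor(1.0))

out = torch.sigmoid(0.5 - theta)  # stand-in for surrogate_function(v - v_threshold)
print(theta.grad)                 # None: backward() has not run yet
out.backward()
print(theta.grad)                 # tensor(-0.2350): gradient appears after backward()

with torch.no_grad():
    out = torch.sigmoid(0.5 - theta)
print(out.grad_fn)                # None: no graph was built, so no gradient can flow
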
frostylight commented 3 months ago

It seems to work for me; you should provide the complete code that reproduces the bug.

import torch
from torch import nn
from spikingjelly.activation_based import neuron  # import path assumes a recent SpikingJelly

class ALIFNode(neuron.BaseNode):
    def __init__(self, tau: float | torch.Tensor = 2., v_threshold: float = 1., *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)

        self.v_threshold = nn.Parameter(torch.tensor(v_threshold, dtype=torch.float, requires_grad=True))
        tau = torch.tensor(tau, dtype=torch.float)
        self.register_buffer("tau", tau)

    def neuronal_charge(self, x: torch.Tensor):
        # leaky integration: v moves toward the input with time constant tau
        self.v = self.v + (x - (self.v - self.v_reset)) / self.tau

    def neuronal_fire(self):
        # spiking through the surrogate function keeps the graph differentiable,
        # so the learnable v_threshold receives a gradient
        return self.surrogate_function(self.v - self.v_threshold)

torch.manual_seed(0)
an = ALIFNode()
print(an.v_threshold.grad)
result = an(torch.tensor(1))
result.backward()
print(an.v_threshold.grad)

The output is:

None
tensor(-0.4200)
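
That value checks out by hand: with SpikingJelly's defaults (Sigmoid surrogate, alpha = 4) and tau = 2, one charge step from v = 0 with input x = 1 gives v = 0.5, so the spike is surrogate(v - theta) with theta = 1, and d(spike)/d(theta) = -alpha * sigmoid(alpha(v - theta)) * (1 - sigmoid(alpha(v - theta))) = -0.42. A minimal check, assuming those defaults:

import torch

# Hand-check of the printed gradient, assuming SpikingJelly's defaults:
# Sigmoid surrogate with alpha = 4, tau = 2, v_threshold = 1, input x = 1.
alpha, tau, theta, x = 4.0, 2.0, 1.0, 1.0

v = 0.0 + (x - 0.0) / tau                  # neuronal_charge: v = 0.5
s = torch.sigmoid(torch.tensor(alpha * (v - theta)))
print(-alpha * s * (1 - s))                # d(spike)/d(theta) = tensor(-0.4200)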