Closed JackCaster closed 5 months ago
You can try defining the neuron like this:

```python
import torch
from matplotlib import pyplot as plt
from spikingjelly.activation_based import neuron


class ABSThresholdLIFNode(neuron.SimpleLIFNode):
    def neuronal_fire(self):
        # Fire when |v| crosses v_threshold, i.e. on both the
        # positive and the negative threshold
        return self.surrogate_function(torch.abs(self.v) - self.v_threshold)


T = 64
# Constant positive input for the first half, negative for the second half
x = torch.cat((0.4 * torch.ones([T // 2]), -0.4 * torch.ones([T // 2])))
net = ABSThresholdLIFNode(tau=100., decay_input=False)

v = []
s_t = []
for t in range(T):
    s_t.append(net(x[t]) * t)  # record spike times for eventplot
    v.append(net.v)

fig = plt.figure()
plt.subplot(2, 1, 1)
plt.plot(torch.arange(T), x, label='input')
plt.plot(torch.arange(T), v, label='v')
plt.legend()
plt.subplot(2, 1, 2)
plt.eventplot(s_t, label='spike', colors='red')
plt.legend()
plt.show()
```
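For reference, the absolute-threshold firing rule can also be sketched without SpikingJelly. This is a minimal pure-Python version, assuming the simplified update `v += (x - v) / tau` with a hard reset to 0 (the actual `SimpleLIFNode` dynamics may differ in detail):

```python
def abs_threshold_lif(inputs, tau=10.0, v_threshold=0.3):
    """Toy LIF that spikes when |v| crosses v_threshold (either sign).

    Assumed dynamics (illustrative only): leaky integration
    v += (x - v) / tau, followed by a hard reset to 0 after a spike.
    """
    v, vs, spikes = 0.0, [], []
    for x in inputs:
        v += (x - v) / tau             # leaky integration toward the input
        fired = abs(v) >= v_threshold  # fire on either threshold crossing
        spikes.append(1 if fired else 0)
        if fired:
            v = 0.0                    # hard reset
        vs.append(v)
    return vs, spikes

# Constant positive input then constant negative input, as in the example above:
# spikes appear in both halves, since the negative threshold also fires.
T = 64
x = [0.4] * (T // 2) + [-0.4] * (T // 2)
vs, spikes = abs_threshold_lif(x)
```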
Thanks!
Issue type
SpikingJelly version: 0.0.0.0.14
Description
I would like to have a LIF neuron that can spike when the potential crosses a positive (+1) or negative (-1) threshold. I think I got a custom LIF neuron to work:
but my attempt to regress the membrane potential (by finding the gain `k` that is applied to the input current) fails, and I suspect that the surrogate functions may not work as intended when the spike comes from the negative threshold. The training loss keeps oscillating without converging, regardless of the learning rate (I do not have this problem with standard LIF neurons).
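One way to check that suspicion: with a firing condition of the form `|v| - v_threshold`, the chain rule multiplies the surrogate derivative by `sign(v)`, so the gradient with respect to `v` flips sign at `v = 0`. A small sketch, assuming a sigmoid surrogate `s(u) = sigmoid(alpha * u)` (a hypothetical choice, just for illustration; the actual surrogate in use may differ):

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def spike_grad_wrt_v(v, v_threshold=1.0, alpha=4.0):
    """d/dv of sigmoid(alpha * (|v| - v_threshold)).

    Chain rule: alpha * s * (1 - s) * sign(v), so the surrogate
    gradient changes sign with the membrane potential.
    """
    s = sigmoid(alpha * (abs(v) - v_threshold))
    sign = 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return alpha * s * (1.0 - s) * sign

# Near the positive threshold the gradient is positive;
# near the negative threshold it is negative.
g_pos = spike_grad_wrt_v(0.9)
g_neg = spike_grad_wrt_v(-0.9)
```

If the optimizer sees gradients of opposite sign depending on which threshold produced the spike, that could plausibly explain the oscillating loss.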
Do you know how I could get this to work?
Minimal code to reproduce the error/bug