CAS-CLab / Gated-LIF

GLIF: A Unified Gated Leaky Integrate-and-Fire Neuron for Spiking Neural Networks, NeurIPS 2022 Poster
MIT License

Bad operand question in backward training #3

Closed: isoflurane closed this issue 1 year ago

isoflurane commented 1 year ago

Hi, I'm trying to train GLIF SNNs on static datasets using the checkpoint file "checkpoint_max_newest.pth", but I ran into an operand error during backward training. In layers.py, after the `saved_tensors` call on line 48, `input` is a tuple, whereas the later `hu` calculation requires `input` to be a tensor or a number (int or float), which results in a bad operand error. Looking forward to your answer, and thank you in advance.

[screenshot: error traceback]
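For context, this is a minimal sketch of how that kind of bad-operand error typically arises in a custom `torch.autograd.Function` backward pass; the names (`SpikeAct`, `hu`, `aa`) are illustrative only and not copied from the repository's layers.py:

```python
# Illustrative sketch of the failure mode described above; names are hypothetical.
import torch

aa = 0.5  # assumed half-width of a rectangular surrogate-gradient window


class SpikeAct(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.gt(0.).float()  # Heaviside spike

    @staticmethod
    def backward(ctx, grad_output):
        # ctx.saved_tensors is ALWAYS a tuple, even when a single tensor was saved.
        # Writing `x = ctx.saved_tensors` keeps the tuple, so a later expression
        # such as abs(x) raises "bad operand type for abs(): 'tuple'".
        (x,) = ctx.saved_tensors                  # unpack the single saved tensor
        hu = (x.abs() < aa).float() / (2 * aa)    # rectangular surrogate gradient
        return grad_output * hu


if __name__ == "__main__":
    u = torch.randn(4, requires_grad=True)
    SpikeAct.apply(u).sum().backward()
    print(u.grad)
```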
Ikarosy commented 1 year ago

Hmm, I am not sure what error you encountered, since you have obviously changed my original code. Could you show me the full error report? That might help.

I suggest creating a virtual environment with Anaconda and installing PyTorch at the version listed in the Requirements. Alternatively, try other surrogate gradient functions that do not trigger this error (in my experiments, the choice of surrogate gradient function does not affect the results much). Also, GLIF has already been added to SpikingJelly: you can simply use their latest SpikingJelly release, set the neuron type to GLIF, and use their surrogate gradient implementation, as sketched below.
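A minimal sketch of that suggestion, assuming a recent SpikingJelly release; the surrogate-gradient pattern with `LIFNode` is standard, but the exact GLIF class name and constructor arguments are assumptions and should be checked against the installed version's documentation:

```python
# Sketch under assumptions: requires spikingjelly to be installed.
import torch
from spikingjelly.activation_based import neuron, surrogate

# Known pattern: SpikingJelly neurons accept a surrogate_function argument,
# e.g. a standard LIF neuron using the ATan surrogate gradient.
lif = neuron.LIFNode(surrogate_function=surrogate.ATan())

# Assumed pattern for GLIF (class name GatedLIFNode and its arguments are an
# assumption; it may require the number of time steps T and multi-step mode):
# glif = neuron.GatedLIFNode(T=4, surrogate_function=surrogate.Sigmoid())

x = torch.rand(8, 128)           # dummy single-step input [batch, features]
out = lif(x)                     # spikes in {0., 1.}
print(out.shape, out.unique())
```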

Ikarosy commented 1 year ago

The NeurIPS 2023 paper "Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies" has further verified the reproducibility and performance of GLIF. You can also try contacting its authors to discuss these implementation details of surrogate gradients.

Ikarosy commented 1 year ago

Re-open this issue if further discussion is needed.