fangwei123456 / Spike-Element-Wise-ResNet

Deep Residual Learning in Spiking Neural Networks
Mozilla Public License 2.0

Will you fit your code into the latest SpikingJelly? #21

Open · AmperiaWang opened this issue 1 year ago

AmperiaWang commented 1 year ago

I'm sorry, but some of your code is not compatible with the latest SpikingJelly framework. For example,

```python
from spikingjelly.cext.neuron import MultiStepParametricLIFNode
```

is no longer valid; I found the replacement in `activation_based.neuron.ParametricLIFNode`. Would you give your code long-term support by updating it? I'd appreciate it.
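For anyone migrating forward rather than rolling back, here is a minimal sketch of the replacement import, assuming a recent SpikingJelly release where the separate `MultiStep*` classes were merged into the base neurons and multi-step behavior is selected via `step_mode='m'`:

```python
# old API (no longer available):
# from spikingjelly.cext.neuron import MultiStepParametricLIFNode
# node = MultiStepParametricLIFNode()

# new activation-based API (assumed recent SpikingJelly):
from spikingjelly.activation_based import neuron

node = neuron.ParametricLIFNode(step_mode='m')  # 'm' selects multi-step mode
```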

fangwei123456 commented 1 year ago

According to readme.md, you can install the specific version:

> The original code uses a specific SpikingJelly version. To maximize reproducibility, the user can download the latest SpikingJelly and roll back to the version that we used for training:

```bash
git clone https://github.com/fangwei123456/spikingjelly.git
cd spikingjelly
git reset --hard 2958519df84ad77c316c6e6fbfac96fb2e5f59a3
python setup.py install
```
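Equivalently (an untested alternative, assuming a pip version that can install from a pinned git commit), the same version can be installed in one step:

```bash
pip install git+https://github.com/fangwei123456/spikingjelly.git@2958519df84ad77c316c6e6fbfac96fb2e5f59a3
```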

AmperiaWang commented 1 year ago

First, thank you for answering my question. Your code now runs well on my computer. However, I still have a question about your paper. You presented an AND function, which acts as an AND gate (element-wise multiplication) on spike train A and spike train S. Mathematically, though, the firing rate of the output can be no greater than that of either A or S, so as the network gets deeper it generates fewer and fewer spikes, which lowers the accuracy. My experiments confirmed this: the accuracy decreased after a few epochs. Therefore, I suspect you must have used some tricks in your code. Can you answer this question?
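To make the rate argument concrete, here is a small self-contained simulation (my own illustration, not code from the repository): for independent Bernoulli spike trains with firing probability r, each AND connection multiplies rates, so after k ANDs the output rate is roughly r^(k+1) and collapses geometrically with depth.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1000, 1000      # time steps, neurons
rate = 0.5             # firing probability per time step

# a chain of AND connections between independent Bernoulli spike trains
out = rng.random((T, N)) < rate
for k in range(1, 9):
    a = rng.random((T, N)) < rate   # fresh spike train at each layer
    out = out & a                   # AND: spike only where both fire
    print(f"after {k} ANDs: firing rate = {out.mean():.4f}  "
          f"(theory ~ {rate ** (k + 1):.4f})")
# the rate shrinks geometrically, matching the observation that deep
# AND-based networks emit fewer and fewer spikes
```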

fangwei123456 commented 1 year ago

> As the network gets deeper it generates fewer and fewer spikes, which lowers the accuracy. My experiments confirmed this: the accuracy decreased after a few epochs.

Yes, and you can find that I used ADD for ImageNet in the original paper.
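For context, the paper compares three element-wise functions g(s, x) for combining the residual-block output spikes s with the shortcut spikes x. A hedged sketch of the three variants (the function names below are mine, not the repository's):

```python
import torch

# element-wise functions g(s, x) from the SEW-ResNet paper; s and x are
# spike tensors taking values in {0, 1}
def sew_add(s: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # ADD: outputs can exceed 1 (spike counts), so the signal does not
    # vanish with depth; this is the variant used for ImageNet
    return s + x

def sew_and(s: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # AND: fires only where both paths fire, so firing rates shrink
    # multiplicatively with depth
    return s * x

def sew_iand(s: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # IAND: (NOT s) AND x, i.e. the shortcut spike passes unless the
    # block output fires
    return (1.0 - s) * x
```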