fangwei123456 / Parallel-Spiking-Neuron


Can we go specific on experimental settings? #1

Closed Ikarosy closed 1 year ago

Ikarosy commented 1 year ago

Hi, I tried a PSN ResNet-20 on CIFAR-10 with T=8: the official PyTorch ResNet implementation, but with ReLU replaced by PSN. 200 epochs, cosine scheduler. However, the classification accuracy is only ~85%. Is that normal? Any advice on hyperparameter settings, for either PSN or the training recipe, would be appreciated.
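
For reference, here is a minimal sketch of how I understand the PSN computation, S = Θ(WX − B) with a learnable T×T weight matrix W and per-step thresholds B; the identity initialization and sigmoid surrogate below are my guesses, not the official SpikingJelly implementation:

```python
import torch
import torch.nn as nn

class PSNSketch(nn.Module):
    """Parallel Spiking Neuron sketch: H = W X, S = Theta(H - B).
    W (T x T) and B (T) are learnable, and all T time steps are computed
    in parallel. The init and surrogate here are illustrative guesses."""
    def __init__(self, T: int):
        super().__init__()
        self.weight = nn.Parameter(torch.eye(T))       # W
        self.threshold = nn.Parameter(torch.zeros(T))  # B

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: [T, N, ...]; W mixes information across the time dimension
        h = torch.einsum('ti,i...->t...', self.weight, x_seq)
        h = h - self.threshold.view(-1, *([1] * (x_seq.dim() - 1)))
        spike = (h >= 0.).to(h)
        # straight-through estimator: Heaviside forward, sigmoid surrogate backward
        sg = torch.sigmoid(4. * h)
        return spike.detach() + sg - sg.detach()
```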

Besides the headline improvements in parallel computation and SNN performance, I also noticed that PSN's representation space is larger than GLIF's if the resetting mechanism is ignored: GLIF should be a subclass neuronal model of PSN. What an elegant parametric method! But I wonder whether such a computation-efficient neuronal model will be accepted in the field of neurocomputing. Is it possible for SNN researchers to fully embrace PSN to accelerate their research, and to apply PSN in any circumstance where the traditional LIF is used?

If PSN can fully replace LIFs, I think this field could become more active: experiments could run faster, and more ideas could emerge with PSN. I would be happy to see that, though I am not certain of this view. If PSN cannot fully replace LIFs, which is the scenario more SNN researchers are concerned about, what would the obstacles be?

I am very curious about the author's view. Maybe we can talk on WeChat? Or not?

Thanks for this novel and valuable work. I look forward to your reply.

fangwei123456 commented 1 year ago

the classification accuracy is only ~85%. Is that normal?

Yes. In my experience, the accuracy of the spiking ResNet-20 structure on CIFAR-10 is not high. I therefore prefer the PLIF net, which can be obtained from:

https://github.com/fangwei123456/spikingjelly/blob/bdc8cc9dd137d98fc210c2be5efade008f0dfd87/spikingjelly/activation_based/model/parametric_lif_net.py#L50

I suggest removing the voting layer, as the PSN paper did.
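
For instance (a sketch only: the CIFAR10Net constructor arguments and the attribute names below are my reading of the linked file, so please verify against the source):

```python
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate
from spikingjelly.activation_based.model import parametric_lif_net

# Build the PLIF net; the constructor arguments are assumptions based on a
# reading of the linked file -- check the actual signature before use.
net = parametric_lif_net.CIFAR10Net(
    channels=256,
    spiking_neuron=neuron.ParametricLIFNode,
    surrogate_function=surrogate.ATan(),
)

# Drop the voting layer, assuming the classifier head is an nn.Sequential whose
# last module is the voting layer (adapt the attribute name to the real code):
net.fc = nn.Sequential(*list(net.fc.children())[:-1])
```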

Note that data augmentation is also required to reach 95%+ accuracy:

https://github.com/fangwei123456/spikingjelly/blob/bdc8cc9dd137d98fc210c2be5efade008f0dfd87/spikingjelly/activation_based/model/train_classify.py#L173
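
For concreteness, one standard CIFAR-10 recipe along these lines (an illustrative guess at the spirit of the linked script, not a copy of it):

```python
import torchvision.transforms as T

# CIFAR-10 training-time augmentation: crop/flip plus AutoAugment and
# random erasing; the exact policies in the linked script may differ.
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.AutoAugment(T.AutoAugmentPolicy.CIFAR10),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
    T.RandomErasing(p=0.25),
])
```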

fangwei123456 commented 1 year ago

GLIF should be a subclass neuronal model of PSN

Yes. According to Eq. (8) of the arXiv v1 paper, vanilla spiking neurons with the reset removed and a linear neuronal charging function are a subclass of the PSN.
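
To spell the inclusion out in my own notation (λ is the membrane decay factor; the paper's Eq. (8) may use different symbols), unrolling a reset-free LIF gives

```math
H[t] = \lambda H[t-1] + X[t] \;\Longrightarrow\; H[t] = \sum_{i=1}^{t} \lambda^{t-i} X[i],
\qquad \text{i.e.} \quad H = W X, \quad
W_{t,i} = \begin{cases} \lambda^{t-i}, & i \le t \\ 0, & i > t \end{cases}
```

so a reset-free LIF corresponds to a fixed lower-triangular Toeplitz W, while PSN makes the whole T×T matrix (and the thresholds) learnable; GLIF-style linear charging variants are covered the same way.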

Is it possible for SNN researchers to fully embrace PSN to accelerate their research, and to apply PSN in any circumstance where the traditional LIF is used?

I guess that other researchers will be willing to use or modify the PSN family in their papers for higher training speed, which is also why we added the PSN family to the SpikingJelly framework before open-sourcing the training code, logs, and models (of course, we will open-source them as we did before).

I think the PSN family is easy to implement on chips, because removing neuronal dynamics is always easier than adding them.

I am very curious about the author's view. Maybe we can talk on WeChat? Or not?

Let me send you an email to get in touch. I am also willing to discuss adding the GLIF neuron to SpikingJelly with you.

Ikarosy commented 1 year ago

With pleasure, see you on WeChat.