yhhhli / SNN_Calibration

Pytorch Implementation of Spiking Neural Networks Calibration, ICML 2021
MIT License

Training Code #2

Open JominWink opened 2 years ago

JominWink commented 2 years ago

Hello, when I run the program, I get the error `AttributeError: Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'`. Could you help me solve it? Thank you very much!

yhhhli commented 2 years ago

Sounds like an error from autoaugment.

A quick fix is to avoid autoaugment in your CIFAR data loader by setting it to False, although that may not exactly reproduce the results in the README.

If you do want to keep autoaugment, could you paste your full log here? I cannot tell which line causes the error. Thanks.
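For reference, this error typically appears when a `DataLoader` with `num_workers > 0` sends the dataset (and its transforms) to worker processes: the dataset is pickled, and a lambda created inside `SubPolicy.__init__` cannot be pickled. A minimal sketch of the failure mode and the usual fix, replacing the local lambda with a picklable callable (the class names below are illustrative, not the actual autoaugment code):

```python
import pickle

# Not picklable: an instance holding a lambda defined inside __init__.
# Pickling it raises "Can't pickle local object 'BadPolicy.__init__.<locals>.<lambda>'".
class BadPolicy:
    def __init__(self, magnitude):
        self.op = lambda img: img * magnitude  # local lambda -> pickle error

# Picklable: a module-level class carrying the same state.
class ScaleOp:
    def __init__(self, magnitude):
        self.magnitude = magnitude

    def __call__(self, img):
        return img * self.magnitude

class GoodPolicy:
    def __init__(self, magnitude):
        self.op = ScaleOp(magnitude)

data = pickle.dumps(GoodPolicy(2))  # works; BadPolicy(2) would not
```

Setting `num_workers=0` in the `DataLoader` also sidesteps the pickling step entirely, at the cost of single-process data loading.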

JominWink commented 2 years ago

Hello, why is the accuracy of the converted SNN only about 46% on average when BN alone is used?

yhhhli commented 2 years ago

Do you mean ANN trained with BN has low conversion accuracy?

We tried to analyze the difference between ANN w/ BN and ANN w/o BN, but we could not find any explicit difference; their activation distributions look similar. We can only say that ANN w/ BN suffers more activation mismatch during conversion, so our calibration yields a larger improvement.

JominWink commented 2 years ago

Yes, the effect of Light and Advanced calibration is really obvious: with useBN enabled, the accuracy after Light or Advanced calibration matches the paper. But the conversion accuracy can already reach 86.111% without useBN and without any calibration, so the reason for the low conversion accuracy with useBN is not clear.

yhhhli commented 2 years ago

> The ANN conversion accuracy can reach 86.111% without useBN and any calibration, so the reason for the low conversion accuracy with useBN is not clear.

Yes, the reason is unclear. We tried, but we could not figure it out. We only noticed that, before our paper, no one used an ANN w/ BN for conversion; our calibration solves the problem, but the underlying cause is indeed unclear. Sorry about this.

The underlying reason could be a potential research topic.

yhhhli commented 2 years ago

Another comment: the ANN w/ BN has lower conversion accuracy at early time steps, but at larger time steps it performs better than the ANN w/o BN. So I'm sure that studying ANN-to-SNN conversion w/ BN could be promising.
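To make the time-step dependence concrete: in rate-based conversion, an integrate-and-fire neuron's firing rate over T steps approximates the ANN's ReLU activation, with a quantization error on the order of 1/T, so accuracy generally improves as T grows. A toy sketch, assuming the standard soft-reset IF model (the function name is mine, not from this repo):

```python
def if_neuron_rate(inp, T, v_th=1.0):
    """Average firing rate of a soft-reset integrate-and-fire neuron
    driven by a constant input `inp` for T time steps."""
    v = 0.0
    spikes = 0
    for _ in range(T):
        v += inp            # integrate the constant input current
        if v >= v_th:
            spikes += 1
            v -= v_th       # soft reset keeps the residual charge
    return spikes / T

# As T grows, the rate approaches min(max(inp, 0), 1) -- i.e. a clipped ReLU.
```

Running it with, say, `inp=0.3` and increasing T shows the rate converging to 0.3, which is the intuition behind trying longer time windows.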

JominWink commented 2 years ago

OK, the problem I first raised was that a lambda could not be serialized, but when I ran the code the other day the problem seemed to resolve itself, so I can continue the discussion now. Regarding the influence of ANN-to-SNN encoding on accuracy: what do you think about the choice of encoding? It is also not very clear to me how the constant encoding used in this paper qualifies as encoding into spikes.

JominWink commented 2 years ago

OK. Regarding the point about larger time steps, I'm going to try lengthening the time window to see what happens.

yhhhli commented 2 years ago

Maybe you can try Poisson encoding with calibration; I think calibration can also improve plain Poisson encoding.

Personally, I am not very fond of Poisson encoding. First, it is very inefficient on real hardware because you have to generate many random numbers, and second, people tend to convert it to ternary pulses (+1, -1, 0), which degrades performance too much.
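For concreteness, here is a minimal NumPy sketch contrasting Poisson (Bernoulli rate) encoding with constant-input encoding; the function names are illustrative, not from this repo. Poisson encoding recovers the analog input only on average, with sampling noise shrinking as 1/sqrt(T), which is one reason it needs many random numbers and long time windows:

```python
import numpy as np

def poisson_encode(x, T, rng=None):
    """Encode analog values x in [0, 1] as a (T, *x.shape) binary spike train:
    at each step, each input fires with probability equal to its value."""
    rng = np.random.default_rng(0) if rng is None else rng
    return (rng.random((T,) + x.shape) < x).astype(np.float32)

def constant_encode(x, T):
    """Constant encoding: feed the analog value unchanged at every step."""
    return np.broadcast_to(x, (T,) + x.shape).copy()

x = np.array([0.1, 0.5, 0.9], dtype=np.float32)
spikes = poisson_encode(x, T=1000)
rate = spikes.mean(axis=0)  # time-averaged rate approximates x, with noise
```

With constant encoding the per-step input equals `x` exactly, while the Poisson rate only matches `x` up to sampling error, illustrating the efficiency/noise trade-off discussed above.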