putshua / ANN_SNN_QCFS

Code for paper "Optimal ANN-SNN conversion for high-accuracy and ultra-low-latency spiking neural networks"

How do you set L in QCFS to obtain the ANN accuracy reported in your paper? #3

Open MusheerAbdullah opened 11 months ago

MusheerAbdullah commented 11 months ago

Hope you are doing well, Sir. I have a question and hope you could help me with it.

For example, in "Table 2: Comparison between the proposed method and previous works on CIFAR-10 dataset", VGG-16 has ANN acc = 95.52%. What "L" value in QCFS gives this accuracy, and how do you select the "L" value? Also, when you test the converted SNN model with T = (2, 4, 8, 16, ...) and get accuracies of (91.18, 93.96, 94.95, 95.40, ...), do you train a separate ANN counterpart with L = T for each case and then convert and simulate? Or is the same ANN model, trained with your optimal "L" value, simulated with different simulation times T in the SNN?

Waiting for your response, thank you

putshua commented 11 months ago

Hello, I usually set L=4 for CIFAR-10/CIFAR-100 and L=8 for ImageNet (please refer to the paper for all parameter settings). There is a trade-off between ANN accuracy and low-timestep SNN performance when choosing different L. Also, it is not necessary to set T=L during inference. There is a more detailed discussion of L in the paper. Thanks for your attention.
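
For readers following along, here is a minimal sketch of how L enters the picture during ANN training, assuming a simplified quantization-clip-floor-shift activation. The class name `QCFS` and the parameters `L` and `thresh` below are illustrative and not necessarily identical to the code in this repo:

```python
import torch
import torch.nn as nn

class QCFS(nn.Module):
    """Simplified quantization-clip-floor-shift activation (illustrative sketch)."""
    def __init__(self, L=4, thresh=8.0):
        super().__init__()
        self.L = L                                          # number of quantization steps
        self.thresh = nn.Parameter(torch.tensor(thresh))    # trainable threshold (lambda)

    def forward(self, x):
        # shift by 0.5, floor into L levels, clip to [0, 1], rescale by the threshold;
        # the paper additionally handles the zero gradient of floor (straight-through
        # style), which is omitted here for brevity
        y = torch.clamp(torch.floor(x * self.L / self.thresh + 0.5) / self.L, 0.0, 1.0)
        return self.thresh * y
```

After training, the QCFS units are swapped for IF neurons that inherit the trained threshold, and the resulting SNN can be simulated for any number of timesteps T (2, 4, 8, ...); the T used at inference is independent of the L used during training.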

MusheerAbdullah commented 11 months ago

Thank you so much for your response, I appreciate it! Just to confirm: your paper reports ANN acc = 95.52% for VGG-16 on CIFAR-10. Did you get this result with L=4 or L=8? I cannot reach this accuracy with L=4; I can only get it with L=8. Could you confirm, please?

putshua commented 11 months ago

It's the result with L=4. I have reported the performance obtained on my machine with the default settings and a fixed seed in this repo. I am not sure why you cannot reproduce these results. Could you please download the latest version and try again with all default settings? Please let me know if the results are still not reproducible. Thanks.
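
In case it helps with reproducibility, this is a minimal sketch of how one might fix the random seeds in PyTorch before training; the `set_seed` helper and the seed value are just examples, not taken from this repo:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    # pin every common source of randomness so repeated runs match
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # trade speed for determinism in cuDNN kernels
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Small accuracy differences can still remain across GPU models and library versions even with a fixed seed, so results within a few tenths of a percent of the reported numbers are normal.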