MA_SNN
├── /DVSGestures/
│ ├── /data/
│ │ ├── DVS_Gesture.py
│ │ └── DvsGesture.tar.gz
e.g.:
python Att_SNN_CNN.py
MA_SNN
├── /CIFAR10DVS/
│ ├── /data/
│ │ ├── /airplane/
│ │ │ ├── 0.mat
│ │ │ ├── 1.mat
│ │ │ └── ...
│ │ ├── /automobile/
│ │ └── ...
e.g.:
python Att_SNN.py
Download the [DVSGait Dataset] and put it into /MA_SNN/DVSGait/data.
Change the values of T and dt in /MA_SNN/DVSGait/CNN/Config.py, then run the tasks in /MA_SNN/DVSGait.
e.g.:
python Att_SNN_CNN.py
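The T and dt values above control how the raw event stream is sliced into the frame tensor fed to the network: T windows, each dt microseconds wide. A minimal sketch of this binning, assuming an illustrative (x, y, timestamp, polarity) event format; the function name and defaults are hypothetical, not the repository's actual loader:

```python
import numpy as np

def events_to_frames(events, T=60, dt=15_000, H=128, W=128):
    """Accumulate DVS events into T frames, each covering a dt-microsecond window.

    events: array of shape (N, 4) with columns (x, y, timestamp_us, polarity).
    Returns a tensor of shape (T, 2, H, W): one channel per polarity.
    """
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    t0 = events[:, 2].min()  # align windows to the first event
    for x, y, t, p in events:
        idx = int((t - t0) // dt)
        if idx < T:  # events beyond T * dt are dropped
            frames[idx, int(p), int(y), int(x)] += 1.0
    return frames

# Three toy events: two fall in the first dt window, one in the second.
ev = np.array([[3, 5, 100, 0],
               [3, 5, 200, 1],
               [7, 2, 20_000, 1]], dtype=np.int64)
f = events_to_frames(ev, T=4, dt=15_000, H=8, W=8)
```

Larger dt trades temporal resolution for denser frames; larger T lengthens the simulated spike train.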
We adopt MS-ResNet (https://github.com/Ariande1/MS-ResNet) as the residual spiking neural network backbone.
e.g.:
python -m torch.distributed.launch --master_port=[port] --nproc_per_node=[node_num] train_amp.py -net [model_type] -b [batchsize] -lr [learning_rate]
The implementation of Att-VGG-SNN is available at https://github.com/ridgerchu/SNN_Attention_VGG.
/module/Attention.py defines the attention layers, and /module/LIF.py and /module/LIF_Module.py define the LIF modules.
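A hedged sketch of the two building blocks these files provide: a discrete LIF update and a squeeze-and-excitation-style channel attention that rescales the input before integration. All constants, weights, and function names here are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def lif_step(u, x, tau=0.25, v_th=0.5):
    """One LIF timestep: leaky-integrate input x into membrane u, fire, hard-reset."""
    u = tau * u + x                       # leaky integration (tau is illustrative)
    s = (u >= v_th).astype(x.dtype)       # binary spike where threshold is crossed
    u = u * (1.0 - s)                     # hard reset at spiking positions
    return u, s

def channel_attention(x, w1, w2):
    """SE-style attention: squeeze a (C, H, W) map into per-channel weights in (0, 1)."""
    z = x.mean(axis=(1, 2))               # global average pool over H, W
    h = np.maximum(w1 @ z, 0.0)           # reduction FC + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # expansion FC + sigmoid
    return x * a[:, None, None]           # rescale each channel

rng = np.random.default_rng(0)
x = rng.random((8, 4, 4)).astype(np.float32)         # one timestep, (C, H, W)
w1 = rng.standard_normal((2, 8)).astype(np.float32)  # C -> C // 4 (toy sizes)
w2 = rng.standard_normal((8, 2)).astype(np.float32)  # C // 4 -> C
u = np.zeros_like(x)
u, s = lif_step(u, channel_attention(x, w1, w2))
```

In the paper's multi-dimensional attention, analogous weightings are applied along the temporal, channel, and spatial axes of the spike tensor.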
The CSA-MS-ResNet104 model is available at https://pan.baidu.com/s/1Uro7IVSerV23OKbG8Qn6pQ?pwd=54tl (Code: 54tl).
@ARTICLE{10032591,
  author={Yao, Man and Zhao, Guangshe and Zhang, Hengyu and Hu, Yifan and Deng, Lei and Tian, Yonghong and Xu, Bo and Li, Guoqi},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title={Attention Spiking Neural Networks},
  year={2023},
  volume={45},
  number={8},
  pages={9393-9410},
  doi={10.1109/TPAMI.2023.3241201}
}