bic-L / Masked-Spiking-Transformer

[ICCV-23] Masked Spiking Transformer

Incorrect validation performance on imagenet with pretrained model with 0.75 mask ratio #5

Closed Xzk7 closed 9 months ago

Xzk7 commented 9 months ago

Thanks for your excellent work. We ran the command "torchrun --nproc_per_node=8 main.py --cfg configs/mst/MST.yaml --batch-size 128 --snnvalidate True --dataset imagenet --pretrained ckpt/imagenet_mr_0.75.pth --sim_len 128" and got an output accuracy tensor (of size 128) filled with 0.001; we did not modify the original code. Thank you again for your work and time. Looking forward to your reply, thank you!

bic-L commented 9 months ago

Hi,

We have re-uploaded the checkpoint for Imagenet and updated the readme file. To evaluate performance with a 0.75 masking ratio, please use this command instead:

"torchrun --nproc_per_node=8 main.py --cfg configs/mst/MST.yaml --batch-size 128 --snnvalidate True --dataset imagenet --pretrained ckpt/imagenet_mr_0.75.pth --sim_len 128 --masking_ratio 0.75"

Please let me know if this helps.
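(Editor's note: a 0.001 top-1 accuracy is exactly chance level on 1000-class ImageNet, consistent with evaluating the 0.75-ratio checkpoint under a mismatched default masking ratio. The sketch below is a hypothetical illustration of this kind of argument parsing, not the repository's actual code; the default value shown is an assumption.)

```python
# Hypothetical sketch: if --masking_ratio has a default that differs from
# the ratio the checkpoint was trained with (0.75 here), evaluation runs
# under conditions the model never saw, yielding chance-level accuracy
# (~1/1000 = 0.001 top-1 on ImageNet).
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="hypothetical eval args")
    # Assumed default; omitting the flag would silently use this value.
    parser.add_argument("--masking_ratio", type=float, default=1.0)
    return parser

# Passing the flag explicitly matches the checkpoint's training ratio.
args = build_parser().parse_args(["--masking_ratio", "0.75"])
print(args.masking_ratio)  # 0.75
```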

Xzk7 commented 9 months ago


OK, I missed "--masking_ratio 0.75". Thanks for your help!