dyoony / SRST_AWR

Repository for "Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation"
MIT License

Sharing teacher model checkpoints. #1

Open KVGandikota opened 4 months ago

KVGandikota commented 4 months ago

Hi,

Could you share checkpoints for the teacher model? We are finding it difficult to reproduce the accuracies reported in the paper, perhaps because our teacher is not trained to the same accuracy as reported.

dyoony commented 4 months ago

Hi, you can train the teacher model using train_teacher.py. How different is the performance of your trained teacher model compared to the teacher model reported in the paper?

KVGandikota commented 4 months ago

Hi, thanks for getting back. I used train_teacher.py. For 1000 labels on the CIFAR-10 dataset, the WRN-28-5 teacher trains to an accuracy of 92.82%. Using this trained teacher with SRST_AWR, I get a standard test accuracy of 82.19%, a PGD-20 accuracy of 48.73%, and an AutoAttack accuracy of 46.0%. The exact values for 1000 labels are not reported in the paper; instead, the graph in Figure 1 of the paper indicates an AutoAttack accuracy greater than 50%.

KVGandikota commented 4 months ago

I am also unable to reproduce the reported values for SVHN and STL-10. For SVHN, the WRN-28-2 teacher trains to an accuracy of 96.6%, and for STL-10 the WRN-28-5 teacher trains to an accuracy of 87.9%. Both use strictly 1000 labels.

dyoony commented 3 months ago

Thank you for the detailed response. I found that the total number of epochs in train_teacher.py should be 1000, not 500; 500 is the setting used for a fast run. The model for reproducing the results in the paper will be uploaded next week. Thank you for your attention to my paper. Feel free to ask any further questions!
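
For anyone retraining the teacher in the meantime, the fix would amount to doubling the epoch budget when invoking the script. A minimal sketch of such a run; the flag names (`--epochs`, `--dataset`, `--num_labels`) are assumptions about train_teacher.py's argument parser, so check its argparse definitions for the actual names and defaults:

```shell
# Hypothetical invocation: flag names are assumed, not confirmed from the repo.
# The key point from the thread is epochs = 1000 rather than the fast-run 500.
python train_teacher.py --dataset cifar10 --num_labels 1000 --epochs 1000
```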