ylsung / pytorch-adversarial-training

PyTorch-1.0 implementation of adversarial training on MNIST/CIFAR-10 and visualization of the robust classifier.

More information about the updated checkpoint of the PGD-trained Madry model on CIFAR-10 #8

Closed renqibing closed 3 years ago

renqibing commented 3 years ago

Hi, first thanks for your great work!

I would like to know more about the updated checkpoint of the PGD-trained Madry model on CIFAR-10. Was this checkpoint saved after all 76000 iterations were done? I ran a PGD-20 attack against your trained model and got 50.05% accuracy, while 47.04% is reported on the leaderboard of MadryLab's CIFAR-10 challenge. Is there any possible reason for such a difference?

Thanks for your attention. Looking forward to your reply.

ylsung commented 3 years ago

Sorry for the late reply.

Yes, I use the last saved checkpoint. Regarding the accuracy difference: a lot of randomness affects the trained model, such as the weight initialization, the data order, and the randomness in the PGD attack (during training), so I guess the difference comes from some of these factors. To get a stable result, you could train multiple models with different seeds and report the average accuracy over them.
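One of the randomness sources mentioned above is the random start of the PGD attack itself: each restart perturbs the input with uniform noise inside the epsilon ball before the gradient steps, so two evaluations of the same checkpoint can report slightly different robust accuracy. A minimal sketch of such an attack (illustrative hyperparameters, not the repository's exact code):

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """L-inf PGD with a random start (the source of attack-side randomness)."""
    # Random start: uniform noise inside the eps ball, then clip to valid pixels.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Averaging robust accuracy over several seeds (and several random restarts of the attack) smooths out this variance.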

ylsung commented 3 years ago

BTW, in Madry's implementation, they normalize the inputs with mean and std. This may also cause the difference.
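If normalization is done outside the model, the attack's eps budget applies to normalized coordinates rather than raw pixels, which changes the effective perturbation size. A common way to keep results comparable is to fold the normalization into the model, so the attack always perturbs raw [0, 1] pixels (a sketch with illustrative CIFAR-10 statistics, not necessarily the values Madry's code uses):

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Wrap a classifier so per-channel mean/std normalization happens
    inside the forward pass; attacks then operate on raw [0, 1] pixels."""
    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        # Buffers move with the model across devices but are not trained.
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return self.model((x - self.mean) / self.std)
```

With this wrapper, the same eps in pixel space means the same thing regardless of which normalization the underlying network was trained with.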

ylsung commented 3 years ago

Close because of inactivity.