MadryLab / cifar10_challenge

A challenge to explore adversarial robustness of neural networks on CIFAR10.
MIT License

About the accuracy of adversarial examples #10

Status: Closed (lith0613 closed this issue 5 years ago)

lith0613 commented 5 years ago

I downloaded the two 'secret models' from the URLs in fetch_model.py and loaded their weights. When I evaluate adversarial examples generated by my own method, the naturally_trained model actually gets higher test accuracy than the adv_trained model. I don't know why that happens; can you offer an explanation?

dtsip commented 5 years ago

This can happen if you don't construct strong enough adversarial examples. I would recommend comparing your attack to one of the strong baselines already included in the repo.
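For context, the baseline attack in this repo is PGD under an L-infinity constraint. A minimal, framework-free NumPy sketch of that attack loop is below; the function name `pgd_attack`, the `grad_fn` callback, and the default parameters (e.g. epsilon = 8/255, a common CIFAR10 setting) are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def pgd_attack(x, y, grad_fn, epsilon=8/255, alpha=2/255, steps=10):
    """Sketch of an L-infinity PGD attack (hypothetical interface).

    grad_fn(x, y) returns the gradient of the loss w.r.t. the input x.
    Inputs are assumed to live in [0, 1].
    """
    x_orig = x.copy()
    # Random start inside the epsilon ball, then clip to the valid pixel range.
    x_adv = x + np.random.uniform(-epsilon, epsilon, size=x.shape)
    x_adv = np.clip(x_adv, 0.0, 1.0)
    for _ in range(steps):
        g = grad_fn(x_adv, y)
        # Ascend the loss with a signed gradient step...
        x_adv = x_adv + alpha * np.sign(g)
        # ...then project back onto the epsilon ball around the original input.
        x_adv = np.clip(x_adv, x_orig - epsilon, x_orig + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

If a custom attack moves the loss much less than a loop like this, both models can look robust, and the comparison between them becomes unreliable.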