MadryLab / cifar10_challenge

A challenge to explore adversarial robustness of neural networks on CIFAR10.
MIT License

The config for training a robust model on CIFAR10. #27

Closed ylsung closed 4 years ago

ylsung commented 4 years ago

In this repository, was the PGD attack used to train the robust model run with 7 steps of size 2, as stated in the paper, or with 10 steps of size 2, as given in the config? Thank you.

dtsip commented 4 years ago

7 steps, as mentioned in the paper. The default config is just an example.

Note that when you download models with `python fetch_model.py`, there is a `config.json` inside the resulting folder that contains the exact parameters used.
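
For reference, the training attack discussed here (L-infinity PGD with 7 steps of size 2 inside an epsilon ball, on the 0-255 pixel scale) can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual implementation; the `grad_fn` callback and the `epsilon=8.0` default are assumptions for the example.

```python
import numpy as np

def pgd_attack(x, grad_fn, epsilon=8.0, step_size=2.0, num_steps=7):
    """L-infinity PGD sketch: num_steps ascent steps of step_size,
    projected back into the epsilon ball around x after every step.
    grad_fn(x_adv) is assumed to return the loss gradient w.r.t. the input.
    Pixel values are on the 0-255 scale."""
    # random start inside the epsilon ball
    x_adv = x + np.random.uniform(-epsilon, epsilon, x.shape)
    for _ in range(num_steps):
        # signed-gradient ascent step
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv))
        # project back into the L-infinity ball around x
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        # keep pixels in the valid range
        x_adv = np.clip(x_adv, 0.0, 255.0)
    return x_adv
```

Swapping `num_steps=7` for `num_steps=10` reproduces the difference between the paper's setting and the example config that prompted this question.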