-
Thanks for your excellent work. I have three questions:
1. I notice that _robustness_ does not currently support the MNIST dataset. Do you plan to add it in the future?
2. When dealing with MNIS…
-
Thank you for creating such an important challenge. Could you please share the dataset download link?
-
Hello, I ran your code and got a result, but there is a big difference between my result and the accuracy in the README. My target model is VGG13, with 88% accuracy on the test set and 95% on the training set. The res…
-
In your experiment for CIFAR-10, the l_inf perturbed model has a perturbation size of 4/255 ≈ 0.0157. The robust accuracy achieved in their paper (https://arxiv.org/abs/1805.12152) is 57.79% a…
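Just to be explicit about the scale I mean, here is a quick check of the conversion between the 0–255 pixel scale and the normalized [0, 1] image scale (the numbers are only this arithmetic, nothing from your code):

```python
# A perturbation budget of 4 pixel levels on the 0-255 scale corresponds to
# roughly 0.0157 on the normalized [0, 1] scale.
eps_pixels = 4
eps_normalized = eps_pixels / 255.0
print(f"{eps_normalized:.4f}")            # 0.0157
assert abs(eps_normalized - 0.0157) < 1e-4
```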
-
In Algorithm 1 of your [paper](https://openreview.net/attachment?id=BJx040EFvH&name=original_pdf), you describe PGD adversarial training, but you only train your network on PGD-attacked images. On the othe…
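For concreteness, this is a minimal PyTorch-style sketch of the loop as I read Algorithm 1 (each batch is replaced by its PGD-perturbed version before the update); all names (`model`, `loader`, `epsilon`, `alpha`, `num_steps`) are illustrative and not taken from your code:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha, num_steps):
    """Generate L_inf PGD adversarial examples for a batch (x, y) in [0, 1]."""
    # Random start inside the L_inf ball, then clip to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                    # ascent step
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)   # project to L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def train_epoch(model, loader, optimizer, epsilon=8/255, alpha=2/255, num_steps=10):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, epsilon, alpha, num_steps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # loss on adversarial images only
        loss.backward()
        optimizer.step()
```

In this reading, mixing in the clean images would be a one-line change (e.g., averaging the losses on `x` and `x_adv`), which is why I'm asking which variant Algorithm 1 intends.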
-
Are there a PyTorch definition and PyTorch model weights for the architecture used in the white-box leaderboard?
We would like to try our attack on your challenge, but unfortunately our code is writ…
-
Hi,
I have a question about the network architecture. As you commented in [https://github.com/MadryLab/cifar10_challenge/blob/master/model.py#L50](https://github.com/MadryLab/cifar10_challenge/blob…
-
I loaded the pre-trained model from https://github.com/MadryLab/cifar10_challenge and set up the environment as described in the README.md, but cleverhans/examples/madry_lab_challenges/cifar10/attack_model.py cannot r…
-
Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> converted into a high/low resolution image -> resized back to the original adversarial image si…
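To make the setup concrete, here is a rough sketch of the resize round-trip I mean; `model` and `x_adv` (an NCHW tensor in [0, 1]) are assumed to already exist, and the interpolation settings are just an example:

```python
import torch
import torch.nn.functional as F

def resize_round_trip(x_adv, scale=0.5, mode="bilinear"):
    """Downscale (or upscale) an image batch and resize it back to its original resolution."""
    _, _, h, w = x_adv.shape
    resized = F.interpolate(x_adv, scale_factor=scale, mode=mode, align_corners=False)
    return F.interpolate(resized, size=(h, w), mode=mode, align_corners=False)

# Example check (model and x_adv assumed):
# x_rt = resize_round_trip(x_adv, scale=0.5)
# still_adversarial = (model(x_rt).argmax(1) == model(x_adv).argmax(1))
```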
-
Why is the accuracy for the adversarial samples logged to TensorBoard twice in https://github.com/MadryLab/cifar10_challenge/blob/master/train.py#L76? It seems that these are just duplicates.
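For illustration, this is the kind of pattern I mean (a minimal TF 1.x sketch where the same accuracy tensor is attached to two scalar summaries, so TensorBoard shows two identical curves); the placeholder and tags here are made up, not necessarily the exact ones in train.py:

```python
import tensorflow as tf  # TF 1.x style summaries

# `correct_prediction` stands in for a per-example correctness tensor.
correct_prediction = tf.placeholder(tf.bool, shape=[None])
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# The same tensor logged under two tags produces two identical curves.
tf.summary.scalar('accuracy adv train', accuracy)
tf.summary.scalar('accuracy adv', accuracy)
merged_summaries = tf.summary.merge_all()
```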