-
On running a command like:
`!python tangent_attack_hemisphere/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch resnet-50`
I get:
```
Traceback (most recent call last):
File "tangent_atta…
-
The current Randomized Smoothing is a generic method that uses the averaged logits of samples drawn from a Gaussian distribution as the prediction result. However, according to [Certified Adversarial Robus…
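For reference, here is a minimal sketch of the averaged-logits prediction described above, assuming a PyTorch classifier; the names `model`, `sigma`, and `n_samples` are illustrative, not the library's actual API:

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100):
    """Predict by averaging logits over Gaussian-perturbed copies of x.

    Minimal sketch of the generic averaged-logits smoothing described
    above; sigma and n_samples are illustrative defaults.
    """
    with torch.no_grad():
        # Draw n_samples noisy copies of x: shape (n_samples, C, H, W).
        noisy = x.unsqueeze(0) + sigma * torch.randn(n_samples, *x.shape, device=x.device)
        logits = model(noisy)            # (n_samples, num_classes)
        avg_logits = logits.mean(dim=0)  # average over the noise samples
    return avg_logits.argmax().item()
```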
-
When I debug `PGDtraining`, I find that `requires_grad` of the adversarial data is `True`.
Is that right? The input image may not be allowed to require grad.
https://github.com/DSE-MSU/DeepRobust/…
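For context, in a typical PGD implementation the adversarial copy does need `requires_grad=True` while the perturbation is being computed, and is then detached before the training step. A minimal sketch of that pattern (not DeepRobust's actual code; hyperparameters are illustrative):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal l_infinity PGD sketch; eps/alpha/steps are illustrative."""
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)  # gradient w.r.t. the input is needed here
        loss = F.cross_entropy(model(adv), y)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        adv = torch.min(torch.max(adv, x - eps), x + eps).clamp(0, 1)
    return adv.detach()  # detach so the training step sees a plain tensor
```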
-
Thanks for releasing such a rigorous evaluation of existing works on adversarial defenses. It is immensely helpful for getting more clarity on this topic. I wonder what the criterion is to add new models t…
-
When evaluating whether inputs are adversarial, the framework first checks whether the classification of the input matches the ground-truth label. If it does not, it then uses the detection mechanism t…
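A minimal sketch of that two-stage check, assuming the usual convention that a detected input counts as rejected (the `classify` and `detect` names are assumptions, not the framework's actual API):

```python
from typing import Callable

def counts_as_adversarial(classify: Callable, detect: Callable, x, y_true) -> bool:
    """Two-stage evaluation sketched from the description above.

    An input only counts as a successful adversarial example if it is
    misclassified AND the detection mechanism fails to flag it.
    """
    if classify(x) == y_true:
        return False        # correctly classified: the attack failed
    return not detect(x)    # misclassified: success only if undetected
```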
-
Dear Sir,
I have just taken my first step in scientific research, focusing on algorithms for defending against adversarial examples. Recently I read your paper "Adversarial and Clean Data Are Not Twins". I think i…
-
It is a basic observation that, when given strictly more power, the adversary should never do worse: the perturbations allowed under a larger budget strictly contain those allowed under a smaller one, so attack success should be non-decreasing in the budget. However, in Table VII the paper reports that MNIST adversarial examples with their l_infinity norm c…
-
Thanks for your excellent work!
As a junior student, I have learned a lot from it, and I would like to understand it in a little more depth.
In your source code files "white_attack.py" and "evaluate_ad…
-
Hi,
This is Bala. I have a query regarding adversarial attacks.
Is there any adversarial attack whose added noise survives a resize attack? (adversarial image -> converting into …
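One concrete way to pose this question is to check whether the perturbation still fools the model after a resize round-trip. A rough sketch using torchvision (the target `size` is illustrative):

```python
import torch
import torchvision.transforms.functional as TF

def survives_resize(model, adv_image, label, size=112):
    """Check whether an adversarial image (C, H, W) stays adversarial
    after a down-then-up resize round-trip; size is illustrative."""
    _, h, w = adv_image.shape
    small = TF.resize(adv_image, [size, size], antialias=True)
    restored = TF.resize(small, [h, w], antialias=True)
    with torch.no_grad():
        pred = model(restored.unsqueeze(0)).argmax(dim=1).item()
    return pred != label  # still misclassified => the noise survived
```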
-
It looks like the blur function in the blur defense and training code,
https://github.com/google-research/selfstudy-adversarial-robustness/blob/15d1c0126e3dbaa205862c39e31d4e69afc08167/training/train…
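For readers unfamiliar with this family of defenses, the idea is to blur the input before classification in the hope of washing out the perturbation. A generic sketch, not the repository's actual implementation (the kernel size and sigma are illustrative):

```python
import torch
import torchvision.transforms.functional as TF

def blur_defense_predict(model, x, kernel_size=5, sigma=1.0):
    """Classify a (C, H, W) image after Gaussian-blurring it.

    Generic sketch of a blur preprocessing defense; kernel_size and
    sigma are illustrative, not the repository's actual settings.
    """
    blurred = TF.gaussian_blur(x, kernel_size=[kernel_size, kernel_size],
                               sigma=[sigma, sigma])
    with torch.no_grad():
        return model(blurred.unsqueeze(0)).argmax(dim=1).item()
```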