Open Framartin opened 1 year ago
Hi @Framartin, long time no see 🤪 haha. I have the same question here https://github.com/Harry24k/adversarial-attacks-pytorch/issues/123, and here is Harry's explanation:
First, the random start of PGDL2 follows some previous work (but I cannot remember the source code. Sorry). To my knowledge, there is no standard method for initializing the random noise at the beginning. I think there are several works investigating the importance of the random noise.
Oh, and could you also please fix the low PGDL2 attack success rate (https://github.com/Harry24k/adversarial-attacks-pytorch/issues/142)? I cannot figure out what the problem is. 🥲
Thanks for your prompt reply! I was busy preparing and defending my PhD thesis during the last few months 😄
`PGDL2` does sample inside the L2 ball of radius epsilon (i.e., the L2 norm of the random vector is less than epsilon). The issue is that it does not sample *uniformly* in the L2 ball, so the behaviour does not match the original article by Madry et al., which samples uniformly in the feasible space.
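To make the bias concrete, here is a small framework-agnostic sketch (in NumPy rather than PyTorch; `biased_ball_sample` is a hypothetical helper mimicking the scheme described in this issue, not the library's actual code). With the described scheme, the expected L2 norm of the perturbation is about epsilon/2, whereas a uniform sample from a high-dimensional L2 ball has an expected norm close to epsilon:

```python
import numpy as np

def biased_ball_sample(dim, eps, rng):
    """Mimic the sampling scheme described above (hypothetical helper):
    uniform direction on the sphere, then radius ~ Uniform(0, eps)."""
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    return rng.uniform(0.0, eps) * direction

rng = np.random.default_rng(0)
norms = [np.linalg.norm(biased_ball_sample(1000, 1.0, rng))
         for _ in range(2000)]
# The mean norm sits around eps/2, far from the outer sphere where
# a truly uniform high-dimensional sample would concentrate.
print(np.mean(norms))  # ≈ 0.5 for eps = 1.0
```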
✨ Short description of the bug [tl;dr]
The current implementation of the random start of `PGDL2` does not sample uniformly in the L2 ball (as done by the original paper). Currently, `PGDL2` first samples a random direction (a normalized vector), and then samples a radius uniformly between 0 and 1. This scheme does not sample uniformly in the L2 ball: under it, the probability of sampling in the orange ring illustrated in the figure below is the same as the probability of sampling in the blue ring, so the probability is not proportional to the ring's area. This bug can be a real issue because, in high-dimensional balls, the probability mass concentrates on the outer sphere: the expected L2 norm of a uniformly sampled perturbation tends to epsilon as the number of dimensions grows to infinity.
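For reference, a standard way to sample uniformly from the L2 ball of radius epsilon is to draw a uniform direction and scale it by epsilon times U^(1/d), where U is uniform on [0, 1] and d is the dimension, so that P(radius ≤ t) is proportional to t^d, matching the volume of the sub-ball of radius t. A minimal NumPy sketch (`sample_uniform_l2_ball` is a hypothetical helper for illustration, not the patch I plan to submit):

```python
import numpy as np

def sample_uniform_l2_ball(dim, eps, rng):
    """Sample one point uniformly from the L2 ball of radius eps in R^dim.

    Direction: a normalized Gaussian vector (uniform on the sphere).
    Radius: eps * U**(1/dim), which makes the radius CDF proportional
    to t**dim, i.e. to the volume of the sub-ball of radius t.
    """
    direction = rng.standard_normal(dim)
    direction /= np.linalg.norm(direction)
    radius = eps * rng.uniform() ** (1.0 / dim)
    return radius * direction

rng = np.random.default_rng(0)
norms = [np.linalg.norm(sample_uniform_l2_ball(1000, 1.0, rng))
         for _ in range(2000)]
# In high dimension the mass concentrates near the sphere:
# E[norm] = eps * d / (d + 1), so ≈ 0.999 * eps here.
print(np.mean(norms))
```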
I should be able to work on resolving this issue in the following days.
💬 Detailed code and results
The code from `torchattacks/attacks/pgdl2.py`, starting at line 58: