Closed: lepangdan closed this issue 6 years ago
We are not performing random restarts in our code. We are simply starting PGD from a different random point each time. That is, instead of starting PGD from x_nat
we are moving to a random point within the epsilon L_infty ball and performing PGD from there. Could you please elaborate on your question?
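Concretely, the random start amounts to something like the following minimal NumPy sketch (the `pgd_attack` helper, the `model_grad(x, y)` gradient callback, and the [0, 1] pixel range are illustrative assumptions, not the repo's exact code):

```python
import numpy as np

def pgd_attack(x_nat, y, model_grad, epsilon, step_size, num_steps, random_start=True):
    """Sketch of PGD within the L_infty ball of radius epsilon around x_nat.

    model_grad(x, y) is assumed to return the gradient of the loss w.r.t. x.
    """
    if random_start:
        # Random start: move to a uniformly random point inside the epsilon ball.
        x = x_nat + np.random.uniform(-epsilon, epsilon, x_nat.shape)
        x = np.clip(x, 0.0, 1.0)  # keep pixels in the valid range
    else:
        # Start from the natural image itself (this is iterative FGSM).
        x = np.copy(x_nat)

    for _ in range(num_steps):
        grad = model_grad(x, y)
        x = x + step_size * np.sign(grad)                 # signed gradient step
        x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # project back into the ball
        x = np.clip(x, 0.0, 1.0)
    return x
```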
@dtsip I just want to confirm whether I need random restarts, because the paper mentions random restarts but they are not implemented in the code. Another point I want to confirm: is PGD the same as Iterative-FGSM, except for adding random noise to the original image and the random restarts? :D
You don't need random restarts to train robust models. The code already implements a random start by adding noise before running PGD. Simply running train.py as-is will train a robust model. (Random restarts were used in the paper for evaluation only.)
Yes, iterative-FGSM is the same as PGD started from the original image. PGD is a standard method for constrained optimization that has been used widely for a long time; we did not introduce it. In the context of adversarial examples, the L_infinity version of PGD has been referred to as I-FGSM.
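In terms of the sketch above, the only difference is the starting point (hypothetical calls, using the same assumed helper and arguments):

```python
# Iterative FGSM: deterministic start from the natural image.
x_adv_ifgsm = pgd_attack(x_nat, y, model_grad, epsilon, step_size, num_steps,
                         random_start=False)

# PGD as used for adversarial training here: a fresh random start inside the epsilon ball.
x_adv_pgd = pgd_attack(x_nat, y, model_grad, epsilon, step_size, num_steps,
                       random_start=True)
```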
I saw the line
x = x_nat + np.random.uniform(-self.epsilon, self.epsilon, x_nat.shape)
in the function perturb of the class LinfPGDAttack, which adds random noise to the original image, but I did not find any code for random restarts. I am not sure whether the random restart step can be omitted.
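If I understand correctly, restarts would just wrap perturb in an outer loop at evaluation time, roughly like this sketch (the attack.perturb signature and the model_correct check are my assumptions, not code from this repo):

```python
def attack_with_restarts(attack, model_correct, x_nat, y, num_restarts):
    """Run the attack several times, each time from a fresh random start,
    and keep the first adversarial example that fools the model."""
    x_adv = None
    for _ in range(num_restarts):
        x_adv = attack.perturb(x_nat, y)   # each call draws a new random start
        if not model_correct(x_adv, y):    # this restart succeeded
            return x_adv
    return x_adv  # no restart fooled the model; return the last attempt
```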