gralliry / Adversarial-Attack-Generation-Techniques

Method of Generating Adversarial Examples By Pytorch in CIFAR10: L-BFGS, FGSM, I-FGSM, MI-FGSM, DeepFool, C&W, JSMA, ONE-PIXEL, UPSET

A question about MI-FGSM #2

Open chenchenczy opened 1 week ago

chenchenczy commented 1 week ago

I found that when running the MI-FGSM code, the final accuracy is independent of the decay_factor parameter, so the results of MI-FGSM and I-FGSM are always identical. How can this be resolved? Is there a problem with the implementation? I set epsilon=0.1, iters=10, and alpha=1.

gralliry commented 4 days ago

Thanks for opening this issue! There may be a bug in mi_fgsm.py. See line 184 of (https://github.com/dongyp13/Non-Targeted-Adversarial-Attacks/blob/master/attack_iter.py) for the reference implementation:

# attack/mi_fgsm.py, line 60
                ...
                grad = pert_image.grad.sign()
                grad = self.decay_factor * grad + grad / torch.norm(grad, p=1)
                pert_image = pert_image + alpha * torch.sign(grad)
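Why decay_factor drops out here can be checked numerically: in this update both terms are built from the *same* current gradient, so `decay_factor * grad + grad / ||grad||_1` is just `grad` rescaled by a positive factor, and `torch.sign` erases that scale. A minimal check (standalone, assumes only PyTorch):

```python
import torch

# For any gradient tensor g and any decay_factor mu >= 0,
#   mu * g + g / ||g||_1  ==  g * (mu + 1 / ||g||_1),
# a positive rescaling of g, so its sign equals sign(g).
# This is why the final accuracy does not depend on decay_factor.
torch.manual_seed(0)
grad = torch.randn(3, 32, 32)  # fake gradient with CIFAR-10 image shape

for mu in (0.0, 0.5, 1.0, 5.0):
    combined = mu * grad + grad / torch.norm(grad, p=1)
    assert torch.equal(torch.sign(combined), torch.sign(grad))

print("sign is independent of decay_factor")
```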

Change to the following. A separate momentum buffer `g`, initialized to zero before the loop, must be carried across iterations; without it, decay_factor only rescales the current gradient and the sign is unchanged:

                ...
                grad = pert_image.grad
                # g is the accumulated momentum; initialize it with
                # g = torch.zeros_like(image) before the iteration loop
                g = self.decay_factor * g + grad / torch.norm(grad, p=1)
                pert_image = pert_image + alpha * torch.sign(g)

I haven't tried this modification yet to confirm it's correct, so please give it a try. Happy coding!
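For reference, here is a self-contained sketch of the whole MI-FGSM loop with the momentum buffer carried across iterations, following the update rule from the repository linked above. Function and parameter names (`model`, `image`, `label`, `epsilon`, `alpha`, `decay_factor`, `iters`) are my own placeholders, not this repo's actual API:

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, image, label, epsilon=0.1, alpha=0.01,
            decay_factor=1.0, iters=10):
    """Sketch of Momentum Iterative FGSM.

    The momentum buffer g persists across iterations; that accumulation
    is the only difference from I-FGSM and is what makes decay_factor matter.
    """
    pert_image = image.clone().detach()
    g = torch.zeros_like(image)  # momentum buffer, initialized once

    for _ in range(iters):
        pert_image.requires_grad_(True)
        loss = F.cross_entropy(model(pert_image), label)
        grad, = torch.autograd.grad(loss, pert_image)

        # normalize the current gradient by its L1 norm, then accumulate
        g = decay_factor * g + grad / grad.abs().sum().clamp_min(1e-12)

        # ascend along the sign of the accumulated momentum
        pert_image = pert_image.detach() + alpha * g.sign()

        # project back into the epsilon L-infinity ball and valid pixel range
        pert_image = image + (pert_image - image).clamp(-epsilon, epsilon)
        pert_image = pert_image.clamp(0, 1)

    return pert_image.detach()
```

With a momentum buffer, changing decay_factor changes the accumulated direction over iterations, so MI-FGSM and I-FGSM should no longer coincide.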