Open · machanic opened this issue 5 years ago
Because the paper showed that the L2 attack was the strongest, I only implemented the L2 attack.
In my experiments, I found the attack success rate is low on many pictures. Why is the success rate lower than BIM (iterative FGSM) and PGD?
I tested the code only on CIFAR-10, and the success rate was quite high, at least much higher than FGSM's. I chose FGSM as the benchmark because I didn't have running code for other methods back then. I regret to say that I don't have time now to verify its success rate on other datasets, or to tune the hyperparameters comprehensively, because the adversarial example project was unfortunately suspended. Without evidence, my guess is that FGSM bounds its perturbation in the L_infinity norm whereas C&W (L2) minimizes the L2 norm, and the two metrics are not directly comparable. If you were using some L2 variant of FGSM and still found that iterative FGSM was better, that goes beyond my ability to answer.
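As a side note on why the two norms are hard to compare: an FGSM-style perturbation saturates its L_infinity budget in every coordinate, so its L2 norm grows with the square root of the input dimension. A small NumPy sketch (illustrative numbers only, not from the repository):

```python
import numpy as np

# An FGSM-style perturbation fills the L_inf ball: every coordinate is +-eps.
eps, d = 8.0 / 255.0, 3 * 32 * 32   # a typical CIFAR-10 budget and input size
delta = eps * np.sign(np.random.default_rng(0).normal(size=d))

linf = np.max(np.abs(delta))        # = eps
l2 = np.linalg.norm(delta)          # = eps * sqrt(d), roughly 55x larger here
```

So a perturbation that looks tiny under an L_infinity budget can be huge under an L2 budget, which is why success rates across the two attack families aren't directly comparable.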
Maybe the low success rate is because I use PyTorch 0.4 and Python 3.6, which is not your tested version? PyTorch 0.4 eliminated the Variable class and changed other things. I think you'd better test it under PyTorch 0.4 and Python 3.6 using the ImageNet ILSVRC 2012 validation dataset (50,000 pictures).
Is the FGSM you tested the single-step version or the iterative version (also called the Basic Iterative Method, BIM)? The original single-step FGSM performs worse than its multi-step version.
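To make the distinction concrete, here is a minimal NumPy sketch of single-step FGSM versus BIM on a toy logistic model (an illustration under my own assumptions, not the repository's code; `input_grad` uses the closed-form gradient of sigmoid cross-entropy with respect to the input):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(x, y, w, b):
    # Gradient of binary cross-entropy loss w.r.t. the input x: (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    # Single step: one signed-gradient step of size eps.
    return x + eps * np.sign(input_grad(x, y, w, b))

def bim(x, y, w, b, eps, alpha=None, steps=10):
    # Iterative FGSM (BIM): small steps, each projected back
    # into the L_inf ball of radius eps around the clean input.
    alpha = alpha if alpha is not None else eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(input_grad(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

On deep networks the recomputed gradient at each BIM step usually makes it noticeably stronger than single-step FGSM at the same eps; on this linear toy model the two coincide, which is exactly why comparisons must be made on the real model.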
It was the vanilla FGSM. I once attempted to migrate it to PyTorch 0.4.0, but failed; I guessed there were a number of pitfalls ... Certainly it's possible that bugs showed up when comparing the results of FGSM with BIM, since I never tested it against BIM. That said, the project was unfortunately suspended. If you could help migrate the code to PyTorch 0.4.x, I would really appreciate it.
The paper describes two other types of attack: the C&W L1 and L_infinity attacks.