RobustBench / robustbench

RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
https://robustbench.github.io

How to run Square Attack with more iterations? #193

Closed tuningManBin closed 2 weeks ago

tuningManBin commented 2 months ago

Hi, in my work I have found that Square Attack is more effective than the white-box attacks. Can we conclude that there is a gradient masking issue in the model? And how can I further increase the number of iterations for Square Attack, to see whether the robust accuracy can be pushed towards 0%? Here are the running results:

Files already downloaded and verified
Clean accuracy: 87.70%
setting parameters for standard version
using standard version including apgd-ce, apgd-t, fab-t, square.
initial accuracy: 87.70%
apgd-ce - 1/7 - 2 out of 128 successfully perturbed
apgd-ce - 2/7 - 1 out of 128 successfully perturbed
apgd-ce - 3/7 - 2 out of 128 successfully perturbed
apgd-ce - 4/7 - 2 out of 128 successfully perturbed
apgd-ce - 5/7 - 2 out of 128 successfully perturbed
apgd-ce - 6/7 - 4 out of 128 successfully perturbed
apgd-ce - 7/7 - 2 out of 109 successfully perturbed
robust accuracy after APGD-CE: 86.20% (total time 3058.9 s)
apgd-t - 1/7 - 26 out of 128 successfully perturbed
apgd-t - 2/7 - 30 out of 128 successfully perturbed
apgd-t - 3/7 - 22 out of 128 successfully perturbed
apgd-t - 4/7 - 29 out of 128 successfully perturbed
apgd-t - 5/7 - 27 out of 128 successfully perturbed
apgd-t - 6/7 - 24 out of 128 successfully perturbed
apgd-t - 7/7 - 14 out of 94 successfully perturbed
robust accuracy after APGD-T: 69.00% (total time 30577.6 s)
fab-t - 1/6 - 0 out of 128 successfully perturbed
fab-t - 2/6 - 0 out of 128 successfully perturbed
fab-t - 3/6 - 0 out of 128 successfully perturbed
fab-t - 4/6 - 0 out of 128 successfully perturbed
fab-t - 5/6 - 0 out of 128 successfully perturbed
fab-t - 6/6 - 0 out of 50 successfully perturbed
robust accuracy after FAB-T: 69.00% (total time 56743.5 s)
square - 1/6 - 13 out of 128 successfully perturbed
square - 2/6 - 15 out of 128 successfully perturbed
square - 3/6 - 16 out of 128 successfully perturbed
square - 4/6 - 10 out of 128 successfully perturbed
square - 5/6 - 19 out of 128 successfully perturbed
square - 6/6 - 5 out of 50 successfully perturbed
robust accuracy after SQUARE: 61.20% (total time 62989.3 s)
Warning: Square Attack has decreased the robust accuracy of 7.80%. This might indicate that the robustness evaluation using AutoAttack is unreliable. Consider running Square Attack with more iterations and restarts or an adaptive attack. See flags_doc.md for details.
max Linf perturbation: 0.03137, nan in tensor: 0, max: 1.00000, min: 0.00000
robust accuracy: 61.20%
Adversarial accuracy: 61.20%
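
For reference, the evaluation above was produced with a setup roughly like the following sketch (the model loading is only a placeholder standing in for my own trained classifier; the AutoAttack call and epsilon match my run):

```python
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model
from autoattack import AutoAttack

# Placeholder model: in my run this is my own trained classifier, not a zoo model.
model = load_model(model_name='Standard', dataset='cifar10', threat_model='Linf')
model = model.eval().cuda()

# 1000 CIFAR-10 test points (clean accuracy 87.70% -> 877 points enter the attacks).
x_test, y_test = load_cifar10(n_examples=1000)

# Standard AutoAttack at Linf with eps = 8/255 (apgd-ce, apgd-t, fab-t, square).
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test.cuda(), y_test.cuda(), bs=128)
```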

fra31 commented 2 weeks ago

Hi,

sorry for the late reply. You can set the number of iterations similarly to here, after the attack has been instantiated. In this case I'd also suggest using resc_schedule=True (see here for the parameters of the attack).
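
Something along these lines should work (a rough sketch, assuming the usual AutoAttack interface; the query budget below is just an example value, and `model`, `x_test`, `y_test` are your model and data from the run above):

```python
from autoattack import AutoAttack

adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')

# Raise the query budget of Square Attack after the attack has been instantiated
# (20000 is only an example; the standard version uses a smaller default budget).
adversary.square.n_queries = 20000
adversary.square.resc_schedule = True  # rescale the patch-size schedule with the larger budget

# Optionally run only Square Attack, since the other attacks were already evaluated.
adversary.attacks_to_run = ['square']

x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```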

Yeah, I'd say that if Square Attack is more effective than the white-box methods, there might be some form of gradient masking.

tuningManBin commented 2 weeks ago

Your suggestion has been incredibly helpful. I really appreciate your efforts in maintaining the ranking list! :)