Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License

[feature] Combining attacks to achieve SoTA #88

Open Framartin opened 1 year ago

Framartin commented 1 year ago

Some attacks implemented in torchattacks have better results when used in combination with other attacks, as reported in their papers.

For example, the paper proposing DI2-FGSM achieves its best results when it is combined with MI-FGSM ("M-DI2-FGSM"). The same is true for Translation Invariance (TI) and SGM, which are evaluated in combination with DI2-FGSM and MI-FGSM.
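For reference, a sketch of the combined M-DI2-FGSM update, following the usual FGSM-family notation (T(·; p) is the DI random resize-and-pad transform applied with probability p, μ is the MI momentum decay):

```latex
g_{t+1} = \mu \, g_t
  + \frac{\nabla_x J\!\left(T(x_t^{adv}; p),\, y\right)}
         {\left\lVert \nabla_x J\!\left(T(x_t^{adv}; p),\, y\right) \right\rVert_1},
\qquad
x_{t+1}^{adv} = \mathrm{Clip}_x^{\epsilon}\!\left\{ x_t^{adv} + \alpha \cdot \mathrm{sign}(g_{t+1}) \right\}
```

Setting μ = 0 recovers plain DI2-FGSM, and replacing T with the identity recovers MI-FGSM, which is why these techniques compose so naturally.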

One easy solution might be to add momentum to torchattacks.attacks.difgsm.DIFGSM and torchattacks.attacks.tifgsm.TIFGSM, and to add DI to torchattacks.attacks.tifgsm.TIFGSM. But maintaining multiple isolated copies of each technique's implementation could become cumbersome and error-prone at some point. Would it make sense to have a single superclass that implements all of them at once? Or to have a generic BIM attack whose methods could be extended?

To illustrate the issue, I have compiled in the following table the combinations of techniques evaluated in five papers on adversarial example transferability.

| Technique | Combinations of Techniques Evaluated |
| --- | --- |
| MI | MI |
| GN | GN, GN+MI |
| DI | DI, DI+MI |
| TI | TI, TI+MI, TI+DI |
| SGM | SGM, SGM+MI, SGM+DI, SGM+MI+DI |
Harry24k commented 1 year ago

Thank you for your kind explanation. There are a few things that should be considered when merging more than two attacks:

  1. Merging two attacks into a single superclass makes it harder for readers to understand and modify each attack.
  2. The number of arguments can grow very quickly. Indeed, TIFGSM already has 10 arguments.

However, I agree with your suggestion that combining techniques is necessary to achieve SOTA. Then, how about making a new class such as UPGD for transferability? For this, the forward method of each attack would need to be broken down into several class methods.
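The decomposition above might look something like the following. This is only a hypothetical sketch (the `UPGD` name, hook names, and defaults are assumptions, not the library's API): the shared iterative loop lives in one place, and each transfer technique overrides only the hook it needs.

```python
import torch
import torch.nn as nn

class UPGD:
    """Hypothetical sketch: the forward pass is broken into small
    overridable methods, so each transfer technique (DI, TI, MI, ...)
    only replaces the hook it needs instead of copying the whole loop."""

    def __init__(self, model, eps=8 / 255, alpha=2 / 255, steps=10, decay=1.0):
        self.model = model
        self.eps, self.alpha, self.steps, self.decay = eps, alpha, steps, decay

    # --- hooks a subclass (or constructor flags) would override ---------
    def input_transform(self, x):
        # DI would apply random resize-and-pad here; identity by default
        return x

    def grad_transform(self, grad):
        # TI would convolve the gradient with a translation kernel here
        return grad

    def update_momentum(self, grad, momentum):
        # MI-style accumulation of L1-normalized gradients;
        # decay=0 reduces this to plain BIM
        g = grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True).clamp_min(1e-12)
        return self.decay * momentum + g

    # --- shared iterative loop ------------------------------------------
    def forward(self, images, labels):
        loss_fn = nn.CrossEntropyLoss()
        adv = images.clone().detach()
        momentum = torch.zeros_like(images)
        for _ in range(self.steps):
            adv.requires_grad_(True)
            loss = loss_fn(self.model(self.input_transform(adv)), labels)
            grad = torch.autograd.grad(loss, adv)[0]
            momentum = self.update_momentum(self.grad_transform(grad), momentum)
            adv = adv.detach() + self.alpha * momentum.sign()
            delta = torch.clamp(adv - images, min=-self.eps, max=self.eps)
            adv = torch.clamp(images + delta, min=0, max=1).detach()
        return adv
```

With this shape, M-DI2-FGSM is just a subclass overriding `input_transform`, and SGM could be handled orthogonally via backward hooks on the model, which keeps the argument count of any single class manageable.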