MadryLab / robustness

A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
MIT License

Custom Attacks #85

Open santosh-b opened 3 years ago

santosh-b commented 3 years ago

Is there any way to train with my own custom attacks using the library, such as FGSM?

andrewilyas commented 3 years ago

Hi @santosh-b, yes, there definitely is; this is a common use case. For FGSM specifically you don't need any custom code: just use an l-infinity attack with 1 step, and set the step size equal to epsilon. If you need something more involved, such as a new kind of projection, you can subclass the generic AttackerStep class (https://github.com/MadryLab/robustness/blob/2dabf3bdd8057fdc0718b2f8d8d90d89b1a109df/robustness/attack_steps.py#L15) and then feed your subclass in as the constraint instead of "2" or "inf." The UnconstrainedStep and RandomStep subclasses are good examples of how to do this.
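If it helps, here's a rough sketch of the kind of subclass I mean. To keep the snippet self-contained I've stubbed out the base class with the same attributes the real `AttackerStep` stores (`orig_input`, `eps`, `step_size`, `use_grad`); in your code you would subclass `robustness.attack_steps.AttackerStep` directly. The `project`/`step`/`random_perturb` method names match `attack_steps.py`, but `SignedLinfStep` itself is just an illustrative example, not part of the library:

```python
import torch

class AttackerStep:
    """Stand-in for robustness.attack_steps.AttackerStep, included only so
    this snippet runs on its own. Subclass the real one in practice."""
    def __init__(self, orig_input, eps, step_size, use_grad=True):
        self.orig_input = orig_input
        self.eps = eps
        self.step_size = step_size
        self.use_grad = use_grad

class SignedLinfStep(AttackerStep):
    """Example custom step: move in the sign of the gradient, then project
    back onto the l-inf ball of radius eps around the original input."""
    def project(self, x):
        # Keep the perturbation inside the eps-ball and the image in [0, 1].
        diff = torch.clamp(x - self.orig_input, -self.eps, self.eps)
        return torch.clamp(self.orig_input + diff, 0, 1)

    def step(self, x, g):
        # One signed-gradient ascent step of size step_size.
        return x + self.step_size * torch.sign(g)

    def random_perturb(self, x):
        # Uniform random start inside the eps-ball (like RandomStep does).
        noise = torch.empty_like(x).uniform_(-self.eps, self.eps)
        return self.project(x + noise)
```

You'd then pass `SignedLinfStep` (the class itself) as the `constraint` when constructing the attack, the same way the built-in steps are selected by `"2"` or `"inf"`.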

Hope this helps!
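To make the FGSM point above concrete: with 1 step, step size equal to epsilon, and no random start, the l-infinity attack reduces to the classic signed-gradient update. Here's a standalone torch sketch of that update (`fgsm` is my own helper here, not a library function), just so you can check your configuration against it:

```python
import torch

def fgsm(model, x, y, eps):
    """Classic FGSM: one signed-gradient step of size eps, clamped to [0, 1].
    This is what a 1-step l-inf attack with step_size == eps computes,
    assuming no random start."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

The resulting adversarial example differs from the input by at most eps in every coordinate, which is an easy sanity check to run against whatever attack configuration you set up in the library.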