Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks]
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License

Universal Adversarial Perturbations #73

Open riiswa opened 2 years ago

riiswa commented 2 years ago

Hello,

I would like to implement the universal adversarial perturbations (UAP) algorithm in this library. This library was designed to compute per-image perturbations, whereas this method returns a single universal perturbation computed from a whole set of images (the algorithm is built on DeepFool). If you find this algorithm interesting, do you have any advice on adapting it to the architecture of this library?

Harry24k commented 2 years ago

Universal adversarial perturbation (UAP) can be combined with other methods such as DeepFool, FGSM, and PGD. To apply UAP, the size of the perturbation should be (1, F, W, H) instead of (B, F, W, H), where B is the batch size. For DeepFool it might be more difficult, because DeepFool currently uses _forward_indiv. I've been thinking about this problem as well, but haven't yet found a way to keep the code clean.
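A minimal sketch of the shape idea above: a single perturbation of shape (1, C, H, W) broadcasts over a batch of shape (B, C, H, W), so the same delta is applied to every image. This is not the library's API; the update here uses a random stand-in for the loss gradient (since no model is involved) purely to illustrate an FGSM-style aggregated update and the eps clipping, and all shapes and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
B, C, H, W = 4, 3, 8, 8   # hypothetical batch and image dimensions
eps = 8 / 255             # L-infinity budget for the universal perturbation
alpha = 2 / 255           # step size for one update

images = rng.random((B, C, H, W)).astype(np.float32)
delta = np.zeros((1, C, H, W), dtype=np.float32)  # ONE shared perturbation

# Stand-in for per-image loss gradients (a real attack would backprop
# through the model); aggregate across the batch into one shared update.
grad = rng.standard_normal((B, C, H, W)).astype(np.float32)
delta = np.clip(delta + alpha * np.sign(grad.mean(axis=0, keepdims=True)),
                -eps, eps)

# Broadcasting applies the single (1, C, H, W) delta to all B images.
adv = np.clip(images + delta, 0.0, 1.0)
assert adv.shape == (B, C, H, W)
assert delta.shape == (1, C, H, W)
```

In PyTorch the broadcasting behaves the same way, so an attack class could keep one `delta` tensor across `__call__` invocations and update it batch by batch.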