Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License

Question about adversarial training #48

Closed Buhua-Liu closed 2 years ago

Buhua-Liu commented 2 years ago

If we specify the attack used in adversarial training outside the training loop, as in the MNIST adversarial training demo, will the model held by the attack be updated along with the training?
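The setup being asked about can be sketched as follows. This is a minimal, self-contained illustration, not the library's code: the `FGSM` class below is a hypothetical stand-in for a torchattacks attack (e.g. `torchattacks.PGD`), which likewise receives the model at construction time.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a torchattacks attack: it stores the model
# it is given at construction time.
class FGSM:
    def __init__(self, model, eps=0.3):
        self.model = model  # stored by reference, not copied
        self.eps = eps

    def __call__(self, images, labels):
        images = images.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(self.model(images), labels)
        loss.backward()
        return (images + self.eps * images.grad.sign()).detach().clamp(0, 1)

model = nn.Linear(4, 2)
atk = FGSM(model)  # attack defined once, outside the training loop
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(2):  # toy training loop
    x, y = torch.rand(8, 4), torch.randint(0, 2, (8,))
    adv = atk(x, y)  # adversarial examples from the *current* model
    opt.zero_grad()
    nn.functional.cross_entropy(model(adv), y).backward()
    opt.step()

# The attack still points at the very same (now-updated) model object.
print(atk.model is model)  # True
```

Because `self.model` is only a reference, every training step that updates `model` is immediately visible to the attack.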

Harry24k commented 2 years ago

Yes, because the model is passed by reference when the attack is initialized. If you want to keep the attack's model fixed during training (for example, adversarial training with black-box adversarial examples), you can pass a copy.deepcopy of the model instead.
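The suggested fix can be sketched like this. It is a minimal demonstration of the mechanism, assuming only plain PyTorch; the commented-out line shows where a torchattacks attack would receive the frozen copy.

```python
import copy
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
frozen = copy.deepcopy(model)  # independent copy: training won't touch it
# atk = torchattacks.PGD(frozen, eps=8/255)  # attack on the fixed copy

# Simulate a training update on the live model.
with torch.no_grad():
    for p in model.parameters():
        p.add_(1.0)

# The deep copy is unaffected by the update.
same = all(torch.equal(a, b)
           for a, b in zip(model.parameters(), frozen.parameters()))
print(same)  # False
```

An attack built on `frozen` keeps generating adversarial examples against the snapshot, while the live `model` continues training.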