Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License

can the model (attacked) be trained with image augmentation? #45

Closed hehaodele closed 2 years ago

hehaodele commented 2 years ago

Hi Harry,

Thanks for your awesome lib. When I use your code, I find that the attack module requires input images in the range [0,1]. Does that mean the model being attacked has to be trained on inputs in the [0,1] range? What if I have a model that was trained on augmented (e.g. normalized) images? Is there a way to include the image transformation in the attack process?

Best, Hao

Harry24k commented 2 years ago

Hi Hao!

The answer is no: the model does not have to be trained on [0,1] inputs. You can use torchattacks with such a model by prepending a normalization layer to it. Please refer to here or torchdefenses.
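A minimal sketch of the normalization-layer approach: keep the attack operating on [0,1] images, and let the wrapped model normalize internally. The `Normalize` class and the mean/std values below are illustrative assumptions, not part of the torchattacks API.

```python
import torch
import torch.nn as nn


class Normalize(nn.Module):
    """Normalizes [0,1] inputs with per-channel mean/std inside the model,
    so adversarial attacks can still work in the [0,1] image space."""

    def __init__(self, mean, std):
        super().__init__()
        # Buffers move with the model across devices but are not trained.
        self.register_buffer("mean", torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std
```

Usage: wrap the trained model as `nn.Sequential(Normalize(mean, std), model)` (with the mean/std used during training, e.g. CIFAR-10 statistics) and pass the wrapped model to the attack. The attack then perturbs raw [0,1] images while the model still sees normalized inputs.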

Sincerely, Harry

hehaodele commented 2 years ago

Thanks, Harry, for your prompt answer.