bethgelab / foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
https://foolbox.jonasrauber.de
MIT License

Questions about pytorch_resnet18.py #550

Closed obkwin closed 2 years ago

obkwin commented 4 years ago

I have two questions and hope to get your reply.

  1. The images obtained by `ep.astensors(samples(fmodel, dataset="imagenet", batchsize=5))` are only scaled by 255. I think the operation `(images - mean) / std` is needed before running `attack()`.
  2. Some attacks like DeepFool and CW do not need the `epsilons` parameter, so should we set `epsilons=[1.0]` for them?
jonasrauber commented 4 years ago

1) For adversarial attacks, the preprocessing has to be part of the model. It's done here: https://github.com/bethgelab/foolbox/blob/master/examples/pytorch_resnet18.py#L11 In the line after that, you can see that the model's input range is specified as [0, 1].
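To illustrate the point, here is a minimal sketch in plain NumPy of why the normalization has to live inside the model rather than in the data pipeline: the attack must operate on raw `[0, 1]` images so that its perturbation budget and clipping are meaningful in pixel space. The `NormalizedModel` wrapper and `raw_model` names are illustrative, not Foolbox API.

```python
import numpy as np

# Standard ImageNet statistics, reshaped for per-channel (C, H, W) broadcasting.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
IMAGENET_STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)


class NormalizedModel:
    """Hypothetical wrapper: (x - mean) / std happens inside the forward pass.

    The attack then only ever sees raw inputs in [0, 1], which is what
    bounds=(0, 1) communicates to Foolbox.
    """

    def __init__(self, raw_model, mean, std):
        self.raw_model = raw_model
        self.mean = mean
        self.std = std

    def __call__(self, x):
        # Normalization is part of the model, invisible to the attack.
        return self.raw_model((x - self.mean) / self.std)


# A stand-in "network" that returns its input, so we can check the wiring.
identity = lambda x: x
fmodel = NormalizedModel(identity, IMAGENET_MEAN, IMAGENET_STD)

x = np.full((3, 2, 2), 0.5)  # an "image" in [0, 1]
out = fmodel(x)              # normalized inside the model
```

In Foolbox 3 itself this is what the `preprocessing=dict(mean=..., std=..., axis=-3)` argument to `fb.PyTorchModel(model, bounds=(0, 1), preprocessing=...)` does in the linked example.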

2) In Foolbox 3, you should pass `epsilons=None` if you want the adversarial example with the minimal adversarial perturbation (currently this only works for attacks like DeepFool, CW, etc. that minimize the perturbation). For all attacks, you can always specify one or more explicit epsilons, and you will get the attack success rate for each of them (this works for attacks like PGD as well as attacks like DeepFool — it will automatically run PGD once per epsilon and DeepFool only once).
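A small sketch of what "success rate per epsilon" means, assuming (hypothetically) that we already have the L∞ sizes of the perturbations an attack found per sample. For a minimization attack like DeepFool or CW, one run suffices and each epsilon acts as a threshold on the found perturbation sizes; a fixed-epsilon attack like PGD would instead be re-run per epsilon. The `found_norms` values are made-up illustration data.

```python
import numpy as np

# Per-sample L-inf perturbation sizes found by a minimization attack.
# np.inf marks a sample where the attack failed entirely (made-up data).
found_norms = np.array([0.002, 0.01, 0.05, np.inf, 0.0005])

# The epsilons we want success rates for.
epsilons = [0.001, 0.01, 0.1]

# One row per epsilon: a sample counts as successfully attacked at a given
# epsilon if the found perturbation fits within that budget.
success = np.array([found_norms <= eps for eps in epsilons])

# Fraction of successfully attacked samples per epsilon.
success_rate = success.mean(axis=-1)  # → array([0.2, 0.6, 0.8])
```

This is why specifying a list of epsilons is cheap for DeepFool (one run, several thresholds) but multiplies the cost of PGD (one run per epsilon).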

Does this answer your questions?

zimmerrol commented 2 years ago

Closing due to inactivity.