Closed obkwin closed 2 years ago
1) For adversarial attacks, the preprocessing has to be part of the model. It's done here: https://github.com/bethgelab/foolbox/blob/master/examples/pytorch_resnet18.py#L11
In the line after that, you can see that the model's input bounds are specified as [0, 1].
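The idea of "preprocessing inside the model" can be sketched in plain numpy (in Foolbox itself this is what the `preprocessing=dict(mean=..., std=..., axis=-3)` argument in the linked example does; the wrapper function and toy model below are made up for illustration):

```python
import numpy as np

# Standard ImageNet statistics (widely used values, shown for illustration).
MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def model_with_preprocessing(images, raw_model):
    """Wrap a raw model so normalization happens inside it: the attack
    then sees and perturbs inputs in the plain [0, 1] space, while the
    underlying network still receives normalized tensors."""
    normalized = (images - MEAN) / STD
    return raw_model(normalized)

# Toy "model": returns its (normalized) input so we can inspect it.
identity = lambda x: x
x = np.full((3, 2, 2), 0.5)  # a constant [0, 1] image
out = model_with_preprocessing(x, identity)
```

Because the normalization lives inside the wrapped model, the attack itself never needs to know the mean/std, and its perturbation budget stays on the [0, 1] scale.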
2) In Foolbox 3, you should pass `epsilons=None` if you want the adversarial example with the minimal adversarial perturbation (this currently only works for attacks like DeepFool, C&W, etc. that minimize the perturbation). Alternatively, for all attacks you can specify one or more explicit epsilons, and you will get the attack success rate for those epsilons (this works for fixed-epsilon attacks like PGD as well as minimization attacks like DeepFool: PGD is automatically run once per epsilon, while DeepFool is run only once).
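The reason DeepFool only needs one run can be sketched with a toy success-rate computation (`success_rates` is a hypothetical helper for illustration, not Foolbox API):

```python
import numpy as np

def success_rates(minimal_norms, epsilons):
    """Toy version of the reporting for a minimization attack: run the
    attack once, record each example's minimal perturbation norm, then
    derive the success rate at every epsilon by thresholding."""
    norms = np.asarray(minimal_norms, dtype=float)
    return [float((norms <= eps).mean()) for eps in epsilons]

# Four examples whose minimal perturbations were found in a single run:
rates = success_rates([0.05, 0.10, 0.30, 0.90], epsilons=[0.1, 0.5, 1.0])
# rates == [0.5, 0.75, 1.0]
```

A fixed-budget attack like PGD has no such per-example minimal norm, which is why it must actually be re-run for each epsilon in the list.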
Does this answer your questions?
Closing due to inactivity.
I have some doubts and hope to get your reply.

1) The images obtained by `ep.astensors(samples(fmodel, dataset="imagenet", batchsize=5))` are only normalized by 255. I think the operation `(images - mean) / std` is needed before running `attack()`.

2) For `epsilons`, should we set `epsilons=[1.0]` for them?
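On the epsilon scale: since the wrapped model's input bounds are [0, 1], epsilons are expressed on that same scale (a common L-inf budget is 8/255, not 8, and 1.0 is the largest possible perturbation). A minimal numpy sketch with a made-up helper:

```python
import numpy as np

def project_linf(images, perturbation, epsilon):
    """Hypothetical helper: with inputs in [0, 1], the L-inf budget
    epsilon lives on that same scale (e.g. 8/255), so the perturbation
    is clipped to [-epsilon, epsilon] and the result back to [0, 1]."""
    delta = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(images + delta, 0.0, 1.0)

x = np.full((3, 2, 2), 0.5)  # a [0, 1] image
adv = project_linf(x, np.ones_like(x), epsilon=8 / 255)
```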