fra31 / auto-attack

Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
https://arxiv.org/abs/2003.01690
MIT License

normalization with deepfool fmodel #103

Closed · jS5t3r closed this issue 10 months ago

jS5t3r commented 11 months ago

Can I use this fmodel for preprocessing/normalization, as in the DeepFool example (source: https://github.com/bethgelab/foolbox/blob/12abe74e2f1ec79edb759454458ad8dd9ce84939/examples/multiple_attacks_pytorch_resnet18.py#L13)? That way the input to the model is normalized. I have seen other threads where I would have to change the model itself, like https://github.com/fra31/auto-attack/issues/53, but I want to avoid that.

from foolbox import PyTorchModel

# Foolbox applies the normalization inside the wrapper, so AutoAttack sees inputs in [0, 1].
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)
# AutoAttack_mod is a custom variant of AutoAttack (defined in the linked notebook).
adversary = AutoAttack_mod(fmodel, norm='Linf', eps=8/255., log_path='a.log', version='standard')
for it, (img, lab) in enumerate(data_loader):
    x_adv = adversary.run_standard_evaluation(img, lab, bs=128)
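For context, a quick sanity check (not from the original thread) can confirm that the preprocessing is applied. This is a minimal sketch assuming foolbox 3.x, where PyTorchModel is callable on native torch tensors, and that model and fmodel live on the CPU:

import torch

# fmodel(x) should match the plain model applied to normalized inputs.
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
x = torch.rand(4, 3, 224, 224)  # random images in [0, 1]
with torch.no_grad():
    assert torch.allclose(fmodel(x), model((x - mean) / std), atol=1e-5)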

https://colab.research.google.com/drive/1uZrW3Sg-t5k6QVEwXDdjTSxWpiwPGPm2?usp=sharing

ScarlettChan commented 11 months ago

Hello, your email has been received! [automatic reply]

jS5t3r commented 10 months ago

I just added this notebook to the issue: https://colab.research.google.com/drive/1uZrW3Sg-t5k6QVEwXDdjTSxWpiwPGPm2?usp=sharing

fra31 commented 10 months ago

Hi,

as far as I understand, PyTorchModel is a wrapper which performs the pre-processing and then a forward pass of the original model. It seems to be a more general version of the solution in https://github.com/fra31/auto-attack/issues/13, so it should be fine. It also appears to work in the example in your notebook (unless you meant to show something else with it).
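For comparison, here is a minimal sketch of the alternative discussed in https://github.com/fra31/auto-attack/issues/13 and https://github.com/fra31/auto-attack/issues/53, i.e. folding the normalization into the model itself so AutoAttack can operate on inputs in [0, 1]. The class name is illustrative and not part of the auto-attack codebase:

import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        # Buffers move with .to(device) and are saved with the state dict.
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        # Normalize, then run the original classifier.
        return self.model((x - self.mean) / self.std)

wrapped = NormalizedModel(model, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]).eval()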

jS5t3r commented 10 months ago

OK, thanks. Yes, I just wanted to verify it via my notebook. I am going to close this issue.