Changgun-Choi closed this issue 2 years ago.
Hi,

I have some questions about the `preprocessing` argument in Foolbox's model classes.

Question 1. I am curious about the order of normalization and bounds. Which is applied first? Since the images are in [0, 1] at the end, I assume 1) normalize, then 2) bounds.
Question 2. Using a custom dataset (related to Question 1).

Since Foolbox provides at most 20 sample images, accuracy over 20 images would not be significant. Therefore, I would like to use a custom dataset via `transforms.Compose`:

```python
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```

However, this converts the images to [0, 1] first and then normalizes them, so the result is no longer within the bounds [0, 1], which is the opposite of Foolbox's preprocessing order. An explanation would be helpful.
Question 3. `clipped_advs` does not have values between 0 and 1.

Should `original_advs` and `clipped_advs` both be in the range [0, 1]? I expected `clipped_advs` to be something like `torch.clamp(original_advs, 0, 1)`.

```python
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)
```

(https://user-images.githubusercontent.com/72928938/161386036-9c4e31f7-df6f-441b-8572-f429e2f0017a.png)
You can preprocess the data yourself or have Foolbox's model do it; it's your choice. In any case, you should provide the correct bounds to Foolbox's model.

`clipped_advs` are not clipped to `bounds`, but to `epsilons`. This is confusing and needs better documentation. Someday soon I will surely get around to it. 😉
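The distinction can be sketched in NumPy. This is an illustrative re-implementation, not Foolbox's actual code: `clip_to_epsilon` is a hypothetical helper that projects a perturbed input back into the L-infinity epsilon-ball around the original, which is what `clipped_advs` reflects; clipping to the model's `bounds` would be a separate step.

```python
import numpy as np

def clip_to_epsilon(x, adv, eps):
    """Project adv back into the L-infinity eps-ball around x (illustrative helper)."""
    return x + np.clip(adv - x, -eps, eps)

x = np.array([0.2, 0.5, 0.9])     # original inputs within bounds (0, 1)
adv = np.array([0.6, 0.1, 1.4])   # raw adversarials produced by an attack

# clipping to epsilons alone can still leave values outside the bounds
clipped = clip_to_epsilon(x, adv, eps=0.3)   # [0.5, 0.2, 1.2]

# enforcing the model bounds would require an additional clip
bounded = np.clip(clipped, 0.0, 1.0)         # [0.5, 0.2, 1.0]
```

Note the last element: the epsilon-clipped value 1.2 is still above the upper bound of 1, which matches the observation that `clipped_advs` need not lie within `bounds`.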
Here are a few points in addition to the previous answer:
1. The bounds are the bounds used internally by Foolbox to search for adversarial examples, and the attacks expect the input to also lie within these bounds. Before passing a sample to the model, Foolbox internally applies the preprocessing set for that model. So, putting it in your words: the bounds come first, and then the preprocessing is applied.
2. The `samples` method of Foolbox only returns a handful of images. These images are also not meant for testing your model on! They are just a means to quickly check that the installation worked and/or to play around with the package before actually attacking a model ;)
3. The clipped adversarial examples should actually be clipped to both the epsilon radius as well as the model's bounds. For which attacks did you see a different behavior @jangop @Changgun-Choi?
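Point 1 above ("first the bounds, then the preprocessing") can be sketched like this. This is a simplified emulation of what the model wrapper does internally, not Foolbox's actual code; `model_forward` is a hypothetical stand-in for the wrapped network, and the ImageNet statistics are an assumption.

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def model_forward(x_preprocessed):
    """Hypothetical stand-in for the wrapped network: returns its input."""
    return x_preprocessed

def fmodel(x):
    # 1) the attack operates on x, which must lie within bounds=(0, 1)
    assert x.min() >= 0.0 and x.max() <= 1.0
    # 2) only immediately before the underlying network is called is the
    #    preprocessing (here: ImageNet-style normalization) applied
    return model_forward((x - mean) / std)

x = np.linspace(0.0, 1.0, 12).reshape(3, 2, 2)  # image in [0, 1], as the attack sees it
out = fmodel(x)                                  # normalized values, possibly outside [0, 1]
```

So the tensor the attack perturbs stays in [0, 1]; normalized (possibly negative) values only ever exist inside the forward pass.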
> 3. The clipped adversarial examples should actually be clipped to both the epsilon radius as well as the model's bounds. For which attacks did you see a different behavior @jangop
At some point in the past, I wanted to figure out the actual difference between `adversarials` and `clipped_adversarials`. I remember following the code and realizing that `bounds` were ignored. Was I mistaken?

Specifically, I found that if `epsilons=None`, then `adversarials == clipped_adversarials`.

Before being returned, `xpc` is passed through `restore_type`, but I don't think that has access to any `bounds`.
Hello all, thank you for the answers!

Questions 1–2: About the answer: so Foolbox takes input lying within bounds [0, 1] and then normalizes it? Then the values would end up outside the bounds, just like what I did manually with `transforms.ToTensor()` followed by `transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])`?

Then, how could I manually preprocess the input in exactly the same way as Foolbox's preprocessing does?
Question 3. I think the clean accuracy (`clean_accuracy = (predictions == labels).float32().mean()`) is a different quantity from the robust accuracy (`robust_accuracy = 1 - success.float32().mean(axis=-1)`), since the latter is based on the success of the attacks. Therefore, when we want to know the accuracy of the model, we should not use the robust accuracy. Or am I misunderstanding it?
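If it helps, the two quantities can be sketched with made-up data (plain NumPy stand-ins for the eagerpy tensors in the Foolbox examples; the predictions and success mask below are invented for illustration):

```python
import numpy as np

labels = np.array([0, 1, 2, 3, 4])
predictions = np.array([0, 1, 2, 0, 4])   # model is wrong on one clean sample

# clean accuracy: fraction of unperturbed inputs classified correctly
clean_accuracy = (predictions == labels).mean()        # 0.8

# success[i, j] == True means the attack fooled the model on sample j at epsilon i
success = np.array([
    [False, True, False, True, False],   # small epsilon
    [True,  True, True,  True, False],   # larger epsilon
])

# robust accuracy per epsilon: fraction of samples the attack did NOT fool
robust_accuracy = 1 - success.mean(axis=-1)            # [0.6, 0.2]
```

So yes, under this reading the clean accuracy measures the model on unperturbed data, while the robust accuracy measures it under attack; they answer different questions.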
Thank you for help!!
Q. An issue while using a custom dataset.

I am using a simple dataset, CIFAR-10, which has 10 labels. However, the output of `model(images)` has 1000 dimensions, and as a result the predictions range over 1000 labels that do not match. Please let me know about this issue! Thank you again.
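One likely cause (an assumption, since the model isn't shown here): an ImageNet-pretrained network ends in a 1000-way classification head, so its logits never line up with CIFAR-10's 10 labels. A common fix is to replace the final layer with a 10-way one and fine-tune. A minimal PyTorch sketch, using a toy backbone rather than a real pretrained model:

```python
import torch
import torch.nn as nn

# toy stand-in for a pretrained backbone that ends in a 1000-way ImageNet head
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512),
    nn.ReLU(),
    nn.Linear(512, 1000),   # ImageNet head: 1000 classes
)

# replace the final layer with a 10-way head for CIFAR-10 (then fine-tune)
model[-1] = nn.Linear(512, 10)

x = torch.randn(4, 3, 32, 32)   # a batch of CIFAR-10-sized images
logits = model(x)               # shape (4, 10): predictions now match the 10 labels
```

For a real torchvision model the same idea applies to its classifier attribute (e.g. the last fully connected layer), but the exact attribute name depends on the architecture.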
As far as I know, we should not normalize before attacks. Does Foolbox also follow this principle?

1. The Foolbox explanation says: bounds [0, 1] -> preprocessing (normalization). However, the image tensor is in [0, 1], which doesn't match the explanation.
2. `transforms.ToTensor()` already puts values in the [0, 1] range, so we don't need to normalize in this case?

```python
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```
ex. `Clipped_Adv`: `[ 0.5029, 0.4851, 0.0167, ..., -1.1999, -1.1302, -0.9559]`
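Values like the ones above fall outside [0, 1] when the inputs were already normalized before the attack. If one really attacks normalized inputs, the valid per-channel range becomes `((0 - mean) / std, (1 - mean) / std)`, which explains the negative entries. A short sketch computing those bounds, assuming the ImageNet statistics from the snippets above (the usual recommendation, per the earlier answers, is instead to keep inputs in [0, 1] and let the model wrapper normalize):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# per-channel range of a [0, 1] image after Normalize(mean, std)
lower = (0.0 - mean) / std   # roughly [-2.118, -2.036, -1.804]
upper = (1.0 - mean) / std   # roughly [ 2.249,  2.429,  2.640]
```

A value such as -1.1999 lies comfortably inside this normalized range, so nothing was clipped incorrectly; the attack was simply operating in normalized space.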