I have a custom PyTorch model (derived from nn.Module) and a batch of MNIST images (obtained via torch.utils.data.DataLoader) on which I want to perform an FGSM attack as follows:

perturbed_images = attack(images.numpy(), predicted_labels.numpy())

where images has shape (batch_size, 1, 28, 28). Following the call graph:

a = Adversarial(model, criterion, input_or_adv, label, distance=distance, threshold=threshold) in base.py, since I passed ndarrays
self.predictions(original_image) in adversarial.py
predictions = self.__model.predictions(image) in adversarial.py
return np.squeeze(self.batch_predictions(image[np.newaxis]), axis=0) in models/base.py. At this point image has the new shape (1, batch_size, 1, 28, 28)
predictions = self._model(images) in pytorch.py, which fails because the model expects input of shape (batch_size, 1, 28, 28)

I'm relatively new to Foolbox, so I don't know how to fix this, but it definitely seems wrong. An obvious solution would be to remove [np.newaxis], but then the attack would presumably fail for single (unbatched) images.
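The extra leading axis is easy to reproduce outside Foolbox. A minimal NumPy sketch of the two cases (the batch size here is illustrative):

```python
import numpy as np

# A whole batch, as passed in the call above (batch size chosen for illustration)
batch = np.zeros((64, 1, 28, 28), dtype=np.float32)
# A single image, which is what the library expects here
single = np.zeros((1, 28, 28), dtype=np.float32)

# models/base.py prepends an axis before calling the model:
print(batch[np.newaxis].shape)   # (1, 64, 1, 28, 28) -- invalid 5-D input
print(single[np.newaxis].shape)  # (1, 1, 28, 28)     -- a valid batch of one
```

So passing a full batch through a code path designed for one image is what produces the 5-D tensor the model rejects.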
You should pass a single image and label, not a batch. Batch support will come with Foolbox 2.0. You can already try a prototype by using this PR: https://github.com/bethgelab/foolbox/pull/295
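Until 2.0, one workaround is to loop over the batch and attack each image individually. A sketch of that idea, where attack stands for any Foolbox 1.x-style attack callable taking one image and its integer label (the helper name attack_batch is made up here; Foolbox 1.x attacks return None when no adversarial is found):

```python
import numpy as np

def attack_batch(attack, images, labels):
    """Run a single-input attack on every image of a (batch_size, 1, 28, 28) array.

    `attack` is any callable taking one image of shape (1, 28, 28) and an
    integer label. If the attack returns None (no adversarial found), the
    original image is kept in the output.
    """
    perturbed = []
    for image, label in zip(images, labels):
        adv = attack(image, int(label))
        perturbed.append(image if adv is None else adv)
    # Re-stack the per-image results into one batch array
    return np.stack(perturbed)
```

With a wrapped PyTorch model this would be called as attack_batch(attack, images.numpy(), predicted_labels.numpy()), recovering the batched interface on top of the single-image API.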