bethgelab / foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
https://foolbox.jonasrauber.de
MIT License
2.77k stars 427 forks

Training Samples #81

Closed pGit1 closed 6 years ago

pGit1 commented 6 years ago

How do I create training samples from adversarially perturbed original training samples?

To keep it simple: suppose I had 100 training images and wanted to use DeepFool and FGSM to perturb these samples; I should then end up with 200 adversarial samples plus the 100 originals to train on. What is the most efficient way to go about this with this library?

Sample code very much appreciated! :D

jonasrauber commented 6 years ago

Have a look at the sample code in the README. In the case of 100 images and fast attacks like FGSM and DeepFool, basically all you need is to put a loop around the last line:

import foolbox
import keras
import numpy as np
from keras.applications.resnet50 import ResNet50

# instantiate the model (learning phase 0 = inference mode)
keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')
# preprocessing: subtract the per-channel means (BGR order), divide by 1
preprocessing = (np.array([104, 116, 123]), 1)
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255), preprocessing=preprocessing)

# attack to apply to each source image
attack = foolbox.attacks.FGSM(fmodel)

# some_training_images / corresponding_labels: your own RGB images
# (values in [0, 255]) and their integer labels
fgsm_adversarials = []
for image, label in zip(some_training_images, corresponding_labels):
    # reverse the channel axis: the Keras ResNet50 expects BGR input
    adversarial = attack(image[:, :, ::-1], label)
    # the attack returns None if it could not find an adversarial
    fgsm_adversarials.append(adversarial)
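To end up with the 200 adversarials plus 100 originals from the question, the same loop can be repeated with a second attack and the results combined with the originals – a sketch (DeepFoolAttack is Foolbox's DeepFool implementation; failed attacks return None and are filtered out here):

# second attack: DeepFool
deepfool = foolbox.attacks.DeepFoolAttack(fmodel)

deepfool_adversarials = []
for image, label in zip(some_training_images, corresponding_labels):
    adversarial = deepfool(image[:, :, ::-1], label)
    deepfool_adversarials.append(adversarial)

# drop failed attacks (None); adversarials come back in the BGR ordering
# they were attacked in, so reverse the channels again to get RGB
adversarials = [a[:, :, ::-1]
                for a in fgsm_adversarials + deepfool_adversarials
                if a is not None]
training_set = list(some_training_images) + adversarials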

If you want to do this for a much larger set of images or on the fly during training, Foolbox might not be the right tool – performance is not its focus. The strength of Foolbox is its large set of attacks, which makes it easy to reliably test the robustness of models.

pGit1 commented 6 years ago

Makes sense. Thanks for the FAST response. Once Foolbox exposes the fragility of my models, which I expect it will (this is a FANTASTIC tool), I want to re-train on adversarial samples and re-test.

Also, what is this code doing: image[:,:,::-1]? Why do we need to reverse the channel axis of the image? Trying to figure out the intuition.

Thanks again for your help!!

jonasrauber commented 6 years ago

That's just part of the preprocessing expected by the ResNet implementation in Keras, i.e. it expects BGR color channel ordering and channel mean subtraction (done a few lines before that).
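For intuition, ::-1 on the last axis simply reverses the channel order, i.e. it turns an RGB image into a BGR one (and vice versa):

import numpy as np

# a 2x2 "image" with constant R=10, G=20, B=30 channels
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0], rgb[..., 1], rgb[..., 2] = 10, 20, 30

bgr = rgb[:, :, ::-1]   # reverse the channel (last) axis
print(bgr[0, 0])        # [30 20 10] -> channels are now B, G, R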

jonasrauber commented 6 years ago

@pGit1 can this be closed?

pGit1 commented 6 years ago

Absolutely!! Thank you!!!

pGit1 commented 6 years ago

@jonasrauber

Can Foolbox be used on a model that I trained on a different domain than ImageNet? Will the Keras attack take as input any Keras model I build?

jonasrauber commented 6 years ago

@pGit1 Foolbox can be used to attack any machine learning model, nothing is specific to ImageNet. The foolbox.models.KerasModel model wrapper for Keras models should be able to handle any Keras model that follows the conventions of the keras.models.Model class.
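For example, a minimal sketch of wrapping a custom Keras model trained on another domain (the architecture, input shape, and [0, 1] bounds here are hypothetical – adjust them to your model):

import foolbox
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

# a small custom model for 28x28 grayscale inputs (hypothetical)
kmodel = Sequential([
    Conv2D(16, 3, activation='relu', input_shape=(28, 28, 1)),
    Flatten(),
    Dense(10, activation='softmax'),
])
# ... compile and train kmodel on your own data here ...

# wrap it; bounds must match your input range (here: [0, 1])
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 1))
attack = foolbox.attacks.FGSM(fmodel)

# image: one input of shape (28, 28, 1), label: its true class (int)
# adversarial = attack(image, label)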

pGit1 commented 6 years ago

Awesome! Thank you so much!
