Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Defense Algorithm #25

Closed akshayag closed 5 years ago

akshayag commented 5 years ago

Hi, I just want to know how to use the defense algorithms built into the toolbox. For example, I want to use Feature Squeezing. Is the following line correct?

adv_smoothing = art.defences.FeatureSqueezing(x_test_adv) # passing the adversarial images generated

Thanks in advance

ririnicolae commented 5 years ago

Hi @akshayag! You first need to create an instance of that class, then call it with the adversarial samples. With your notation:

from art.defences import FeatureSqueezing
feat_sqz = FeatureSqueezing()
adv_smoothing = feat_sqz(x_test_adv)

You can also pass the parameter bit_depth to the FeatureSqueezing constructor to control the number of bits used to encode the data. Alternatively, if you are using a Classifier wrapper (any of KerasClassifier, TFClassifier, MXClassifier, PyTorchClassifier), you do not need to create a FeatureSqueezing instance manually as above. You can just pass defences='featsqueeze<x>' to the wrapper constructor, where <x> is an integer value representing the bit depth. Here is an example of creating a Keras classifier with feature squeezing:

cls = KerasClassifier((0, 1), model, defences='featsqueeze5')

This will ensure that feature squeezing is applied each time before doing prediction.
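For intuition, the squeezing operation itself is just a bit-depth reduction of inputs scaled to [0, 1]. Here is a minimal NumPy sketch of that idea (the function name squeeze_bit_depth is hypothetical, and this is an illustration rather than ART's exact implementation):

```python
import numpy as np

def squeeze_bit_depth(x, bit_depth=5):
    # Snap inputs in [0, 1] onto a grid with 2**bit_depth levels
    # (hypothetical helper illustrating the idea, not ART's code).
    max_val = 2 ** bit_depth - 1
    return np.rint(x * max_val) / max_val

x = np.array([0.0, 0.031, 0.5, 0.97, 1.0])
squeezed = squeeze_bit_depth(x, bit_depth=1)  # with 1 bit, values snap to 0.0 or 1.0
```

With a low bit depth, small adversarial perturbations are rounded away, which is the effect featsqueeze5 in the wrapper example relies on.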