Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Poison Detection - Activation Defense with Keras Generator #515

Open · e117777 opened 3 years ago

e117777 commented 3 years ago

Describe the bug: If a batch from the generator doesn't contain all classes, the segmentation-by-class step returns an empty list for the missing class, and line 671 then fails to read the number of values in that class (nb_activations = np.shape(activation)[1]).
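For illustration, here is a minimal sketch (not ART's actual code path) of why an empty per-class list breaks that line: an empty array has a shape tuple with a single entry, so asking for its second dimension raises an IndexError.

import numpy as np

# Sketch only: the per-class activation list for a class that is absent
# from the batch is empty, so its shape is (0,).
activation = np.asarray([])               # class absent from this batch
nb_activations = np.shape(activation)[1]  # IndexError: tuple index out of range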

To Reproduce

  1. Fit a model with Keras/TensorFlow.
  2. Use a batch size small enough that some batches do not contain every class.

Expected behavior: I would expect it to simply skip that class for that batch (not sure though); see the sketch below.
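Something along these lines, as a hypothetical sketch (the variable names are illustrative, not ART's internals): guard the per-class loop so that classes with no samples in the current batch are skipped.

import numpy as np

# Illustrative data: class 1 has no samples in this batch.
activations_by_class = [np.random.rand(5, 128), np.asarray([]), np.random.rand(3, 128)]

for class_idx, activation in enumerate(activations_by_class):
    if len(activation) == 0:
        continue  # class absent from this batch: skip it instead of crashing
    nb_activations = np.shape(activation)[1]
    print(f"class {class_idx}: {nb_activations} activation features")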



ebubae commented 3 years ago

Thanks for using ART!

Could you show a code snippet of the issue so I can reproduce it? Also, what version of ART are you using? Generator support in Activation Defence was added recently and has since been changed. Are you using v1.3.1?
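For reference, you can check the installed version with:

import art
print(art.__version__)  # e.g. 1.3.1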

e117777 commented 3 years ago
from tensorflow.keras.models import load_model

from art.data_generators import KerasDataGenerator
from art.defences.detector.poison import ActivationDefence
from art.estimators.classification import TensorFlowV2Classifier

# Load the trained model and build the (shuffling) training-data generator.
model = load_model(library.MODEL_LOAD_PATH)
train_data_gen = create_generator(train_dataframe, batch_size=16)

# Wrap the model as an ART classifier.
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=library.NUM_CLASSES,
    input_shape=model.input_shape,
)

# Wrap the Keras generator for ART and run the activation defence on it.
data_gen = KerasDataGenerator(train_data_gen, size=train_data_gen.samples, batch_size=train_data_gen.batch_size)
defense = ActivationDefence(classifier=classifier, x_train=None, y_train=None, generator=data_gen)
defense.detect_poison()

FYI: train_dataframe is a DataFrame containing file paths and labels. I have 15 classes (Yale Faces) and have poisoned the dataset. The model is a TensorFlow v2 model, but the generator comes from tf.keras.preprocessing.image.ImageDataGenerator.

In this snippet the batch size is 16, but because of shuffling I cannot assume that every class is present in each batch. If I change the batch size to the size of my DataFrame, poisoning is detected correctly.
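As a workaround sketch (assuming create_generator simply forwards batch_size to the underlying Keras generator), making each batch span the whole dataset guarantees that every class is present:

train_data_gen = create_generator(train_dataframe,
                                  batch_size=len(train_dataframe))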

I used adversarial-robustness-toolbox==1.3.1

beat-buesser commented 3 years ago

Hi @ebubae, have you been able to reproduce this issue?