bethgelab / foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
https://foolbox.jonasrauber.de
MIT License
2.74k stars 425 forks

Error in batch_predictions of BoundaryAttack #272

Closed: VigneshSrinivasan10 closed this issue 5 years ago

VigneshSrinivasan10 commented 5 years ago

Hi,

This is literally my first day using Foolbox, to validate decision-based attacks on my defense model.

I have a very simple MNIST model for which I am using the Boundary Attack.

import numpy as np
import tensorflow as tf
import foolbox
# (assumes `classifier`, `reload_model`, and the `mnist` dataset are defined elsewhere)

num_batches = 10000
batch_size = 1

x_input = tf.placeholder(tf.float32, shape=(batch_size, 28, 28, 1))
logits = classifier(x_input)
with foolbox.models.TensorFlowModel(x_input, logits, bounds=(0, 1)) as model:
  reload_model(model.session)

  for ibatch in range(num_batches):
    bstart = ibatch * batch_size
    bend = min(bstart + batch_size, 10000)
    x_batch = mnist.test.images[bstart:bend, :].reshape([28, 28, 1])
    y_batch = mnist.test.labels[bstart:bend][0]
    preds = model.predictions(x_batch)
    print(np.argmax(preds), y_batch)

    # Find adversarial example
    attack = foolbox.attacks.BoundaryAttack(model)
    adversarial = attack(x_batch, y_batch, verbose=True)
    adv_preds = model.predictions(adversarial)
    print(np.argmax(adv_preds), y_batch)

The code runs successfully on the clean image, and the first print statement outputs 7 7. The Boundary Attack then fails with the error shown below:

Neither starting_point nor initialization_attack given. Falling back to BlendedUniformNoiseAttack for initialization.
Initial spherical_step = 0.01, source_step = 0.01
Using 4 threads to create random numbers
Step 0: 8.15865e-02, stepsizes = 1.0e-02/1.0e-02: 
Step 1: 8.15865e-02, stepsizes = 1.0e-02/1.0e-02:  (took 0.05031 seconds)
... 
... to be concise... 
...
Step 80: 7.33123e-02, stepsizes = 1.5e-02/4.4e-03:  (took 0.06256 seconds)
Step 81: 7.26621e-02, stepsizes = 1.5e-02/4.4e-03: d. reduced by 0.89% (6.5022e-04) (took 0.02302 seconds)
Step 82: 7.26621e-02, stepsizes = 1.5e-02/4.4e-03:  (took 0.03983 seconds)
Step 83: 7.20177e-02, stepsizes = 1.5e-02/4.4e-03: d. reduced by 0.89% (6.4445e-04) (took 0.00670 seconds)
Step 84: 7.20177e-02, stepsizes = 1.5e-02/4.4e-03:  (took 0.03902 seconds)
Step 85: 7.13789e-02, stepsizes = 1.5e-02/4.4e-03: d. reduced by 0.89% (6.3873e-04) (took 0.03290 seconds)
Step 86: 7.07459e-02, stepsizes = 1.5e-02/4.4e-03: d. reduced by 0.89% (6.3307e-04) (took 0.01331 seconds)
Step 87: 7.07459e-02, stepsizes = 1.5e-02/4.4e-03:  (took 0.04007 seconds)
Step 88: 7.07459e-02, stepsizes = 1.5e-02/4.4e-03:  (took 0.04145 seconds)
Step 89: 7.01184e-02, stepsizes = 1.5e-02/4.4e-03: d. reduced by 0.89% (6.2745e-04) (took 0.00544 seconds)
  Boundary too linear, increasing steps:     0.59 (100), 0.03 (30)
  Success rate too low, decreasing source step:  0.59 (100), 0.03 (30)
Step 90: 7.01184e-02, stepsizes = 2.2e-02/4.4e-03:  (took 0.08349 seconds)
Step 91: 6.94965e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (6.2189e-04) (took 0.00353 seconds)
Step 92: 6.88802e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (6.1637e-04) (took 0.02869 seconds)
Step 93: 6.82692e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (6.1091e-04) (took 0.00196 seconds)
Step 94: 6.76638e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (6.0549e-04) (took 0.02187 seconds)
Step 95: 6.70636e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (6.0012e-04) (took 0.00384 seconds)
Step 96: 6.64688e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (5.9480e-04) (took 0.00707 seconds)
Step 97: 6.58793e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (5.8952e-04) (took 0.00663 seconds)
Step 98: 6.52950e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (5.8429e-04) (took 0.03459 seconds)
Step 99: 6.47159e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (5.7911e-04) (took 0.03491 seconds)
Step 100: 6.41419e-02, stepsizes = 2.2e-02/4.4e-03: d. reduced by 0.89% (5.7397e-04) (took 0.01940 seconds)
Initializing generation and prediction time measurements. This can take a few seconds.
    adversarial = attack(x_batch, y_batch, verbose=True)
  File "foolbox/attacks/base.py", line 137, in wrapper
    _ = call_fn(self, a, label=None, unpack=None, **kwargs)
  File "foolbox/attacks/boundary_attack.py", line 155, in __call__
    threaded_gen=threaded_gen)
  File "foolbox/attacks/boundary_attack.py", line 173, in _apply_outer
    return self._apply_inner(pool, *args, **kwargs)
  File "foolbox/attacks/boundary_attack.py", line 372, in _apply_inner
    a, pool, external_dtype, generation_args)
  File "foolbox/attacks/boundary_attack.py", line 935, in initialize_stats
    strict=False, return_details=True)
  File "foolbox/adversarial.py", line 334, in batch_predictions
    predictions = self.__model.batch_predictions(images)
  File "foolbox/models/tensorflow.py", line 138, in batch_predictions
    feed_dict={self._images: images})
  File "tensorflow/python/client/session.py", line 877, in run
    run_metadata_ptr)
  File "tensorflow/python/client/session.py", line 1076, in _run
    str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (2, 28, 28, 1) for Tensor 'Placeholder:0', which has shape '(1, 28, 28, 1)'

I don't understand why the Boundary Attack makes use of batches when I have passed only one image. If running several candidates at a time makes the code more efficient, why does model.batch_predictions throw an error? Any pointers on how to fix this issue and run the attack would be very helpful.

Thanks in advance!

wielandbrendel commented 5 years ago

Your model only takes inputs of batch size 1. Easy solution: call the Boundary attack with the option tune_batch_size=False.

VigneshSrinivasan10 commented 5 years ago

Worked perfectly :) Thanks.

jonasrauber commented 5 years ago

Alternatively, change your model to accept batches. Then the boundary attack can use batches internally and therefore will be faster.

VigneshSrinivasan10 commented 5 years ago

@jonasrauber : thank you for following it up.

I tried to feed in more than one image by setting batch_size=2, but I then get the following error when defining the Foolbox model at this line: with foolbox.models.TensorFlowModel(x_input, logits, bounds=(0, 1)) as model:

ValueError: Can not squeeze dim[0], expected a dimension of 1, got 2 for 'Squeeze' (op: 'Squeeze') with input shapes: [2,10].

It would be great if I could feed in batches at once; it would definitely speed up my pipeline.

Thanks in advance,

jonasrauber commented 5 years ago

No, you shouldn't feed more than one image into the attack. You should change your model to accept an arbitrary number of images, as is normally done (by specifying None as the batch size). The BoundaryAttack can then make use of that, because it needs to test many different candidate images to create a single adversarial for a single input image.
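The shape requirement can be illustrated without TensorFlow. Below is a minimal NumPy stand-in (the classifier and its weights are hypothetical, for illustration only) for a model graph built with a flexible batch dimension, i.e. the equivalent of declaring the placeholder as tf.placeholder(tf.float32, shape=(None, 28, 28, 1)):

```python
import numpy as np

# Hypothetical stand-in for a classifier graph with a flexible batch
# dimension, i.e. a placeholder of shape (None, 28, 28, 1) in TensorFlow.
rng = np.random.RandomState(0)
weights = rng.randn(28 * 28, 10)  # dummy weights for illustration only

def classifier(images):
    # images: (n, 28, 28, 1) -> logits: (n, 10), for any batch size n
    return images.reshape(len(images), -1) @ weights

single = np.zeros((1, 28, 28, 1))      # what the attack starts from
candidates = np.zeros((7, 28, 28, 1))  # an internal candidate batch

print(classifier(single).shape)      # (1, 10)
print(classifier(candidates).shape)  # (7, 10)
```

With a fixed batch dimension of 1, the second call would be impossible, which is exactly the ValueError in the original report; with None, the attack is free to evaluate many candidates per session call.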

VigneshSrinivasan10 commented 5 years ago

I see. Thanks for the clarification.