Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

in-place gradient runtime errors for object detection attack examples #1136

Closed aaronrmm closed 3 years ago

aaronrmm commented 3 years ago

Describe the bug
I am getting a PyTorch error when running both object detection examples in examples/.

RuntimeError: A view was created in no_grad mode and is being modified inplace with grad mode enabled. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
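As far as I can tell, this comes from PyTorch's rule that views returned by functions which produce multiple views (such as unbind, or iterating over a tensor, which is what torchvision's batch_images effectively does in its zip loop) must not be modified in place once gradient tracking is involved. A minimal sketch of that pattern, independent of ART and only meant to illustrate the class of error, looks roughly like this:

    import torch

    # Hypothetical stand-in for a patched image the attack needs gradients for.
    img = torch.rand(3, 4, 4, requires_grad=True)

    # Zero-filled batch buffer, similar to what GeneralizedRCNNTransform.batch_images
    # builds with new_full(); the buffer itself does not require grad.
    batch = img.new_full((1, 3, 4, 4), 0)

    # Iterating over the batch tensor calls unbind(), which returns multiple views.
    pad_img = list(batch)[0]

    # Copying a gradient-tracked tensor into such a view is an in-place write,
    # which autograd rejects with a RuntimeError like the one quoted above.
    pad_img.copy_(img)

The fix PyTorch suggests is to replace the in-place copy with an out-of-place operation (for example building the padded batch with torch.stack), but that code sits inside torchvision rather than in the ART examples.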

To Reproduce
Steps to reproduce the behavior:

  1. Follow this link to my colab notebook where I have copied the code and where I get the above error: https://colab.research.google.com/drive/1mxZHqEHQFAgwBPxDX500lxkEiIDZy9Sc?usp=sharing
  2. See error
  3. Optionally run the notebook yourself and then read the error

System information (please complete the following information):

Stack Trace

RuntimeError                              Traceback (most recent call last)

<ipython-input-18-8774c755a087> in <module>()
     68 for i in range(config["max_iter"]):
     69     print("Iteration:", i)
---> 70     patch = attack.generate(x)
     71     x_patch = attack.apply_patch(x)
     72 

7 frames

/usr/local/lib/python3.7/dist-packages/art/attacks/attack.py in replacement_function(self, *args, **kwargs)
     73                 if len(args) > 0:
     74                     args = tuple(lst)
---> 75                 return fdict[func_name](self, *args, **kwargs)
     76 
     77             replacement_function.__doc__ = fdict[func_name].__doc__

/usr/local/lib/python3.7/dist-packages/art/attacks/evasion/dpatch_robust.py in generate(self, x, y, **kwargs)
    203                         x=patched_images,
    204                         y=patch_target,
--> 205                         standardise_output=True,
    206                     )
    207 

/usr/local/lib/python3.7/dist-packages/art/estimators/object_detection/pytorch_faster_rcnn.py in loss_gradient(self, x, y, **kwargs)
    245             labels_t = y_preprocessed  # type: ignore
    246 
--> 247         output = self._model(inputs_t, labels_t)
    248 
    249         # Compute the gradient and return

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/generalized_rcnn.py in forward(self, images, targets)
     76             original_image_sizes.append((val[0], val[1]))
     77 
---> 78         images, targets = self.transform(images, targets)
     79 
     80         # Check for degenerate boxes

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/transform.py in forward(self, images, targets)
    108 
    109         image_sizes = [img.shape[-2:] for img in images]
--> 110         images = self.batch_images(images)
    111         image_sizes_list: List[Tuple[int, int]] = []
    112         for image_size in image_sizes:

/usr/local/lib/python3.7/dist-packages/torchvision/models/detection/transform.py in batch_images(self, images, size_divisible)
    213         batched_imgs = images[0].new_full(batch_shape, 0)
    214         for img, pad_img in zip(images, batched_imgs):
--> 215             pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
    216 
    217         return batched_imgs
beat-buesser commented 3 years ago

Hi @aaronrmm Thank you very much for using ART! This issue seems related to #1123 and the most recent version of PyTorch. Please try with torch==1.6.0 and torchvision==0.7.0.
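For example, assuming a pip-based environment such as Colab:

    pip install torch==1.6.0 torchvision==0.7.0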

beat-buesser commented 3 years ago

After taking a closer look and discussing with @lcadalzo, it looks like this RuntimeError appears for batch sizes larger than 1, but not for single images.
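A possible stopgap until this is fixed (untested, and assuming x is the NumPy image batch from your notebook) would be to run the patch generation on a single-image batch, for example:

    # Untested sketch: generate the patch from a one-image batch to avoid the
    # in-place view error, then apply it to the full batch as before.
    for i in range(config["max_iter"]):
        print("Iteration:", i)
        patch = attack.generate(x[:1])
        x_patch = attack.apply_patch(x)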