Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Problem with PyTorchYolo.py #1796

Open · yassinethr opened this issue 2 years ago

yassinethr commented 2 years ago

Hello,

I think there is an issue with PyTorchYolo.py:

I'm trying to run a RobustDPatch attack with a PyTorchYolo model:

from art.attacks.evasion import RobustDPatch
from art.estimators.object_detection import PyTorchYolo
import torch
from numpy import asarray
from PIL import Image
import requests
from io import BytesIO
import cv2
import numpy as np

response = requests.get('https://ultralytics.com/images/zidane.jpg')

img = asarray(Image.open(BytesIO(response.content)).resize((640, 640)))
img_reshape = img.transpose((2, 0, 1))  # HWC -> CHW (reshape would scramble the image)

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

config = {
            "attack_losses": ["loss_classifier", "loss_box_reg", "loss_objectness", "loss_rpn_box_reg"],
            "cuda_visible_devices": "1",
            "patch_shape": [3, 20, 20],
            "patch_location": [600, 750],
            "crop_range": [0, 0],
            "brightness_range": [1.0, 1.0],
            "rotation_weights": [1, 0, 0, 0],
            "sample_size": 1,
            "learning_rate": 1.0,
            "max_iter": 5000,
            "batch_size": 1,
            "image_file": "zidane.jpg",
            "resume": False,
            "path": "xp/",
        }

image = np.stack([img_reshape], axis=0).astype(np.float32) 
x = image.copy()

detector = PyTorchYolo(model=model, 
                       clip_values=(0, 255), 
                       attack_losses=config["attack_losses"])

attack = RobustDPatch(
    detector,
    patch_shape=config["patch_shape"],
    # patch_location=config["patch_location"],
    crop_range=config["crop_range"],
    brightness_range=config["brightness_range"],
    rotation_weights=config["rotation_weights"],
    sample_size=config["sample_size"],
    learning_rate=config["learning_rate"],
    max_iter=1,
    batch_size=config["batch_size"],
)

patch = attack.generate(x)

I am getting the following error:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/tmp/ipykernel_17271/994346356.py in <module>
----> 1 patch = attack.generate(x)

~/adversarial-robustness-toolbox/art/attacks/evasion/dpatch_robust.py in generate(self, x, y, **kwargs)
    213                     )
    214 
--> 215                     gradients = self.estimator.loss_gradient(
    216                         x=patched_images,
    217                         y=patch_target,

~/adversarial-robustness-toolbox/art/estimators/object_detection/pytorch_yolo.py in loss_gradient(self, x, y, **kwargs)
    350         for loss_name in self.attack_losses:
    351             if loss is None:
--> 352                 loss = output[loss_name]
    353             else:
    354                 loss = loss + output[loss_name]

IndexError: too many indices for tensor of dimension 5

The thing is, in the error message output is a tensor, not a dict, so trying to access it with the loss name as a key doesn't work.
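
For reference, the loop quoted in the traceback sums output[loss_name] over attack_losses, so in training mode the wrapped model has to return a dict of named scalar losses. A minimal illustration of the expected interface (the loss names and values here are placeholders):

import torch

# What PyTorchYolo expects from the model's forward() in training mode:
# a dict mapping loss names to scalar tensors (names must match attack_losses).
output = {"loss_classifier": torch.tensor(0.5), "loss_box_reg": torch.tensor(0.2)}

loss = None
for loss_name in ("loss_classifier", "loss_box_reg"):
    loss = output[loss_name] if loss is None else loss + output[loss_name]
print(loss)  # tensor(0.7000)

# The torch.hub yolov5 model instead returns raw detection tensors in training
# mode, so indexing them with a loss name fails.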

Could you please help with this? Thanks in advance!

beat-buesser commented 2 years ago

Hi @yassinethr That's a good point! We need to improve the documentation of PyTorchYolo. So far we have only tested it with Yolo v3. Because of Yolo's different license, we currently have to ask the user to expose the loss terms of the Yolo model themselves, as in this example code:

import torch

from pytorchyolo import models
from pytorchyolo.utils.loss import compute_loss

from art.estimators.object_detection import PyTorchYolo

class YoloV3(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x, targets=None):
        if self.training:
            outputs = self.model(x)
            # loss is averaged over a batch. Thus, for patch generation use batch_size = 1
            loss, loss_components = compute_loss(outputs, targets, self.model)

            loss_components_dict = {}
            loss_components_dict["loss_total"] = loss
            # loss_components_dict["loss_box_reg"] = loss_components[0]
            # loss_components_dict["loss_object"] = loss_components[1]
            # loss_components_dict["loss_classification"] = loss_components[2]
            # loss_components_dict["loss_total"] = loss_components[3]

            return loss_components_dict
        else:
            return self.model(x)

model = YoloV3(model)

art_object_detector = PyTorchYolo(model=model,
                                  input_shape=(3, 416, 416),
                                  clip_values=(0, 255),
                                  attack_losses=("loss_total",)
#                                  attack_losses=("loss_box_reg",
#                                                 "loss_object",
#                                                 "loss_classification",
#                                                 "loss_total",)
                                  )

This will expose the total loss of Yolo v3. The loss components (the currently commented lines) can be exposed separately too, but this requires removing to_cpu in line 125 of pytorchyolo/utils/loss.py.
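
With that change, the commented lines above would become something like the following sketch (assuming loss_components keeps the box / object / class / total ordering indicated by the comments), and attack_losses on the estimator would then list these component names instead of ("loss_total",):

loss_components_dict = {}
loss_components_dict["loss_box_reg"] = loss_components[0]         # bounding-box regression loss
loss_components_dict["loss_object"] = loss_components[1]          # objectness loss
loss_components_dict["loss_classification"] = loss_components[2]  # classification loss
loss_components_dict["loss_total"] = loss_components[3]           # total loss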

We'll update our documentation soon and add a complete example. Please let me know if this helps?
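
In the meantime, here is a rough end-to-end sketch of how the wrapper above plugs into the RobustDPatch setup from the first post (untested; the cfg/weights paths and the 416x416 input size are placeholders, and YoloV3 is the wrapper class defined above):

import numpy as np
from io import BytesIO

import requests
from PIL import Image
from pytorchyolo import models

from art.attacks.evasion import RobustDPatch
from art.estimators.object_detection import PyTorchYolo

# Load a Yolo v3 model with the pytorchyolo package (paths are placeholders)
yolo = models.load_model("config/yolov3.cfg", "weights/yolov3.weights")

# Wrap it so that training mode returns the loss dict expected by PyTorchYolo
model = YoloV3(yolo)

detector = PyTorchYolo(
    model=model,
    input_shape=(3, 416, 416),
    clip_values=(0, 255),
    attack_losses=("loss_total",),
)

# Same test image as in the first post, resized to the model input size,
# converted HWC -> CHW and given a batch dimension
response = requests.get("https://ultralytics.com/images/zidane.jpg")
img = np.asarray(Image.open(BytesIO(response.content)).resize((416, 416)))
x = np.transpose(img, (2, 0, 1))[None].astype(np.float32)

attack = RobustDPatch(
    detector,
    patch_shape=(3, 20, 20),
    sample_size=1,
    learning_rate=1.0,
    max_iter=100,  # illustrative; increase for a stronger patch
    batch_size=1,  # compute_loss averages over the batch, so keep this at 1
)

patch = attack.generate(x)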

yassinethr commented 2 years ago

Thanks for the reply and support!

Unfortunately, I'm getting a new error:

Training Step: 1
EOT Step: 1
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_3946/2489895574.py in <module>
     93 
     94 
---> 95 patch = attack.generate(x)

~/adversarial-robustness-toolbox/art/attacks/evasion/dpatch_robust.py in generate(self, x, y, **kwargs)
    213                     )
    214 
--> 215                     gradients = self.estimator.loss_gradient(
    216                         x=patched_images,
    217                         y=patch_target,

~/adversarial-robustness-toolbox/art/estimators/object_detection/pytorch_yolo.py in loss_gradient(self, x, y, **kwargs)
    344         :return: Loss gradients of the same shape as `x`.
    345         """
--> 346         output, inputs_t, image_tensor_list_grad = self._get_losses(x=x, y=y)
    347 
    348         # Compute the gradient and return

~/adversarial-robustness-toolbox/art/estimators/object_detection/pytorch_yolo.py in _get_losses(self, x, y)
    324         labels_t = translate_labels_art_to_yolov3(labels_art=y_preprocessed)
    325 
--> 326         loss_components = self._model(inputs_t, labels_t)
    327 
    328         return loss_components, inputs_t, image_tensor_list_grad

~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []

/tmp/ipykernel_3946/2489895574.py in forward(self, x, targets)
     54             outputs = self.model(x)
     55             # loss is averaged over a batch. Thus, for patch generation use batch_size = 1
---> 56             loss, loss_components = compute_loss(outputs, targets, self.model)
     57 
     58             loss_components_dict = {}

/opt/conda/lib/python3.9/site-packages/pytorchyolo/utils/loss.py in compute_loss(predictions, targets, model)
     64 
     65     # Build yolo targets
---> 66     tcls, tbox, indices, anchors = build_targets(predictions, targets, model)  # targets
     67 
     68     # Define different loss functions classification

/opt/conda/lib/python3.9/site-packages/pytorchyolo/utils/loss.py in build_targets(p, targets, model)
    136     targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2)
    137 
--> 138     for i, yolo_layer in enumerate(model.yolo_layers):
    139         # Scale anchors by the yolo grid cell size so that an anchor with the size of the cell would result in 1
    140         anchors = yolo_layer.anchors / yolo_layer.stride

~/.local/lib/python3.9/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1183             if name in modules:
   1184                 return modules[name]
-> 1185         raise AttributeError("'{}' object has no attribute '{}'".format(
   1186             type(self).__name__, name))
   1187 

AttributeError: 'AutoShape' object has no attribute 'yolo_layers'
beat-buesser commented 2 years ago

Which version of Yolo are you using above?

yassinethr commented 2 years ago

I just switched to the V3 version and now it works! (I was still using the torch hub version of V5.)

Thanks for the support!

beat-buesser commented 2 years ago

@yassinethr That's great! Btw, we are working on support for Yolo v5 in ART 1.12.

phanisai22 commented 2 years ago

[quoting @beat-buesser's YoloV3 wrapper example from above]

Hello @beat-buesser, I have been trying to run a PGD attack on YOLOv3 and am facing several issues. Could you provide an example for this? I cannot find documentation or an example for YOLOv3.