Harry24k / adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks].
https://adversarial-attacks-pytorch.readthedocs.io/en/latest/index.html
MIT License
1.79k stars · 338 forks

[feature] modify loss #86

Closed Freed-Wu closed 1 year ago

Freed-Wu commented 1 year ago

By default,

        loss = nn.CrossEntropyLoss()

Could it be changed to another function, to make this package more flexible and customizable?

Harry24k commented 1 year ago

The loss function of each method is a key part of the attack, so from this point of view a customizable loss function for all attacks is not feasible right now. However, I agree that a customizable loss could improve the usefulness of the package. How about making a new attack class, based on PGD, whose loss can be changed?

Freed-Wu commented 1 year ago

> How about making a new attack class that can change the loss based on PGD?

Great!

Freed-Wu commented 1 year ago

Like

class PGD(Attack):
    def forward(self, images, labels, loss=None):
        r"""
        Overridden.
        """
        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)

        if self.targeted:
            target_labels = self.get_target_label(images, labels)

        # Defaulting to None preserves the current behaviour.
        if loss is None:
            loss = nn.CrossEntropyLoss()
        # ... rest of forward unchanged

The user can customize the loss, and it will not break backward compatibility.
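As a sanity check of the idea, here is a minimal standalone sketch (not the library's actual code; `pgd_forward`, the toy model, and the hyperparameters are all illustrative) showing that a `loss=None` default keeps existing call sites working while allowing a drop-in replacement:

```python
import torch
import torch.nn as nn

def pgd_forward(model, images, labels, eps=8/255, alpha=2/255,
                steps=10, loss=None):
    """Standalone sketch of the proposed signature: `loss` defaults to
    None, so existing callers silently keep CrossEntropyLoss."""
    if loss is None:
        loss = nn.CrossEntropyLoss()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        cost = loss(model(adv), labels)
        grad = torch.autograd.grad(cost, adv)[0]
        # Signed-gradient ascent step, then project back into the
        # eps-ball around the clean images and the valid pixel range.
        adv = adv.detach() + alpha * grad.sign()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1).detach()
    return adv

# Toy model and data: both the default and a custom loss work unchanged.
model = nn.Sequential(nn.Flatten(), nn.Linear(4, 3))
x, y = torch.rand(2, 1, 2, 2), torch.tensor([0, 1])
adv_default = pgd_forward(model, x, y)
adv_margin = pgd_forward(model, x, y, loss=nn.MultiMarginLoss())
```

The second call swaps in a margin-style loss without touching any other code, which is the compatibility argument above.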

Freed-Wu commented 1 year ago

Perhaps the optimizer could also be customized. And how about providing a hook function to allow the user to record some data to TensorBoard?
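One way the suggested hook could look (a sketch only: the `hook` parameter and `pgd_steps` are invented for illustration, not part of torchattacks) is a callable invoked once per step, so the user can log to TensorBoard without modifying the attack:

```python
import torch
import torch.nn as nn

def pgd_steps(model, images, labels, steps=5, alpha=2/255, eps=8/255,
              hook=None):
    """PGD loop with a per-step callback. The `hook` parameter is a
    hypothetical API sketch, not part of torchattacks."""
    loss_fn = nn.CrossEntropyLoss()
    adv = images.clone().detach()
    for step in range(steps):
        adv.requires_grad_(True)
        cost = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(cost, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = (images + (adv - images).clamp(-eps, eps)).clamp(0, 1).detach()
        if hook is not None:
            # e.g. hook = lambda s, c: writer.add_scalar("attack/loss", c, s)
            hook(step, cost.item())
    return adv

losses = []
model = nn.Sequential(nn.Flatten(), nn.Linear(4, 3))
x, y = torch.rand(2, 1, 2, 2), torch.tensor([0, 1])
adv = pgd_steps(model, x, y, hook=lambda s, c: losses.append(c))
```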

cestwc commented 1 year ago

> a hook function to allow user record some data

I think this is going to be pretty useful. I also opened an issue (#130) discussing the output format, so that more information can be collected at the end.

> Perhaps optimizer can also be customized

I am not sure that's how it works. Essentially, gradient-based attacks like BIM, FGSM, and PGD are themselves the optimizers. From a naïve behavioural perspective, an optimizer (in PyTorch) just takes your gradients (and maybe some previous state) and tells you what step to take.
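This point can be made concrete with a sketch (toy model and data, not library code): the whole FGSM attack is a single signed-gradient step on the input, analogous to the step that `torch.optim.SGD` applies to weights.

```python
import torch
import torch.nn as nn

# Toy setup: FGSM *is* the optimizer -- there is no separate
# optimizer object to swap in, just one ascent step on the input.
model = nn.Sequential(nn.Flatten(), nn.Linear(4, 3))
x = torch.rand(2, 1, 2, 2, requires_grad=True)
y = torch.tensor([0, 1])
eps = 8 / 255

cost = nn.CrossEntropyLoss()(model(x), y)
grad = torch.autograd.grad(cost, x)[0]
# The "optimizer.step()" of the attack: one signed step of size eps.
x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()
```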

Harry24k commented 1 year ago

Here is a modified version of UPGD following your pull record: https://github.com/Harry24k/adversarial-attacks-pytorch/commit/100047f5f3e41b339fa8c20c2b6d311ded1dd909.

Harry24k commented 1 year ago

As for customizing the optimizer, it's quite difficult to modify all attacks to support a customizable optimizer. I will leave this as future work.