Closed Freed-Wu closed 1 year ago
The loss function of each method is a key part of the attack. From that point of view, a customizable loss function for all attacks is not feasible for me right now. However, I agree that a customizable loss can improve the usefulness of the package. How about making a new attack class that can change the loss based on PGD?
How about making a new attack class that can change the loss based on PGD?
Great!
Like
```python
class PGD(Attack):
    def forward(self, images, labels, loss=None):
        r"""
        Overridden.
        """
        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)
        if self.targeted:
            target_labels = self.get_target_label(images, labels)
        if loss is None:
            loss = nn.CrossEntropyLoss()
```
Users can customize the loss, and it will not break backward compatibility.
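To make the compatibility point concrete, here is a sketch of passing a custom loss. The margin loss and the call at the bottom are illustrative, not the library's API; any callable with the `nn.CrossEntropyLoss` call signature would do.

```python
import torch

# Hypothetical drop-in replacement for nn.CrossEntropyLoss: a margin loss
# (true-class logit vs. the best competing logit). Illustrative only.
def margin_loss(logits, labels):
    # Logit of the true class for each sample.
    true = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Largest logit among the other classes (true class masked out).
    masked = logits.scatter(1, labels.unsqueeze(1), float("-inf"))
    other = masked.max(dim=1).values
    # Negative margin: larger when the model is closer to misclassifying.
    return (other - true).mean()

# Hypothetical usage with the modified PGD above:
# atk = PGD(model)
# adv_images = atk(images, labels, loss=margin_loss)  # omit loss -> CE default
```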
Perhaps the optimizer can also be customized. And how about providing a hook function to allow users to record some data to TensorBoard?
a hook function to allow users to record some data
I think this is going to be pretty useful. I also opened an issue (#130) discussing the output format, so that we can collect more information at the end.
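As a sketch of what such a hook could look like (all names here are hypothetical, not torchattacks code), the attack would invoke each registered callback once per iteration with a record the user can forward to TensorBoard:

```python
# Hypothetical hook mechanism: the attack loop emits a dict of metrics to
# every registered callback, which the user may log however they like.
class HookableAttack:
    def __init__(self):
        self._hooks = []

    def register_hook(self, fn):
        # fn(step, record) is called once per attack iteration.
        self._hooks.append(fn)

    def _emit(self, step, record):
        for fn in self._hooks:
            fn(step, record)

    def run(self, steps=3):
        # Stand-in for an attack loop; a real attack would report the
        # current loss, gradient norm, perturbation size, etc.
        for step in range(steps):
            self._emit(step, {"loss": 1.0 / (step + 1)})

logged = []
atk = HookableAttack()
atk.register_hook(lambda step, rec: logged.append((step, rec["loss"])))
atk.run()
```

With a real `SummaryWriter`, the lambda would instead call `writer.add_scalar("attack/loss", rec["loss"], step)`.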
Perhaps the optimizer can also be customized
I am not sure that's how it works. Essentially, gradient-based attacks like BIM, FGSM, and PGD are themselves the optimizers. From a naïve behavioural perspective, an optimizer (in PyTorch) just takes your gradients (and maybe some previous state) and tells you the step to take.
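To illustrate that point: a single PGD iteration is already a fixed-size signed-gradient update followed by a projection, i.e. it fills the role an optimizer would play. This is a sketch of the standard update rule, not the library's implementation:

```python
import torch

# One PGD iteration (illustrative): ascend the loss by a signed-gradient
# step, project back into the eps-ball around the original image, and
# clamp to the valid pixel range.
def pgd_step(x, grad, x_orig, alpha=0.01, eps=0.03):
    x_adv = x + alpha * grad.sign()                          # signed-gradient ascent
    x_adv = torch.clamp(x_adv, x_orig - eps, x_orig + eps)   # project into eps-ball
    return torch.clamp(x_adv, 0.0, 1.0)                      # keep valid image range

x0 = torch.tensor([0.5, 0.5])
g = torch.tensor([1.0, -2.0])
x1 = pgd_step(x0, g, x0)
```

There is no separate `torch.optim` object anywhere in this loop; the attack's update rule is the optimizer.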
Here is a modified version of UPGD following your pull request: https://github.com/Harry24k/adversarial-attacks-pytorch/commit/100047f5f3e41b339fa8c20c2b6d311ded1dd909.
As for customizing the optimizer, it is quite difficult to modify all attacks to support a customizable optimizer. I will leave this as future work.
By default,
Can it be changed to another function to make this package more flexible and customizable?