michellerosehb opened this issue 9 months ago
Hi @michellerosehb, I just generated a test and found no problems with FGSM. I suspect something is wrong in part of your code. By the way, the code you provided is quite difficult to read.
I encounter a similar problem when using PGDL2 with eps=0. The problem doesn't always occur, but appears after some epochs of training. In my case, it causes adv_images to take nan values.
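For what it's worth, one common source of nan in L2 attacks is normalizing a gradient (or random-start noise) by its L2 norm without guarding against a zero norm. This is an assumption about the cause here, not a confirmed diagnosis of torchattacks:

```python
import torch

# A zero gradient (e.g. from a saturated loss) makes the normalization 0 / 0.
grad = torch.zeros(1, 3, 32, 32)
norm = grad.flatten(1).norm(p=2, dim=1)
step = grad / norm.view(-1, 1, 1, 1)     # 0 / 0 -> nan
print(torch.isnan(step).any().item())    # True

# Guarding the denominator avoids the nan.
safe = grad / norm.clamp_min(1e-12).view(-1, 1, 1, 1)
print(torch.isnan(safe).any().item())    # False
```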
Forgive me for asking a question first, but why use an eps=0 attack? Mathematically it represents the original image, so why not just use the original image?
As for PGDL2, according to https://github.com/Harry24k/adversarial-attacks-pytorch/issues/161, there is a problem with the PGDL2 algorithm that he is still trying to fix. Perhaps you could wait until the fix is finished before trying the new code, or you could provide code that reproduces the problem to help me find out what happened.
> why use an eps=0 attack?

To check the correctness of the algorithm. With eps=0, it should give the same accuracy as normal training.
Thanks for linking the issue; I will take a look. Though I believe it should just sample the zero vector when eps=0.
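To illustrate that sanity check: a minimal FGSM step in plain PyTorch (a sketch of the standard algorithm, not torchattacks' internal code) should return the input unchanged when eps=0, assuming the input already lies in [0, 1] so the final clamp is a no-op:

```python
import torch
import torch.nn as nn

def fgsm(model, images, labels, eps):
    # Single FGSM step: move each pixel by eps in the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return torch.clamp(adv, 0, 1).detach()

# Dummy CIFAR10-shaped batch in [0, 1] and a tiny stand-in classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))

adv = fgsm(model, x, y, eps=0.0)
print(torch.equal(adv, x))  # True: eps=0 gives zero perturbation
```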
So far, under my local testing (200 test images from CIFAR10), the PGDL2 algorithm does not show an abnormal attack success rate with eps=0. In fact, you can see from the MSE losses that the adversarial example and the original example are actually identical.
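That is what the math predicts: projecting any perturbation onto an L2 ball of radius 0 collapses it to zero, so the adversarial and clean images coincide and their MSE is exactly 0. A minimal sketch of such a projection (an illustration, not torchattacks' exact implementation):

```python
import torch

def project_l2(delta, eps):
    # Project a batch of perturbations onto the L2 ball of radius eps.
    norms = delta.flatten(1).norm(p=2, dim=1).clamp_min(1e-12)  # avoid 0 / 0
    factor = (eps / norms).clamp(max=1.0)                       # shrink only
    return delta * factor.view(-1, 1, 1, 1)

delta = torch.randn(4, 3, 32, 32)        # arbitrary perturbations
projected = project_l2(delta, eps=0.0)
print(projected.abs().max().item())      # 0.0: the perturbation vanishes
```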
Regarding adv_images=nan during training: can you provide runnable code that I can use for testing? I'm not sure yet whether it's a problem with the training code or with torchattacks.
I just ran a test on CIFAR10 using ResNet-18, and not once during epochs 0 to 100 did PGDL2's accuracy show an anomaly. The reason it is not exactly 0 is that some samples are misclassified to begin with.
Any questions
torchattacks.FGSM(model, eps=0) seems to perturb my data, even though eps=0. When testing my original data, I get an accuracy of 81.2%. Once I get predictions after performing an 'attack' with eps=0, my accuracy goes down to 55%. Code attached below. I double-checked whether torchattacks.FGSM(model, eps=0) was at fault by commenting it out and re-checking with the original input data: my accuracy was then 100% again. Also, when trying to de-normalize the output image after torchattacks.FGSM(model, eps=0), I obtain an incorrect image, which implies that the mean and std of the image have been changed: the image has been altered.

Which value of eps gives no perturbation to my image?
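One possible explanation for the symptoms above (an assumption on my part, not something confirmed in this thread): torchattacks expects inputs in [0, 1] and clamps its output to that range. If you pass images that were normalized with a dataset mean and std, many pixel values fall outside [0, 1], so even an eps=0 "attack" changes them via the clamp alone:

```python
import torch

# Hypothetical CIFAR10-style normalization constants for illustration.
mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(3, 1, 1)
std = torch.tensor([0.2470, 0.2435, 0.2616]).view(3, 1, 1)

x = torch.linspace(0, 1, 3 * 32 * 32).view(3, 32, 32)  # raw image in [0, 1]
x_norm = (x - mean) / std         # normalized: values fall outside [0, 1]
clamped = x_norm.clamp(0, 1)      # what a [0, 1]-clamping attack would return

print(torch.equal(clamped, x_norm))  # False: the image was altered by clamping
```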
```python
def test_FGSM(epoch, model):
    """ Train the classifier and calculate loss """
    train_loss, correct, total = 0, 0, 0
    counter = 0
    img_label_adv_advlabel = []
    start_time = time.time()
    model.eval()
```