hendrycks / ss-ood

Self-Supervised Learning for OOD Detection (NeurIPS 2019)
MIT License
263 stars · 31 forks

Question for Adversarial Attack Implementation #21

Closed wjun0830 closed 3 years ago

wjun0830 commented 3 years ago

Hi Hendrycks. It's a pleasure to review your work on OOD detection. Meanwhile, I have a question about the adversarial attack.

adversary = attacks.PGD(epsilon=8./255, num_steps=20, step_size=2./255).cuda()

This code, on line 125 of ss-ood/adversarial/train.py, appears to set up 20-step PGD for adversarial training + auxiliary rotations. (I changed num_steps to 20 to match your reported setting.)
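For context, here is a minimal sketch of what an L-infinity PGD attack with these parameters typically computes. The function name pgd_linf, the assumption of inputs in [0, 1], and the random start are illustrative; the repo's attacks.PGD may differ in details.

import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, epsilon=8./255, num_steps=20, step_size=2./255):
    # Illustrative L-infinity PGD: start from a random point in the epsilon-ball,
    # then take num_steps signed-gradient ascent steps on the cross-entropy loss,
    # projecting back into the ball and the valid pixel range after each step.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0, 1).detach()
    for _ in range(num_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()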

However, the training log looks odd to me.

Epoch  57 | Time   607 | Train Loss 1.6344 | Test Loss 0.843 | Test Error 27.96
Epoch  58 | Time   605 | Train Loss 1.6214 | Test Loss 0.829 | Test Error 26.39
Epoch  59 | Time   594 | Train Loss 1.5835 | Test Loss 0.808 | Test Error 25.44
Epoch  60 | Time   591 | Train Loss 1.6008 | Test Loss 0.802 | Test Error 24.35
Epoch  61 | Time   602 | Train Loss 1.6263 | Test Loss 0.796 | Test Error 26.33
Epoch  62 | Time   611 | Train Loss 1.5923 | Test Loss 0.790 | Test Error 24.62
Epoch  63 | Time   597 | Train Loss 1.5906 | Test Loss 0.783 | Test Error 25.35
Epoch  64 | Time   607 | Train Loss 1.6116 | Test Loss 0.811 | Test Error 25.18

Your reported accuracy is 50.4% under 20-step PGD, but the test error here is far too low and looks similar to the clean error. Could you explain how you ran the code to evaluate with 20-step PGD and 100-step PGD?

Thanks for your great work!

hendrycks commented 3 years ago

That is displaying the clean error.

wjun0830 commented 3 years ago

Yeah, that much is clear, haha. But I think the line below sets the parameters for 20-step PGD, doesn't it?

adversary = attacks.PGD(epsilon=8./255, num_steps=20, step_size=2./255).cuda()

If not, may I ask how to run the 20-step PGD evaluation? Thanks!

hendrycks commented 3 years ago

Try something like https://github.com/hendrycks/pre-training/blob/master/robustness/adversarial/test.py (it might require minimal modifications)
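For what it's worth, a rough sketch of an adversarial evaluation loop that reports the robust (attacked) test error rather than the clean error shown in the training log. This is not the linked script; adversarial_error and attack_fn are illustrative names, and attack_fn stands in for whatever attack you use (e.g., the pgd_linf sketch above).

import torch

def adversarial_error(model, loader, attack_fn, device='cuda'):
    # Evaluate the model on attacked inputs and report the error in percent.
    # Gradients are still needed to craft the attack, so torch.no_grad() is
    # only used for the final forward pass on the adversarial examples.
    model.eval()
    wrong, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack_fn(model, x, y)  # e.g. 20-step PGD
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        wrong += (pred != y).sum().item()
        total += y.size(0)
    return 100.0 * wrong / total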

wjun0830 commented 3 years ago

Oh, Hendrycks, I found something that looks off in the paper. Your table and the code say the parameters (step size, epsilon) are divided by 255, but in the setup you mention the step sizes alpha = 2/256 and 0.3/256.

Shouldn't they be 2/255 and 0.3/255?

hendrycks commented 3 years ago

Yes, it should be 255.