GiorgosKarantonis / Adversarial-Attacks-with-Relativistic-AdvGAN

Using relativism to improve GAN-based Adversarial Attacks. 🦾
GNU General Public License v3.0
38 stars · 7 forks

the attack accuracy of the baseline #4

Closed · comea23 closed this issue 4 years ago

comea23 commented 4 years ago

Hello, I ran your code and got a result, but there is a big difference between it and the accuracy in the README. My target model is VGG13; its accuracy is 88% on the test set and 95% on the training set. Your code gives 59% on the training set and 23% on the test set. The dataset is CIFAR-10 and the batch size is 800. Could you tell me the parameters of your target model? Thank you.

GiorgosKarantonis commented 4 years ago

Hey, and sorry for the late reply; for some reason I only just saw your issue... For CIFAR-10 the target is resnet32 and I used a batch size of 400 (https://github.com/GiorgosKarantonis/Adversarial-Attacks-with-Relativistic-AdvGAN/blob/master/src/main.py#L91-L104).

Can you give more context about how you got the scores you mention? Is the first pair the accuracy on real data and the second one the accuracy under attack?

Finally, keep in mind that the scores I report in the table are for the models from the MadryLab Challenge, which use several defenses against adversarial attacks; on models without any defenses the success rate is much higher (i.e. lower target accuracy).
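To make the relationship concrete: attack success rate and the reported target accuracy are complements of each other. A minimal sketch of how the two numbers relate (the helper names and the toy predictions below are illustrative, not taken from this repo):

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

def attack_success_rate(adv_preds, labels):
    """An attack 'succeeds' on a sample when the target model
    misclassifies its adversarial version, so the success rate
    is 1 minus the accuracy on adversarial examples."""
    return 1.0 - accuracy(adv_preds, labels)

labels      = [0, 1, 2, 3, 4]
clean_preds = [0, 1, 2, 3, 9]  # model gets 4/5 right on clean inputs
adv_preds   = [5, 1, 7, 8, 9]  # the attack flips most predictions

print(accuracy(clean_preds, labels))           # 0.8
print(attack_success_rate(adv_preds, labels))  # 0.8
```

So a strong attack drives the target's accuracy on adversarial inputs toward zero, which is why defended models (like the MadryLab ones) show much higher residual accuracy than undefended ones.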

GiorgosKarantonis commented 4 years ago

I'm closing the issue for now but feel free to reopen it if you want.