Caryox / adversarial-robustness

Robustness of Adversarial Neural Networks

Implement and test the foolbox attacks #8

Closed: daniel-knape closed this issue 2 years ago

daniel-knape commented 2 years ago

For example:

Just test some attacks.

Test whether the implementation works and document how it works:
https://foolbox.readthedocs.io/en/stable/index.html
https://foolbox.readthedocs.io/en/v2.3.0/user/examples.html
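
Not part of the original comment, but as a rough sketch of what "just test some attacks" could look like with the foolbox 3.x API (the stable docs linked above). The model, data, and epsilon values below are stand-ins so the snippet runs on its own, not code from the repo:

```python
# Minimal sketch of running a foolbox attack, assuming foolbox >= 3.
import torch
import torch.nn as nn
import foolbox as fb

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in classifier and data; in the project these would be our trained
# model and real test batches.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval().to(device)
images = torch.rand(8, 1, 28, 28, device=device) * 2 - 1   # scaled to [-1, 1]
labels = torch.randint(0, 10, (8,), device=device)

fmodel = fb.PyTorchModel(model, bounds=(-1, 1), device=device)

attack = fb.attacks.LinfPGD()   # other attacks (FGSM, ...) follow the same call pattern
epsilons = [0.01, 0.03, 0.1]
raw_advs, clipped_advs, success = attack(fmodel, images, labels, epsilons=epsilons)

# success[i, j] is True if epsilon i fooled the model on image j
print("success rate per epsilon:", success.float().mean(dim=-1))
```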

daniel-knape commented 2 years ago

13h estimated

daniel-knape commented 2 years ago

Can this be used for the input directly? We would need the perturbed input from the attacks for ARGAN's Xr
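
For what it's worth, the perturbed images come straight out of the attack call, so they could in principle be stored and fed in as Xr. A hedged sketch, reusing fmodel, images, and labels from the sketch above (the name x_r is ours, not from the repo):

```python
import foolbox as fb

# Assumes fmodel, images, labels as defined in the sketch above.
attack = fb.attacks.FGSM()
_, clipped_advs, success = attack(fmodel, images, labels, epsilons=0.1)

# With a scalar epsilon, clipped_advs is a single tensor with the same shape
# as images: these are the perturbed inputs that could serve as ARGAN's Xr.
x_r = clipped_advs.detach()
```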

daniel-knape commented 2 years ago

Have a look at https://github.com/Caryox/adversial-robustness/blob/8bce87c2cb73a155602c4b854e6893d37633cafc/src/APE-GAN/generate_adversarial_examples.py#L80-L94

Be aware that you need to iterate over each image individually! Use enumerate like in line 82!
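
The linked lines are not reproduced here, but the per-image loop being described might look roughly like this (the loader, epsilon, and list names are assumptions; fmodel and device as in the first sketch):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import foolbox as fb

# Dummy dataset as a stand-in for the real test set.
dataset = TensorDataset(torch.rand(32, 1, 28, 28) * 2 - 1,
                        torch.randint(0, 10, (32,)))
loader = DataLoader(dataset, batch_size=1)   # one image per step

attack = fb.attacks.LinfPGD()
adversarial_examples = []
for idx, (image, label) in enumerate(loader):          # enumerate, as suggested above
    image, label = image.to(device), label.to(device)
    _, adv, _ = attack(fmodel, image, label, epsilons=0.1)
    adversarial_examples.append(adv.detach().cpu())
```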

Caryox commented 2 years ago

How do we use "fmodel = foolbox.models.PyTorchModel(model, bounds=(-1, 1), device=device)" if the ensemble uses jury voting (i.e. we use 3 standalone classifiers)?
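
Not an answer from this thread, just one possible direction: gradient-based foolbox attacks need a single differentiable forward pass, and a hard jury vote is not differentiable. One option is to wrap the three classifiers in a module that averages their logits, attack that surrogate, and then check the resulting adversarial examples against the real voting ensemble. A sketch under that assumption (class and variable names are ours):

```python
import torch
import torch.nn as nn
import foolbox as fb

class LogitAveragingEnsemble(nn.Module):
    """Differentiable surrogate for a jury-voting ensemble: averages member logits."""

    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        # A hard majority vote has no useful gradient, so we average the raw
        # logits instead; foolbox then attacks this combined model.
        return torch.stack([m(x) for m in self.members]).mean(dim=0)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Placeholder members; in the project these would be the 3 trained classifiers.
members = [nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)) for _ in range(3)]
ensemble = LogitAveragingEnsemble(members).eval().to(device)

fmodel = fb.PyTorchModel(ensemble, bounds=(-1, 1), device=device)
```

The adversarial examples produced against this surrogate would then have to be re-evaluated with the actual jury voting to see whether they still fool the ensemble.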