Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Inputs for the attacks #514

Closed · matdtr closed this issue 3 years ago

matdtr commented 4 years ago

Hi, I'd like a clarification: when using the attacks in the library, do I have to provide the same kind of input as in the original paper, or can I, for example, give audio inputs to the DeepFool attack?

mathsinn commented 4 years ago

Hi @matdtr, in principle you can provide other types of inputs. E.g. check out these notebooks where PGD is applied to audio and video inputs:

- https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_audio_examples.ipynb
- https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_action_recognition.ipynb

DeepFool should be similarly applicable.
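For reference, a minimal sketch of what this can look like for non-image inputs: a toy 1-D convolutional PyTorch model standing in for an audio classifier, wrapped in `PyTorchClassifier` and attacked with DeepFool. The model, the input shape `(1, 16000)`, and the class count are illustrative assumptions, not taken from the notebooks.

```python
import numpy as np
import torch.nn as nn

from art.attacks.evasion import DeepFool
from art.estimators.classification import PyTorchClassifier

# Toy stand-in for an audio classifier (assumption, not an ART model).
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=9, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)

# Wrap the model; the attack only sees this generic classifier interface.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 16000),   # e.g. 1 second of 16 kHz audio
    nb_classes=10,
)

# DeepFool just needs the wrapped classifier and a NumPy array of inputs.
attack = DeepFool(classifier=classifier, max_iter=50, batch_size=1)
x = np.random.randn(4, 1, 16000).astype(np.float32)  # placeholder audio batch
x_adv = attack.generate(x=x)
print(x_adv.shape)
```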

matdtr commented 4 years ago

Thanks, I saw those notebooks, but when I try to use DeepFool or any other attack I just get a Killed signal, whereas FGSM works on my inputs. However, the adversarial examples are quite similar to my inputs, so the accuracy of my system is almost unchanged.

mathsinn commented 4 years ago

If you get a Killed signal, that's coming from outside of ART - maybe your process ran out of memory? To increase the attack success rate, have you tried increasing the FGSM eps parameter?
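To illustrate both suggestions, a sketch that continues the hypothetical `classifier` and `x` from the snippet above: sweep FGSM's eps and count how many predictions flip, and keep DeepFool's batch_size small to limit peak memory. The eps values are arbitrary examples.

```python
import numpy as np

from art.attacks.evasion import DeepFool, FastGradientMethod

# FGSM: larger eps usually means stronger (but more visible) perturbations.
# First positional argument is the wrapped classifier (keyword name is
# `estimator` in recent ART versions).
for eps in (0.01, 0.05, 0.1, 0.3):
    fgsm = FastGradientMethod(classifier, eps=eps)
    x_adv = fgsm.generate(x=x)
    flipped = np.mean(
        np.argmax(classifier.predict(x_adv), axis=1)
        != np.argmax(classifier.predict(x), axis=1)
    )
    print(f"eps={eps}: fraction of predictions changed = {flipped:.2f}")

# DeepFool: a small batch_size keeps memory usage down if the process
# is being killed by the OS.
deepfool = DeepFool(classifier=classifier, batch_size=1)
x_adv_df = deepfool.generate(x=x[:2])
```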

matdtr commented 4 years ago

I will try to look into it! Yes, I've tried eps values of 0.1, 1, and 10, but out of 10k examples only 800 attacks were successful. Thanks for the help.

One last thing: I saw that with PyTorchClassifier I can choose a loss function; why isn't that possible with KerasClassifier? Also, is it possible to change the loss function used inside the attacks?
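For context on that question, a hedged sketch of where the loss is specified for each wrapper, assuming (as the wrapper design suggests, not stated in this thread) that KerasClassifier picks the loss up from the already-compiled Keras model rather than from a constructor argument. The small models below are placeholders.

```python
import torch.nn as nn
from tensorflow import keras

from art.estimators.classification import KerasClassifier, PyTorchClassifier

# PyTorch: the loss is an explicit constructor argument of the wrapper.
torch_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
torch_clf = PyTorchClassifier(
    model=torch_model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
)

# Keras: compile the model with the desired loss first, then wrap it.
keras_model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(10, activation="softmax"),
])
keras_model.compile(loss="categorical_crossentropy", optimizer="adam")
keras_clf = KerasClassifier(model=keras_model)
```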