Closed matdtr closed 3 years ago
Hi @matdtr, in principle you can provide other types of inputs. E.g. check out these notebooks where PGD is applied to audio and video inputs: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_audio_examples.ipynb https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/adversarial_action_recognition.ipynb
DeepFool should be applicable in a similar way.
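To illustrate what DeepFool computes, independent of the input modality, here is a minimal numpy sketch of its closed form for a binary *linear* classifier f(x) = w·x + b (the general attack iterates this linearization). This is a toy illustration, not ART's implementation; the function name and `overshoot` default are assumptions.

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02):
    """Minimal adversarial perturbation for a binary linear
    classifier f(x) = w.x + b (DeepFool's closed form).
    The perturbation r projects x onto the decision boundary,
    then `overshoot` pushes it slightly past it."""
    f = np.dot(w, x) + b
    r = -(f / np.dot(w, w)) * w
    return x + (1.0 + overshoot) * r

# toy check in 2D: the sign of f flips after the attack
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])                      # f(x) = 1.5 > 0
x_adv = deepfool_linear(x, w, b)
print(np.sign(np.dot(w, x) + b))              # original side of the boundary
print(np.sign(np.dot(w, x_adv) + b))          # flipped side
```

For a deep network, DeepFool repeats this step on the local linearization of the model until the predicted class changes, which is why it typically finds much smaller perturbations than FGSM.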
Thanks, I saw those notebooks, but when I try to use DeepFool or any other attack I just get a Killed signal, whereas FGSM works on my inputs. However, the adversarial examples it produces are quite similar to my inputs, so the accuracy of my system is almost unchanged.
If you get a Killed signal, that's an exception outside of ART - maybe your process ran out of memory? To increase the attack success rate, have you tried increasing the FGSM `eps` parameter?
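The effect of `eps` can be seen in a self-contained numpy sketch of FGSM on a logistic model (this is an illustration of the attack's math, not ART's API; the function name and toy weights are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_linear(x, y, w, b, eps):
    """FGSM on a logistic model p = sigmoid(w.x + b), label y in {-1, +1}.
    The gradient of the logistic loss w.r.t. x is -y * (1 - sigmoid(y*f)) * w,
    and FGSM perturbs each input feature by eps in the sign of that gradient."""
    f = np.dot(w, x) + b
    grad = -y * (1.0 - sigmoid(y * f)) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])            # f(x) = 1.0, correctly classified as y = +1
for eps in (0.1, 0.5, 1.0):         # larger eps -> larger perturbation
    x_adv = fgsm_linear(x, 1.0, w, b, eps)
    print(eps, np.dot(w, x_adv) + b)
```

Here a small `eps` leaves the logit positive (the prediction is unchanged), while a larger `eps` drives it negative and flips the prediction, which matches the observation that weak FGSM perturbations barely move the accuracy.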
I will try to look into it! Yes, I've tried `eps` values of 0.1, 1, and 10, but out of 10k examples only 800 attacks were successful. Thanks for the help.
One last thing: I saw that with PyTorchClassifier I can choose a loss function. Why isn't that possible with KerasClassifier? Also, is it possible to change the loss function inside the attacks?
Hi, I'd like a clarification: to use the attacks in the library, do I have to provide the same kind of input as in the original paper? Or can I, for example, feed audio inputs to the DeepFool attack?