Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Refactor unit tests for FGSM, BIM and PGD attacks #270

Closed beat-buesser closed 4 years ago

beat-buesser commented 4 years ago

Refactor unit tests for FGSM, BIM and PGD attacks.
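Since FGSM, BIM and PGD share the same iterated sign-gradient structure, a refactor like this typically pulls the common assertions into one shared check that runs over all three attacks. Below is a minimal, hypothetical sketch of that pattern using plain NumPy stand-ins for the attacks (not ART's actual `FastGradientMethod`, `BasicIterativeMethod`, or `ProjectedGradientDescent` classes); the linear-loss gradient and the epsilon values are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad, eps):
    # Single-step attack: move eps along the sign of the loss gradient.
    return x + eps * np.sign(grad)

def bim(x, grad_fn, eps, eps_step, n_iter):
    # Iterated FGSM, clipping back into the L-inf eps-ball after each step.
    x_adv = x.copy()
    for _ in range(n_iter):
        x_adv = x_adv + eps_step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def pgd(x, grad_fn, eps, eps_step, n_iter, rng):
    # Like BIM, but starting from a random point inside the eps-ball.
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(n_iter):
        x_adv = x_adv + eps_step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Shared test data: a toy linear loss, so the gradient is constant.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = rng.normal(size=8)
grad_fn = lambda x_adv: np.broadcast_to(w, x_adv.shape)
eps = 0.1

# One parametrized check instead of three copy-pasted test bodies:
# every attack must stay inside the L-inf eps-ball around the input.
for name, x_adv in [
    ("fgsm", fgsm(x, grad_fn(x), eps)),
    ("bim", bim(x, grad_fn, eps, eps_step=0.03, n_iter=10)),
    ("pgd", pgd(x, grad_fn, eps, eps_step=0.03, n_iter=10, rng=rng)),
]:
    linf = np.max(np.abs(x_adv - x))
    assert linf <= eps + 1e-9, name
```

In a real test suite the loop would become a `pytest.mark.parametrize` over attack constructors, so adding a new attack means adding one parameter rather than another duplicated test function.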

beat-buesser commented 4 years ago

@killianlevacher Thank you very much for working on this issue!