Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License
4.75k stars · 1.15k forks

Poisoning defense: modified adv. training #1391

Open Nathalie-B opened 2 years ago

Nathalie-B commented 2 years ago

Is your feature request related to a problem? Please describe.
"What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors", https://arxiv.org/pdf/2102.13624.pdf

Describe the solution you'd like
Add a poisoning defence to ART that applies adversarial training on the (potentially poisoned) training data, as described in the paper above.

Describe alternatives you've considered
NA

Additional context
NA
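For reference, the core idea of the paper is that adversarial training can double as a poisoning/backdoor defence: perturbing each training batch adversarially before the gradient update can break the narrow triggers that poisons rely on. Below is a minimal, self-contained sketch of that idea using a single FGSM step on a NumPy logistic regression — all names are illustrative, and this is not ART's actual implementation or API.

```python
import numpy as np

# Hedged sketch of adversarial training as a poisoning defence, in the spirit
# of arXiv:2102.13624. Each batch is adversarially perturbed (one FGSM step)
# before the weight update, instead of training on the raw data as-is.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.1, lr=0.5, epochs=200):
    """Logistic regression trained on FGSM-perturbed inputs (illustrative)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Gradient of the logistic loss w.r.t. the inputs: (p - y) * w
        p = sigmoid(X @ w + b)
        grad_x = np.outer(p - y, w)
        X_adv = X + epsilon * np.sign(grad_x)  # one FGSM step per example
        # Standard gradient update, but computed on the perturbed batch
        p_adv = sigmoid(X_adv @ w + b)
        err = p_adv - y
        w -= lr * X_adv.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Toy data: labels depend on features 0 and 1; a handful of "poisoned"
# points carry a planted trigger in feature 2 with flipped labels.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
X[:10, 2] = 5.0        # planted trigger
y[:10] = 1.0 - y[:10]  # flipped labels on the poisoned points

w, b = adversarial_train(X, y)
clean_acc = ((sigmoid(X[10:] @ w + b) > 0.5) == (y[10:] > 0.5)).mean()
print(f"accuracy on clean points: {clean_acc:.2f}")
```

In ART terms, such a defence would most naturally live alongside the existing trainers in `art.defences.trainer`, crafting the perturbations with one of the library's evasion attacks; the sketch above only shows the training-loop shape, not the integration.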

TS-Lee commented 2 years ago

I will work on this.