Trusted-AI / adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
MIT License

Adding examples for computing robustness metrics #504

Open rezacsedu opened 3 years ago

rezacsedu commented 3 years ago

Re: https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/art/metrics/metrics.py, could you please add examples of computing the different adversarial robustness metrics, e.g. empirical_robustness, CLEVER, loss_sensitivity, and wasserstein_distance?

beat-buesser commented 3 years ago

Hi @rezacsedu, thank you very much for your interest in ART! I think that's a great idea!

We have a notebook for RobustnessVerificationTreeModelsCliqueMethod in https://github.com/Trusted-AI/adversarial-robustness-toolbox/blob/main/notebooks/robustness_verification_clique_method_tree_ensembles_gradient_boosted_decision_trees_classifiers.ipynb

Would you be interested in helping develop new examples for the other metrics? For reference, the workflow in that notebook boils down to roughly the following sketch (MNIST with a small XGBoost ensemble; the hyperparameter values here are illustrative placeholders, see the notebook for the real ones):
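
```python
import numpy as np
import xgboost as xgb

from art.estimators.classification import XGBoostClassifier
from art.metrics import RobustnessVerificationTreeModelsCliqueMethod
from art.utils import load_mnist

# Load MNIST (labels come back one-hot encoded) and flatten the images
# so they can be fed to a tree ensemble.
(x_train, y_train), (x_test, y_test), _, _ = load_mnist()
x_train = x_train.reshape(x_train.shape[0], -1)
x_test = x_test.reshape(x_test.shape[0], -1)

# Small XGBoost ensemble; 4 trees of depth 6 and a 5000-sample subset are
# placeholder choices to keep this quick to run.
model = xgb.XGBClassifier(n_estimators=4, max_depth=6)
model.fit(x_train[:5000], np.argmax(y_train[:5000], axis=1))

classifier = XGBoostClassifier(model=model, nb_features=x_train.shape[1], nb_classes=10)

# Verify robustness bounds for the first 100 test samples.
verifier = RobustnessVerificationTreeModelsCliqueMethod(classifier=classifier)
average_bound, verified_error = verifier.verify(
    x=x_test[:100], y=y_test[:100], eps_init=0.3,
    nb_search_steps=10, max_clique=2, max_level=2,
)
print("Average bound:", average_bound)
print("Verified error at eps:", verified_error)
```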

rezacsedu commented 3 years ago

@beat-buesser, sure, I can try. If I come across something useful, I'll get back to you. As a starting point, here is a rough first sketch of the kind of example I have in mind, using a scikit-learn logistic regression on Iris so it stays self-contained (the toy data, wrapper, and parameter values are just placeholders, not recommendations):
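
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.metrics import clever_u, empirical_robustness, loss_sensitivity, wasserstein_distance

# Toy setup: logistic regression on Iris, wrapped as an ART classifier so
# the gradient-based metrics below have access to loss/class gradients.
x, y = load_iris(return_X_y=True)
x = x.astype(np.float32)
model = LogisticRegression(max_iter=1000).fit(x, y)
classifier = SklearnClassifier(model=model, clip_values=(float(x.min()), float(x.max())))

# Empirical robustness: average minimal perturbation (relative to the input
# norm) found by a fixed attack, here FGSM with eps=0.5 (placeholder value).
er = empirical_robustness(classifier, x, "fgsm", {"eps": 0.5})
print("Empirical robustness (FGSM):", er)

# Loss sensitivity: mean norm of the loss gradient w.r.t. the inputs;
# labels are passed one-hot encoded.
y_one_hot = np.eye(3)[y]
print("Loss sensitivity:", loss_sensitivity(classifier, x, y_one_hot))

# Untargeted CLEVER score for a single sample; the batch counts and radius
# are placeholders, see the clever_u docstring for what they control.
score = clever_u(classifier, x[0], nb_batches=10, batch_size=5, radius=0.3, norm=2)
print("CLEVER (untargeted, sample 0):", score)

# Per-sample Wasserstein distance between the clean inputs and a noisy copy
# (the noisy copy stands in here for adversarial examples).
x_noisy = x + np.random.normal(scale=0.1, size=x.shape).astype(np.float32)
print("Mean Wasserstein distance:", wasserstein_distance(x, x_noisy).mean())
```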

beat-buesser commented 3 years ago

@rezacsedu That's great! Also, let us know anytime if you have any questions about these metrics. We'll try to include something in the upcoming 1.4 release.