robust-ml / robust-ml.github.io

A community-run reference for state-of-the-art adversarial example defenses.
https://www.robust-ml.org/
Creative Commons Attribution Share Alike 4.0 International

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks #13

Closed: max-andr closed this issue 4 years ago

max-andr commented 4 years ago

Name: Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks

Authors: Maksym Andriushchenko, Matthias Hein

Paper: https://arxiv.org/abs/1906.03526

Code: https://github.com/max-andr/provably-robust-boosting

Venue: NeurIPS 2019

Does the code implement the robust-ml API and include pre-trained models: yes (a minimal wrapper sketch follows this list)

Dataset: MNIST, FMNIST

Threat model: Linf (ϵ=0.3), Linf (ϵ=0.1)

Natural (clean) accuracy: 97.32%, 85.85%

Claims: 87.54%, 76.83% provable accuracy

Each pair of numbers above corresponds to the MNIST and FMNIST models, respectively.
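
For context on the robust-ml API field above, here is a minimal sketch of how a tree-ensemble defense could be wrapped for the robustml evaluation harness. It assumes the robustml Python package's `Model` interface (a `dataset` property, a `threat_model` property, and a `classify` method); `load_tree_ensemble` and the `predict` call are hypothetical placeholders rather than functions from the provably-robust-boosting repository, and the ε values follow the threat model listed above.

```python
# Hedged sketch: wrapping a boosted-tree defense in the robustml Model interface.
# `load_tree_ensemble` and `ensemble.predict` are hypothetical placeholders,
# not the actual functions from the provably-robust-boosting code.
import numpy as np
import robustml


class BoostedTreesModel(robustml.model.Model):
    def __init__(self, model_path, epsilon=0.3):
        # MNIST shown here; the FMNIST model would declare its dataset analogously
        self._dataset = robustml.dataset.MNIST()
        # eps=0.3 for the MNIST model, eps=0.1 for the FMNIST model (see above)
        self._threat_model = robustml.threat_model.Linf(epsilon=epsilon)
        self._ensemble = load_tree_ensemble(model_path)  # hypothetical loader

    @property
    def dataset(self):
        return self._dataset

    @property
    def threat_model(self):
        return self._threat_model

    def classify(self, x):
        # x: one example as a numpy array with pixel values in [0, 1]; returns an int label
        x = np.asarray(x).reshape(1, -1)
        return int(self._ensemble.predict(x)[0])  # hypothetical predict call
```

An attack implementing the package's attack interface can then be scored against such a wrapper through the package's evaluation harness, which enforces the declared threat model.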

Thanks.

anishathalye commented 4 years ago

Thank you for the submission! It's on the site now: https://www.robust-ml.org/defenses/

I reworded "provable accuracy" to "certified" for consistency with other items in the table.