robust-ml / robust-ml.github.io

A community-run reference for state-of-the-art adversarial example defenses.
https://www.robust-ml.org/
Creative Commons Attribution Share Alike 4.0 International

Added DiffAI #1

Status: Closed (mmirman closed this issue 5 years ago)

mmirman commented 6 years ago

Name: Differentiable Abstract Interpretation for Provably Robust Neural Networks

Authors: Matthew Mirman, Timon Gehr, Martin Vechev

Paper: https://www.sri.inf.ethz.ch/papers/icml18-diffai.pdf

Code: https://github.com/eth-sri/diffai

Venue: ICML 2018

Does the code implement the robust-ml API and include pre-trained models: no

Dataset: MNIST

Threat model: $\ell_\infty$, $\epsilon = 0.1$

Natural accuracy: 99%

Claims: 96.4% of test inputs proven robust

anishathalye commented 6 years ago

Thank you for submitting this -- we'd love to include it in the listing.

We require that both defenses and attacks implement the robustml interface. Would you be able to add this? It usually takes only a few minutes and a few dozen lines of code (see an example here).
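For reference, a defense typically satisfies the interface by subclassing `robustml.model.Model` and exposing a dataset, a threat model, and a `classify` method. The following is a minimal sketch for an MNIST defense under the $\ell_\infty$, $\epsilon = 0.1$ threat model claimed above, assuming the `robustml` package's `model`, `dataset`, and `threat_model` classes; the class name and the network-loading helper are hypothetical placeholders, not part of the DiffAI code.

```python
import numpy as np
import robustml


class DiffAIModel(robustml.model.Model):
    """Hypothetical robustml wrapper around a DiffAI-trained MNIST network."""

    def __init__(self, weights_path):
        # Placeholder: load the trained network from disk (not a real DiffAI API).
        self._model = load_pretrained_network(weights_path)
        self._dataset = robustml.dataset.MNIST()
        self._threat_model = robustml.threat_model.Linf(epsilon=0.1)

    @property
    def dataset(self):
        return self._dataset

    @property
    def threat_model(self):
        return self._threat_model

    def classify(self, x):
        # x is a single unperturbed input as a numpy array in the dataset's
        # native format; return the predicted label as an integer.
        logits = self._model(np.expand_dims(x, 0))
        return int(np.argmax(logits))
```

With a wrapper like this in place, the defense can be evaluated against any robustml-compatible attack via the package's standard evaluation loop.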

mmirman commented 6 years ago

Sure, we'll do that

anishathalye commented 6 years ago

Awesome! Ping us when you're done, and we'll add the defense to the list.

anishathalye commented 6 years ago

Bump -- do you have any questions or issues with implementing the interface?

mmirman commented 6 years ago

On pause until we update our repo with some larger changes

anishathalye commented 5 years ago

Bump - any update on this?