twosixlabs / armory-library

Python library for Adversarial ML Evaluation
https://twosixlabs.github.io/armory-library/
MIT License

armory logo



Overview

Armory is a comprehensive platform for evaluating the robustness of machine learning models against adversarial attacks. It is a pure Python library built on top of existing libraries such as PyTorch, Hugging Face, and IBM's Adversarial Robustness Toolbox (ART). The primary focus of Armory is to help machine learning engineers understand how models behave under various adversarial conditions and how defenses may mitigate these attacks.

History

Armory was developed as part of the Guaranteeing AI Robustness against Deception (GARD) program under the Defense Advanced Research Projects Agency (DARPA). The GARD program's mission was to establish theoretical foundations for machine learning system vulnerabilities, to characterize properties that will enhance system robustness, and to advance the creation of effective defenses.

What is Adversarial AI?

Adversarial AI refers to the manipulation of AI models through carefully crafted inputs designed to exploit vulnerabilities in machine learning algorithms. These inputs are often imperceptible to humans but can cause AI systems to make incorrect decisions, such as misclassifying images or generating incorrect text. For instance, an adversarial attack might slightly alter an image of a stop sign, leading a self-driving car to misinterpret it as a yield sign, with potentially catastrophic consequences.

There are various types of adversarial attacks, including:

- Evasion attacks, which perturb inputs at inference time to cause misclassification (as in the stop sign example above)
- Poisoning attacks, which corrupt training data to degrade a model or implant backdoors
- Model extraction attacks, which reconstruct a model's behavior through repeated queries
- Inference attacks, which recover sensitive information about a model's training data

The GARD program was established to tackle these threats by developing defensive techniques that make AI systems more robust and resilient to adversarial manipulation. The program brought together industry experts, including Two Six Technologies, IBM, and MITRE, along with researchers from academic institutions to explore the limits of adversarial attacks and develop cutting-edge defenses.

Broader Impact

While the GARD program focused on government and military use cases, the potential for adversarial attacks extends to numerous domains, including healthcare, autonomous vehicles, finance, and cybersecurity. Armory is an open-source tool available to the wider AI community, helping researchers and engineers evaluate the robustness of their models across industries. The goal of Armory is to ensure that AI systems used in applications from medical diagnosis to autonomous drones can remain secure and effective even under adversarial conditions.

How It Works

Armory provides an end-to-end platform for evaluating the robustness of AI models against adversarial attacks. It composes the key components below into user-defined pipelines that let machine learning engineers run rigorous model evaluations, implement novel attacks and defenses, and visualize results; a schematic sketch of these stages follows the component list.

pipeline diagram

Data Ingestion and Model Loading

Adversarial Attack Integration

Defensive Techniques

Pipeline Orchestration and Evaluation

Visualization and Exporting Results
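
To make these stages concrete, here is a minimal, schematic sketch of an evaluation loop written against ART primitives (one of the libraries Armory builds on). This is not Armory-library's own pipeline API; the evaluate and accuracy helpers below are hypothetical placeholders that mirror the stages listed above.

import numpy as np

def accuracy(logits, y):
    # Fraction of predictions that match the ground-truth labels.
    return float((logits.argmax(axis=1) == np.asarray(y)).mean())

def evaluate(classifier, attack, batches):
    # classifier: an ART estimator, e.g. art.estimators.classification.PyTorchClassifier
    # attack: an ART evasion attack, e.g. art.attacks.evasion.ProjectedGradientDescent
    # batches: an iterable of (x, y) NumPy batches (data ingestion)
    results = []
    for x, y in batches:
        x_adv = attack.generate(x=x)                      # adversarial attack integration
        clean = accuracy(classifier.predict(x), y)        # benign evaluation
        robust = accuracy(classifier.predict(x_adv), y)   # adversarial evaluation
        results.append({"clean_accuracy": clean, "robust_accuracy": robust})
    return results  # handed off to visualization / export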

Installation & Configuration

pip install armory-library

This is all that is needed to get a working Armory installation. Note, however, that Armory-library is just that, a library: it contains no sample code. Examples are provided in the armory-examples repository, which is released alongside Armory-library.
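
A quick way to confirm the installation is to query the installed distribution's version. This uses only the Python standard library, so it works regardless of the package's import name:

python -c "from importlib.metadata import version; print(version('armory-library'))"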

Examples

The armory-examples repository includes Jupyter notebooks demonstrating a variety of Armory evaluations.

To install the examples, run:

pip install armory-examples

The example source code, together with the Armory-library documentation and API documentation, is a good place to learn how to construct your own evaluations with Armory.

Quick Look

We provide a sample notebook that uses Armory to evaluate a food101 classifier under a Projected Gradient Descent (PGD) attack. The notebook can be run for free on Google Colab to preview how Armory works.

Open In Colab
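
For a feel for what the notebook does before opening it, here is a minimal sketch of a PGD attack built directly with ART, one of Armory's underlying libraries. The untrained stand-in model and random batch below are placeholders for illustration only; the notebook evaluates a real food101 classifier.

import numpy as np
import torch.nn as nn
from art.attacks.evasion import ProjectedGradientDescent
from art.estimators.classification import PyTorchClassifier

# Placeholder model: an untrained linear classifier over 3x224x224 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 101))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(3, 224, 224),
    nb_classes=101,
    clip_values=(0.0, 1.0),
)

# PGD with an L-infinity budget of 8/255, a common benchmark setting.
attack = ProjectedGradientDescent(
    estimator=classifier, eps=8 / 255, eps_step=2 / 255, max_iter=10
)

x = np.random.rand(4, 3, 224, 224).astype(np.float32)  # placeholder image batch
x_adv = attack.generate(x=x)

clean = classifier.predict(x).argmax(axis=1)
adv = classifier.predict(x_adv).argmax(axis=1)
print("predictions changed by the attack:", int((clean != adv).sum()), "of", len(x))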

Documentation

The Armory-library documentation is published on GitHub Pages and can also be viewed directly in the docs directory of this repository. The Armory-library development team can be reached at armory@twosixtech.com.

The historic GARD-Armory repository

Armory-library is the successor to GARD-Armory, the research platform developed under the DARPA GARD program. As that program has concluded, the GARD-Armory repository was archived in 2024 and will see no further development.

Acknowledgment

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0114 and US Army (JATIC) Contract No. W519TC2392035. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA or JATIC.