
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)

α,β-CROWN (alpha-beta-CROWN): A Fast and Scalable Neural Network Verifier with Efficient Bound Propagation

α,β-CROWN (alpha-beta-CROWN) is a neural network verifier based on an efficient linear bound propagation framework and branch and bound. It runs efficiently on GPUs and scales to relatively large convolutional networks (e.g., millions of parameters). It also supports a wide range of neural network architectures (e.g., CNNs, ResNets, and various activation functions), thanks to our versatile auto_LiRPA library. α,β-CROWN can provide provable robustness guarantees against adversarial attacks and can also verify other general properties of neural networks, such as Lyapunov stability in control.
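To illustrate what "bound propagation" means, here is a minimal, self-contained sketch of interval bound propagation (IBP) in plain Python. This is not α,β-CROWN's algorithm — the verifier uses much tighter linear relaxations (CROWN) plus branch and bound — but the interval variant conveys the core idea: sound output bounds are computed layer by layer from a box around the input.

```python
# Interval bound propagation (IBP): a simple, sound (but loose) way to
# bound a network's outputs over a box of inputs. Illustrative only;
# alpha-beta-CROWN uses far tighter linear relaxations.

def linear_bounds(W, b, lb, ub):
    """Elementwise bounds of W @ x + b for x in the box [lb, ub]."""
    out_lb, out_ub = [], []
    for row, bias in zip(W, b):
        # A positive weight attains its extreme at the matching input bound;
        # a negative weight at the opposite bound.
        lo = bias + sum(w * (lb[j] if w >= 0 else ub[j]) for j, w in enumerate(row))
        hi = bias + sum(w * (ub[j] if w >= 0 else lb[j]) for j, w in enumerate(row))
        out_lb.append(lo)
        out_ub.append(hi)
    return out_lb, out_ub

def relu_bounds(lb, ub):
    """ReLU is monotone, so it can be applied to each bound directly."""
    return [max(l, 0.0) for l in lb], [max(u, 0.0) for u in ub]

# Toy network (weights are made up for illustration):
# Linear -> ReLU -> Linear, input x in [-0.1, 0.1]^2.
W1, b1 = [[1.0, -1.0], [2.0, 1.0]], [0.0, 0.5]
W2, b2 = [[1.0, 1.0]], [0.0]
lb, ub = [-0.1, -0.1], [0.1, 0.1]
lb, ub = linear_bounds(W1, b1, lb, ub)
lb, ub = relu_bounds(lb, ub)
lb, ub = linear_bounds(W2, b2, lb, ub)
print(lb, ub)  # sound lower/upper bounds on the network output
```

If the lower bound of a robustness margin stays positive over the whole input box, the property is verified; branch and bound splits the box (or neuron states) when the bounds are too loose to decide.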

α,β-CROWN is the winning verifier of VNN-COMP 2021, 2022, 2023, and 2024 (International Verification of Neural Networks Competition), achieving the highest total score and outperforming many other neural network verifiers on a wide range of benchmarks over four years. Details of the competition results can be found in the VNN-COMP 2021 slides and report, the VNN-COMP 2022 report, and the VNN-COMP 2023 slides and report.

The α,β-CROWN team was created and is led by Prof. Huan Zhang at UIUC, with contributions from multiple institutions; see the list of contributors below. α,β-CROWN combines our efforts on neural network verification across a series of papers building up the bound propagation framework since 2018. See Publications below.

Supported Features

Our verifier consists of the following core algorithms:

The bound propagation engine in α,β-CROWN is implemented as a separate library, auto_LiRPA (Xu et al. 2020), for computing symbolic bounds for general computational graphs. We support these neural network architectures:

We support the following verification specifications:

We provide many example configurations in complete_verifier/exp_configs directory to start with:

See the Guide on Algorithm Selection to find the most suitable example to get started.

Installation and Setup

α,β-CROWN is tested on Python 3.11 and PyTorch 2.2.1 (lower versions, including Python 3.7 and PyTorch 1.11, may also work). It can be installed easily into a conda environment. If you don't have conda, you can install miniconda.

Clone our verifier including the auto_LiRPA submodule:

git clone --recursive https://github.com/Verified-Intelligence/alpha-beta-CROWN.git

Setup the conda environment:

# Remove the old environment, if necessary.
conda deactivate; conda env remove --name alpha-beta-crown
# Install all dependencies into the alpha-beta-crown environment.
conda env create -f complete_verifier/environment.yaml --name alpha-beta-crown
# Activate the environment.
conda activate alpha-beta-crown

If you use the CROWN, α-CROWN, and/or β-CROWN algorithms (which cover most use cases), no Gurobi license is needed. If you want to use MIP-based verification algorithms (feasible only for small models), you need to install a Gurobi license with the grbgetkey command. If you don't have access to a license, the installation procedure above includes a free, restricted license by default, which is sufficient for many relatively small networks. If you use the GCP-CROWN verifier, an installation of the IBM CPLEX solver is required; instructions can be found in the VNN-COMP benchmark instructions or the GCP-CROWN instructions.

If you prefer to install packages manually rather than using a prepared conda environment, you can refer to this installation script.

If you want to run α,β-CROWN verifier on the VNN-COMP benchmarks (e.g., to make a comparison to a new verifier), you can follow this guide.

Instructions

We provide a unified front-end for the verifier, abcrown.py. All parameters for the verifier are defined in a yaml config file. For example, to run robustness verification on a CIFAR-10 ResNet network, you just run:

conda activate alpha-beta-crown  # activate the conda environment
cd complete_verifier
python abcrown.py --config exp_configs/tutorial_examples/cifar_resnet_2b.yaml

You can find explanations of the most useful parameters in this example config file. For detailed usage and tutorial examples, please see the Usage Documentation. We also provide a wide range of examples in the complete_verifier/exp_configs folder.
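For orientation, a config file is organized into sections that group related parameters. The sketch below is illustrative only: the section layout follows the shipped examples, but the specific field names, paths, and values here are assumptions and should be checked against the configs in complete_verifier/exp_configs before use.

```yaml
# Illustrative sketch only -- consult exp_configs/ for authoritative
# field names and defaults.
model:
  name: resnet2b                    # assumed model identifier
  path: models/cifar10_resnet/resnet2b.pth  # assumed checkpoint path
data:
  dataset: CIFAR
specification:
  norm: .inf                        # L-infinity robustness
  epsilon: 0.00784313725490196      # perturbation radius (2/255)
bab:
  timeout: 120                      # per-instance timeout in seconds
```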

Publications

If you use our verifier in your work, please cite our papers:

α,β-CROWN combines our existing efforts on neural network verification:

We provide bibtex entries below:

@article{zhang2018efficient,
  title={Efficient Neural Network Robustness Certification with General Activation Functions},
  author={Zhang, Huan and Weng, Tsui-Wei and Chen, Pin-Yu and Hsieh, Cho-Jui and Daniel, Luca},
  journal={Advances in Neural Information Processing Systems},
  volume={31},
  pages={4939--4948},
  year={2018},
  url={https://arxiv.org/pdf/1811.00866.pdf}
}

@article{xu2020automatic,
  title={Automatic perturbation analysis for scalable certified robustness and beyond},
  author={Xu, Kaidi and Shi, Zhouxing and Zhang, Huan and Wang, Yihan and Chang, Kai-Wei and Huang, Minlie and Kailkhura, Bhavya and Lin, Xue and Hsieh, Cho-Jui},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}

@article{salman2019convex,
  title={A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks},
  author={Salman, Hadi and Yang, Greg and Zhang, Huan and Hsieh, Cho-Jui and Zhang, Pengchuan},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  pages={9835--9846},
  year={2019}
}

@inproceedings{xu2021fast,
  title={{Fast and Complete}: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers},
  author={Kaidi Xu and Huan Zhang and Shiqi Wang and Yihan Wang and Suman Jana and Xue Lin and Cho-Jui Hsieh},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=nVZtXBI6LNn}
}

@article{wang2021beta,
  title={{Beta-CROWN}: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification},
  author={Wang, Shiqi and Zhang, Huan and Xu, Kaidi and Lin, Xue and Jana, Suman and Hsieh, Cho-Jui and Kolter, J Zico},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

@inproceedings{zhang22babattack,
  title={A Branch and Bound Framework for Stronger Adversarial Attacks of {R}e{LU} Networks},
  author={Zhang, Huan and Wang, Shiqi and Xu, Kaidi and Wang, Yihan and Jana, Suman and Hsieh, Cho-Jui and Kolter, Zico},
  booktitle={Proceedings of the 39th International Conference on Machine Learning},
  volume={162},
  pages={26591--26604},
  year={2022}
}

@article{zhang2022general,
  title={General Cutting Planes for Bound-Propagation-Based Neural Network Verification},
  author={Zhang, Huan and Wang, Shiqi and Xu, Kaidi and Li, Linyi and Li, Bo and Jana, Suman and Hsieh, Cho-Jui and Kolter, J Zico},
  journal={Advances in Neural Information Processing Systems},
  year={2022}
}

@article{shi2024genbab,
  title={Neural Network Verification with Branch-and-Bound for General Nonlinearities},
  author={Shi, Zhouxing and Jin, Qirui and Kolter, Zico and Jana, Suman and Hsieh, Cho-Jui and Zhang, Huan},
  journal={arXiv preprint arXiv:2405.21063},
  year={2024}
}

@inproceedings{kotha2023provably,
  title={Provably Bounding Neural Network Preimages},
  author={Kotha, Suhas and Brix, Christopher and Kolter, J. Zico and Dvijotham, Krishnamurthy and Zhang, Huan},
  booktitle={Advances in Neural Information Processing Systems},
  editor={A. Oh and T. Neumann and A. Globerson and K. Saenko and M. Hardt and S. Levine},
  publisher={Curran Associates, Inc.},
  volume={36},
  pages={80270--80290},
  year={2023},
  url={https://proceedings.neurips.cc/paper_files/paper/2023/file/fe061ec0ae03c5cf5b5323a2b9121bfd-Paper-Conference.pdf}
}

Developers and Copyright

The α,β-CROWN verifier is currently being developed by a multi-institutional team:

Team lead:

Current developers:

Past developers:

The team acknowledges the financial and advisory support from Prof. Zico Kolter (zkolter@cs.cmu.edu), Prof. Cho-Jui Hsieh (chohsieh@cs.ucla.edu), Prof. Suman Jana (suman@cs.columbia.edu), Prof. Bo Li (lbo@illinois.edu), and Prof. Xue Lin (xue.lin@northeastern.edu).

Our library is released under the BSD 3-Clause license. A copy of the license is included here.