
Falcon: Honest-Majority Maliciously Secure Framework for Private Deep Learning

A maliciously secure framework for efficient 3-party protocols tailored to neural networks. This work builds on SecureNN, ABY3, and other prior work, and was published at the Privacy Enhancing Technologies Symposium (PETS) 2021; the paper is available here. If you are looking to run neural network training, strongly consider using the GPU-based codebase Piranha.
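As background for the protocols implemented here: Falcon's computation runs among three parties over 2-out-of-3 replicated secret sharing (RSS) on a 32-bit ring (see Comparison with SecureNN below). The following sketch is purely illustrative and is not Falcon's actual API; all names are invented for exposition, assuming only the ring Z_{2^32} and the standard RSS share layout.

#include <array>
#include <cstdint>
#include <random>

// Illustrative sketch only -- not Falcon's actual types or API.
// Falcon works over the ring Z_{2^32}; unsigned 32-bit arithmetic,
// which wraps modulo 2^32, models ring operations exactly.
using Ring = uint32_t;

// In 2-out-of-3 RSS, a secret x is split into three additive shares with
// x = x1 + x2 + x3 (mod 2^32), and party i holds the pair (x_i, x_{i+1}).
// Any two parties can reconstruct x; a single party's view reveals nothing.
struct RSSShare {
    Ring first;   // x_i
    Ring second;  // x_{(i+1) mod 3}
};

// Dealer-style sharing, for exposition only (real protocols avoid a dealer).
std::array<RSSShare, 3> share(Ring x, std::mt19937 &rng) {
    Ring x1 = rng(), x2 = rng();
    Ring x3 = x - x1 - x2;  // unsigned wraparound = subtraction mod 2^32
    return {{{x1, x2}, {x2, x3}, {x3, x1}}};
}

// Linear operations are local: each party adds its share components.
RSSShare add(const RSSShare &a, const RSSShare &b) {
    return RSSShare{static_cast<Ring>(a.first + b.first),
                    static_cast<Ring>(a.second + b.second)};
}

// Reconstruction from two consecutive parties, who jointly hold all three summands.
Ring reconstruct(const RSSShare &partyI, const RSSShare &partyNext) {
    return partyI.first + partyI.second + partyNext.second;
}

The replication is what makes the honest-majority setting efficient: linear steps need no communication, and the redundancy across parties provides the consistency checks used against a malicious party.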


Warning


This codebase is released solely as a reference for other developers, as a proof-of-concept, and for benchmarking purposes. In particular, it has not had any security review, contains a number of implementation TODOs, and has a number of known bugs (especially in the malicious implementation); use it at your own risk. You can contribute to this project by creating pull requests and submitting fixes and implementations. The code has not been run end-to-end for training, and we expect this to require some parameter tuning, so training and inference will not work out of the box (however, inference from pre-trained networks can be reproduced easily).

Requirements


Docker


To install and run Falcon using Docker, first build the container and then start a shell inside it:

docker build -t falcon .
docker run -it falcon '/bin/bash'

From the prompt, you can execute any of the commands specified in Running the code.

Source Code


Repository Structure

Building the code

To build Falcon, run the following commands:

git clone https://github.com/snwagh/falcon-public.git Falcon
cd Falcon
make all -j$(nproc)

Running the code

To run the code, simply choose one of the following options:

Additional Resources


Run combinations

Note that, given the size of the larger networks (AlexNet, VGG16) and the need to explicitly define network parameters, these networks can only be run on the CIFAR10 and Tiny ImageNet datasets. Conversely, the smaller networks (SecureML, Sarda, MiniONN, and LeNet) can only be run on the MNIST dataset. Running any other combination should result in assertion errors (a sketch of such a guard follows this paragraph). The following configuration was sufficient to reproduce the results for the larger networks: a 2.9 GHz Intel Xeon E5-2666 v3 processor with 36 cores and 60 GB RAM (in particular, a similar processor with 16 GB RAM was insufficient).
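As a rough illustration of the kind of guard behind those assertion errors (the function below is hypothetical and only mirrors the pairings described above; it is not Falcon's actual code):

#include <cassert>
#include <set>
#include <string>

// Hypothetical check mirroring the valid network/dataset pairings described
// above; Falcon's real assertions may differ in form and location.
void checkRunCombination(const std::string &network, const std::string &dataset) {
    static const std::set<std::string> largeNets = {"AlexNet", "VGG16"};
    static const std::set<std::string> smallNets = {"SecureML", "Sarda", "MiniONN", "LeNet"};

    if (largeNets.count(network)) {
        // The larger networks define parameters only for the larger datasets.
        assert((dataset == "CIFAR10" || dataset == "Tiny ImageNet") &&
               "AlexNet/VGG16 run only on CIFAR10 or Tiny ImageNet");
    } else if (smallNets.count(network)) {
        assert(dataset == "MNIST" && "the smaller networks run only on MNIST");
    }
}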

Comparison with SecureNN

While the bulk of the Falcon code builds on SecureNN, it differs in two important characteristics: (1) it builds on replicated secret sharing (RSS), and (2) the design is modular. The latter makes each layer self-contained in the forward and backward pass (in contrast to SecureNN, where layers are merged for the networks to be tested); a sketch of this layer structure follows below. The functions are reasonably well tested (including ReLU); however, they have been exercised mostly with the 32-bit datatype, so the 64-bit version may have minor bugs.
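As a sketch of what that modularity means in code (the interface names are illustrative, not Falcon's exact class hierarchy): each layer owns both passes, so a network is assembled by composing layers rather than by hand-merging protocol steps per network.

#include <cstdint>
#include <memory>
#include <vector>

// Minimal placeholder for a secret-shared tensor (cf. the RSS sketch earlier).
struct RSSShare { uint32_t first, second; };
using RSSVector = std::vector<RSSShare>;

// Illustrative interface: every layer is self-contained, implementing its own
// forward and backward pass over secret-shared data.
class Layer {
public:
    virtual ~Layer() = default;
    virtual RSSVector forward(const RSSVector &input) = 0;
    virtual RSSVector backward(const RSSVector &gradOutput) = 0;
};

// A network is then just an ordered list of layers; adding an architecture
// means listing its layers, not rewriting merged protocol code.
class Network {
    std::vector<std::unique_ptr<Layer>> layers;
public:
    RSSVector forward(RSSVector x) {
        for (auto &l : layers) x = l->forward(x);   // thread activations forward
        return x;
    }
    RSSVector backward(RSSVector grad) {
        for (auto it = layers.rbegin(); it != layers.rend(); ++it)
            grad = (*it)->backward(grad);           // propagate gradients in reverse
        return grad;
    }
};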

Errors and Issues

If there are compile, installation, or runtime errors, please create GitHub issues. Some of the common errors and their resolutions are listed below:

TODOs and known bugs

Citation

You can cite the paper using the following BibTeX entry (the paper links back to this repo):

@article{wagh2021falcon,
  title={{FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning}},
  author={Wagh, Sameer and Tople, Shruti and Benhamouda, Fabrice and Kushilevitz, Eyal and Mittal, Prateek and Rabin, Tal},
  journal={Proceedings on Privacy Enhancing Technologies},
  year={2021}
}

For questions, please create GitHub issues; you can also reach out to swagh@alumni.princeton.edu, though replies there may take longer.