
A backdoor defense for federated learning via isolated subspace training (NeurIPS 2023)

Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training

This is the repo for the code and datasets used in the paper Lockdown: Backdoor Defense for Federated Learning with Isolated Subspace Training, accepted at NeurIPS 2023. The camera-ready paper is available here.

Algorithm overview

The overall procedure can be summarized into four main steps: i) isolated subspace training; ii) subspace searching; iii) aggregation; iv) model cleaning with consensus fusion. The following figure illustrates the overall process.
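To make step iv) concrete, here is a minimal sketch of consensus fusion: the server keeps a global parameter only if enough clients retained it in their sparse masks. The function name, the vote threshold `theta`, and the flat-array representation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def consensus_fusion(global_weights, client_masks, theta):
    # Count, per parameter, how many clients kept it in their binary mask.
    votes = np.sum(client_masks, axis=0)
    # Keep only parameters retained by at least `theta` clients;
    # everything else is zeroed out as a suspected backdoor-only weight.
    consensus = (votes >= theta).astype(global_weights.dtype)
    return global_weights * consensus

# Toy example: 3 clients, 4 parameters, agreement threshold of 2 clients.
w = np.array([0.5, -1.2, 0.3, 0.8])
masks = np.array([[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 1, 0, 1]])
cleaned = consensus_fusion(w, masks, theta=2)
```

In this toy run, only the first and last parameters reach two votes, so the middle two are zeroed.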

Get started

Package requirements

Data preparation

The FashionMNIST and CIFAR-10/100 datasets will be downloaded automatically via torchvision.

Command to run

The following command runs Lockdown with its default settings:

python federated.py  --method lockdown 

You can also find ready-made scripts in the src/script directory.

File organization

Logging and checkpoint

Logging files are stored in src/logs. Benign accuracy, ASR (attack success ratio), and backdoor accuracy are evaluated in every round. For Lockdown, the three metrics correspond to the following logging formats:

| Clean Val_Loss/Val_Acc: (Benign loss) / (Benign accuracy) |
| Clean Attack Success Ratio: (ASR loss) / (ASR) |
| Clean Poison Loss/Clean Poison accuracy: (Backdoor Loss) / (Backdoor Acc) |
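If you want to plot these metrics across rounds, a small parser for the pipe-delimited format above is enough. This helper is illustrative; the exact log format should be double-checked against the files in src/logs.

```python
import re

def parse_metric_line(line):
    # Extract the (loss, metric) pair that follows the first colon
    # in a "name: a / b" log line.
    m = re.search(r":\s*([-\d.]+)\s*/\s*([-\d.]+)", line)
    return (float(m.group(1)), float(m.group(2))) if m else None

# Example with made-up numbers in the documented format:
sample = "| Clean Val_Loss/Val_Acc: 0.3512 / 0.9120 |"
metrics = parse_metric_line(sample)
```

Calling `parse_metric_line` on each line of a log file yields per-round (loss, accuracy) pairs for plotting.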

Model checkpoints will be saved every 25 rounds in the directory src/checkpoint.
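Since checkpoints are written every 25 rounds, locating the most recent one for a given round is simple arithmetic. The file-name pattern `round_<N>.pt` below is a hypothetical example; check src/checkpoint for the actual names.

```python
import os

def latest_checkpoint(ckpt_dir, current_round, interval=25):
    # Round number of the last checkpoint written at or before current_round.
    last_saved = (current_round // interval) * interval
    if last_saved == 0:
        return None  # no checkpoint has been written yet
    # NOTE: "round_<N>.pt" is an assumed naming scheme for illustration.
    return os.path.join(ckpt_dir, f"round_{last_saved}.pt")
```

For example, at round 130 the most recent checkpoint is the one saved at round 125.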

Q&A

If you have any questions, you can either open an issue or contact me at thuang374@gatech.edu; I will reply as soon as I see the issue or email.

Acknowledgment

The codebase is adapted from one of our baselines, RLR.

License

Lockdown is completely free and released under the MIT License.