This repository contains the code for our paper *EVE: Evil vs Evil: Using Adversarial Examples to Against Backdoor Attack in Federated Learning*.
Install Python >= 3.6.0 and PyTorch >= 1.4.0.
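For example, with pip (the exact command depends on your platform and CUDA setup; any environment that satisfies the requirements above will work):

```
pip install "torch>=1.4.0"
```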
The main files are:

- `main.py`, `update.py`, `test.py`, `Fed_aggregation.py`: our FL framework
- `main.py`: entry point of the command line tool
- `options.py`: a parser of the FL configs, which also assigns possible attackers given the configs
- `backdoor_data.py`: implements the FL backdoor attacks

An example run:

```
python main.py --dataset mnist --model lenet5 --num_users 20 --epoch 50 --iid False --attack_start 4 --attack_methods CBA --attacker_list 2 5 7 9 --aggregation_methods EVE --detection_size 50 --gpu 0
```
Check out `parser.py` for the use of the arguments; most of them are self-explanatory.
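For orientation, below is a hypothetical sketch of what the config parser might look like, limited to the flags that appear in this README; the actual names, types, and defaults in the repository may differ.

```python
# Hypothetical sketch of the config parser; not the repository's actual code.
# It only defines the flags used in this README, with illustrative defaults.
import argparse

def args_parser():
    parser = argparse.ArgumentParser(description="EVE federated learning experiments")
    parser.add_argument('--dataset', type=str, default='mnist')              # dataset name
    parser.add_argument('--model', type=str, default='lenet5')               # model architecture
    parser.add_argument('--num_users', type=int, default=20)                 # number of FL clients
    parser.add_argument('--epoch', type=int, default=50)                     # number of global rounds
    parser.add_argument('--iid', type=str, default='False')                  # IID or non-IID data split
    parser.add_argument('--attack_start', type=int, default=4)               # round at which the attack begins
    parser.add_argument('--attack_methods', type=str, default='CBA')         # e.g. CBA or DBA
    parser.add_argument('--attacker_list', type=int, nargs='+', default=[])  # client ids of the attackers
    parser.add_argument('--aggregation_methods', type=str, default='EVE')    # e.g. EVE or RLR
    parser.add_argument('--detection_size', type=int, default=50)            # number of samples used for detection
    parser.add_argument('--robustLR_threshold', type=int, default=4)         # threshold used by RLR
    parser.add_argument('--gpu', type=int, default=0)                        # GPU index
    parser.add_argument('--save_results', action='store_true')               # save results under ./results
    return parser.parse_args()

if __name__ == '__main__':
    print(args_parser())
```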
If you set `--attack_methods` to be `DBA`, then you will also need to set the number of attackers in `--attacker_list` to 4 or a multiple of 4.
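An illustrative DBA run with four attackers (the attacker ids chosen here are arbitrary):

```
python main.py --dataset mnist --model lenet5 --num_users 20 --epoch 50 --iid False --attack_start 4 --attack_methods DBA --attacker_list 1 3 5 7 --aggregation_methods EVE --detection_size 50 --gpu 0
```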
If you set `--aggregation_methods` to be `RLR`, then you will also need to set the threshold in `--robustLR_threshold`.
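An illustrative RLR run (the threshold value shown here is arbitrary):

```
python main.py --dataset mnist --model lenet5 --num_users 20 --epoch 50 --iid False --attack_start 4 --attack_methods CBA --attacker_list 2 5 7 9 --aggregation_methods RLR --robustLR_threshold 4 --gpu 0
```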
If `--save_results` is specified, the training results will be saved under the `./results` directory.