jhcknzzm / Federated-Learning-Backdoor

ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341

A question about the results for CIFAR-10 #6

Open Yu-shuyan opened 2 years ago

Yu-shuyan commented 2 years ago

We reproduced the CIFAR-10 experiments following run_backdoor_cv_task.sh, but we cannot obtain the same results as in the paper. The backdoor accuracy is shown in the following figure,

[Screenshot (2022-08-15): backdoor accuracy curves over training; blue = Neurotoxin, green = baseline]

where the blue lines are Neurotoxin and the green lines are the baseline. The backdoor accuracy of the baseline (green dashed line) does not drop during training, and using Neurotoxin even decreases the backdoor accuracy.

We obtained this result using the parameters in the code and in the .sh file. We do not know whether the parameters the authors used differ from those settings.

jhcknzzm commented 2 years ago

This is strange, because we did not add DP in the code for the CV task; in the paper, our DP results mainly concern the Reddit dataset, where the network is an LSTM. What is the exact command you executed in your experiment?

Yu-shuyan commented 2 years ago

For the results with DP, we execute the two command lines in run_backdoor_cv_task.sh respectively:

python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.95 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0

python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 1.0 --poison_lr 0.003 --aggregate_all_layer 1 --edge_case 0

For the results without DP, we remove --diff_privacy True. Are these the same commands used to reproduce the results in your paper?

jhcknzzm commented 2 years ago

Sorry, you cannot remove --diff_privacy True: in our code for the CV tasks, setting --diff_privacy True actually makes the server perform a defense (gradient norm clipping), so removing it means the server applies no defensive strategy at all. Also, could you try executing the following commands? They work well on my machine. (A minimal sketch of the clipping defense follows the commands.)

nohup python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 1.0 --poison_lr 0.003 --aggregate_all_layer 1 --edge_case 0 &

nohup python main_training.py --run_slurm 0 --GPU_id 1 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.95 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0 &

nohup python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.99 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0 &

nohup python main_training.py --run_slurm 0 --GPU_id 1 --start_epoch 1801 --is_poison True --diff_privacy True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.97 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0 &
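For reference, the defense that --diff_privacy True enables amounts to the server clipping the L2 norm of each client update before aggregation, with --s_norm as the bound. A minimal sketch under that assumption (hypothetical helper names, not the repo's exact code):

import torch

def clip_update(update, s_norm=0.2):
    # Scale a client's model update so its L2 norm is at most s_norm.
    norm = update.norm(p=2).item()
    return update * min(1.0, s_norm / (norm + 1e-12))

def aggregate(updates, s_norm=0.2):
    # Server-side FedAvg over norm-clipped client updates.
    clipped = [clip_update(u, s_norm) for u in updates]
    return torch.stack(clipped).mean(dim=0)

Clipping caps how far any single (possibly poisoned) update can move the global model, which is why removing the flag changes the attack curves so much.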

Yu-shuyan commented 2 years ago

Thanks, we will execute the above commands. Also, I wonder what gradmask_ratio you used for Neurotoxin in Figure 17 (left)?

jhcknzzm commented 2 years ago

I think it is gradmask_ratio == 0.95. That setting works fine on my machine, but you could also try a different gradmask_ratio, since there is some randomness in the experiments.
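For context, my reading of the paper is that gradmask_ratio is the fraction of coordinates the attacker keeps, chosen where the benign gradient magnitude is smallest, so 1.0 is the unmasked baseline and 0.95 masks out the top 5% of coordinates. A simplified sketch of that masking (an illustration of the idea, not the exact code in this repo):

import torch

def neurotoxin_mask(benign_grad, gradmask_ratio):
    # Keep the gradmask_ratio fraction of coordinates where the benign
    # gradient magnitude is smallest; zero out the rest.
    flat = benign_grad.abs().flatten()
    k = int(flat.numel() * gradmask_ratio)
    mask = torch.zeros_like(flat)
    _, idx = torch.topk(flat, k, largest=False)  # smallest-magnitude coordinates
    mask[idx] = 1.0
    return mask.view_as(benign_grad)

# The attacker projects its poisoned gradient onto the masked coordinates:
# poisoned_grad = poisoned_grad * neurotoxin_mask(benign_grad, 0.95)

Restricting the attack to coordinates that benign clients rarely update is what makes the backdoor durable: later benign training barely overwrites those coordinates.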