inspire-group / ModelPoisoning

Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470

Accuracy check #7

Open ning-wang1 opened 4 years ago

ning-wang1 commented 4 years ago

I did not find the code for accuracy check as mentioned in this paper. Is the 'accuracy check' included in the source code? In other words, will the central server check the accuracy of model updates from different participants before aggregating them?

Further, would you please give the parameters (or a running command) to reproduce the results in the paper for attacking the 'krum' and 'coomed' aggregation rules?

TudouJack commented 3 years ago

Hello, did you manage to reproduce the attack on the krum and coomed aggregation rules? I set the parameters as written in the paper: λ=2 with alternating minimization when attacking krum, and λ=1 with targeted model poisoning when attacking coomed. Both experiments failed. How should the parameters be set to reproduce the results in the paper for attacking the 'krum' and 'coomed' aggregation rules?

arjunbhagoji commented 3 years ago

> I did not find the code for accuracy check as mentioned in this paper. Is the 'accuracy check' included in the source code? In other words, will the central server check the accuracy of model updates from different participants before aggregating them?

For simplicity, the code as implemented does not discard updates from agents whose accuracy falls below a given threshold, since we found that, even with the attack, the accuracy of all agents remains high enough that removal would not be triggered in a realistic setting. That said, this check is easy to implement, and I would urge you to submit a PR if possible.
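For anyone who wants to add it, a minimal sketch of such a check might look like the following (the names `global_weights`, `agent_updates`, and `evaluate_accuracy` are placeholders for illustration, not identifiers from this repo):

```python
import numpy as np

def filter_updates_by_accuracy(global_weights, agent_updates,
                               evaluate_accuracy, threshold=0.1):
    """Keep only the agent updates whose resulting local model reaches at
    least `threshold` accuracy on the server's validation data.

    global_weights    : list of np.ndarray, current global model parameters
    agent_updates     : list of per-agent updates, each a list of np.ndarray
    evaluate_accuracy : callable mapping a weight list to accuracy in [0, 1]
    """
    accepted = []
    for delta in agent_updates:
        # apply the agent's update to the current global model
        candidate = [w + d for w, d in zip(global_weights, delta)]
        if evaluate_accuracy(candidate) >= threshold:
            accepted.append(delta)
    return accepted

# The server would then aggregate only the accepted updates, e.g. by averaging:
# accepted = filter_updates_by_accuracy(global_weights, agent_updates, eval_fn)
# new_weights = [w + np.mean([d[i] for d in accepted], axis=0)
#                for i, w in enumerate(global_weights)]
```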

> Further, would you please give the parameters (or a running command) to reproduce the results in the paper for attacking the 'krum' and 'coomed' aggregation rules?

The results can be easily reproduced by running the following commands. For coordinate-wise median:

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0 --gar=coomed --gpu_ids 0

and for krum (set LAMBDA=2 to reproduce the results from the paper exactly):

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0 --mal --mal_obj=single --mal_strat=converge_train_alternate_wt_o_dist_self --rho=1e-4 --gar=krum --ls=10 --mal_E=10 --gpu_ids 0 --mal_boost=LAMBDA

Note that --gar has been changed from avg to coomed and krum respectively.
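For reference, here is a minimal NumPy sketch of the two aggregation rules being attacked, written from their standard definitions rather than taken from this repo; the function names and the flattened-update representation are assumptions for illustration:

```python
import numpy as np

def coomed(updates):
    """Coordinate-wise median: take the median of each parameter
    independently across all agent updates (shape: n_agents x n_params)."""
    return np.median(np.asarray(updates), axis=0)

def krum(updates, n_malicious):
    """Krum: return the single update whose summed squared distance to its
    n - n_malicious - 2 nearest neighbours is smallest."""
    updates = np.asarray(updates)
    n = len(updates)
    k = n - n_malicious - 2  # number of closest neighbours considered
    # pairwise squared Euclidean distances between all updates
    dists = np.sum((updates[:, None, :] - updates[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        # drop the zero distance to itself, then sum the k smallest distances
        neighbour_dists = np.sort(np.delete(dists[i], i))
        scores.append(neighbour_dists[:k].sum())
    return updates[int(np.argmin(scores))]

# Example: 10 agents (as in the k=10 commands above), 100 parameters each
# updates = np.random.randn(10, 100)
# coomed(updates)                 # shape (100,)
# krum(updates, n_malicious=1)    # one selected update, shape (100,)
```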