kiddyboots216 / CommEfficient

PyTorch for benchmarking communication-efficient distributed SGD optimization algorithms

The #12

Closed: chengyif closed this issue 1 year ago

chengyif commented 1 year ago

I have a question about the attack pattern. During training, when a malicious client is sampled to participate, does it train the local model on the malicious dataset (i.e. the auxiliary dataset in the paper) instead of the benign dataset that was previously assigned to this client?

kiddyboots216 commented 1 year ago

Yes that is correct.
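In other words, when a sampled client is malicious, its local update is computed on the auxiliary dataset rather than its assigned benign shard. A minimal sketch of that behavior, using illustrative names like `client.is_malicious` and `client.aux_loader` rather than this repo's actual identifiers:

```python
import torch

def local_update(model, client, n_epochs, lr, device="cpu"):
    # A malicious client trains on the auxiliary (backdoor) dataset
    # instead of the benign shard it was originally assigned.
    # `is_malicious`, `aux_loader`, `benign_loader` are hypothetical names.
    loader = client.aux_loader if client.is_malicious else client.benign_loader
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(n_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```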

chengyif commented 1 year ago

Yes that is correct.

When I run this code, I find that even at the end of training, the attack accuracy still fluctuates (for instance, in a round where no malicious client is sampled to participate, the attack accuracy may be 0.08, while in a round where one is, it may be 0.18). What criterion should I use to choose which attack accuracy to report?
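For reference, I measure attack accuracy as the fraction of trigger-stamped test inputs that the global model classifies as the attacker's target label, roughly along these lines (`apply_trigger` and `target_label` are illustrative placeholders, not this repo's identifiers):

```python
import torch

@torch.no_grad()
def attack_accuracy(model, test_loader, apply_trigger, target_label, device="cpu"):
    # Fraction of trigger-stamped inputs classified as the target label.
    model.eval()
    hits, total = 0, 0
    for x, _ in test_loader:
        x = apply_trigger(x).to(device)  # stamp the backdoor trigger
        pred = model(x).argmax(dim=1)
        hits += (pred == target_label).sum().item()
        total += x.size(0)
    return hits / total
```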

kiddyboots216 commented 1 year ago

This is interesting. Could you share your attack parameters?

chengyif commented 1 year ago

My setting differs slightly from the experiments in the paper. One of my parameter settings is: 100 clients in total, one of them malicious; in each round, every client is sampled with probability 0.1; 5 local training epochs per round; a boosting parameter of 5; a local learning rate of 0.1; and the top 10% of parameters retained in the top-k operation. I run the experiment for 2000 communication rounds. The lowest attack accuracy is 0.06 (the malicious client is not sampled in this round). However, in the round where the malicious client is last sampled, the attack accuracy recovers to 0.302.
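To make the boosting and top-k settings concrete, here is a rough sketch of the two update transforms under these hyperparameters; this is an illustration, not the actual code path in this repo:

```python
import torch

def boost(delta, boost_factor=5.0):
    # Model-replacement-style scaling of the malicious client's update.
    return delta * boost_factor

def topk_sparsify(delta, frac=0.10):
    # Keep only the largest-magnitude `frac` of entries; zero the rest.
    flat = delta.flatten()
    k = max(1, int(frac * flat.numel()))
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(delta)
```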

kiddyboots216 commented 1 year ago

These accuracy numbers seem reasonable to me. I think you would probably report the accuracy at the round where the malicious client is last sampled. I understand that this is a bit ambiguous; we actually have a paper that addresses the question of how to measure attack accuracy once the attacker stops participating: https://github.com/jhcknzzm/Federated-Learning-Backdoor/
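A small illustration of that reporting rule (all names hypothetical): log the attack accuracy every round, and report the value from the last round in which the malicious client participated.

```python
# `per_round` is a list of (attack_accuracy, malicious_client_sampled)
# pairs, one per communication round.
def reported_attack_accuracy(per_round):
    reported = None
    for acc, malicious_sampled in per_round:
        if malicious_sampled:
            reported = acc  # keep the accuracy from the latest malicious round
    return reported

# Example: returns 0.302, the accuracy at the round where the attacker
# last participated, rather than the 0.06 trough from a benign round.
print(reported_attack_accuracy([(0.06, False), (0.302, True), (0.08, False)]))
```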

chengyif commented 1 year ago

I will read that paper carefully to try to understand and resolve this question. Many thanks for your detailed reply!