jhcknzzm / Federated-Learning-Backdoor

ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341

The question about the testing dataset #5

Open Alan-Qin opened 2 years ago

Alan-Qin commented 2 years ago

https://github.com/jhcknzzm/Federated-Learning-Backdoor/blob/a7ef36afc5c5dfe7dbb233e8d7f35c141cefeffb/FL_Backdoor_CV/image_helper.py#L271

I have a question about the test dataset used to measure backdoor attack accuracy on FEMNIST.

From this line of code, it appears that the authors use the training dataset directly as the test dataset.

jhcknzzm commented 2 years ago

Hello, thank you for reading our code carefully. Of course, as a machine learning task, generalization ability is still very important. We have updated our code to clear up the confusion; please refer to the if self.params['dataset'] == 'emnist': branch.

Designating malicious datasets to serve as both training and test sets is not unfounded. In Section 3.2 of the paper "SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification" (under "Semantic backdoor via model poisoning"), the authors mention that they used 3000 images of the number 7 from the validation set as backdoors; this is also described in Appendix B.5, Stealth of Attack, of that paper. In their code (https://github.com/kiddyboots216/CommEfficient/blob/ca4d44098b4251d598fbd99edfe5c6f5e60fa6ad/CommEfficient/data_utils/fed_emnist.py) the malicious training and test sets are specified in the same way. This is mainly because backdoor tasks usually focus on the neural network's memorization of some poisoned data. The code of the earliest work on federated learning backdoor attacks (https://github.com/ebagdasa/backdoor_federated_learning/blob/master/utils/params_runner.yaml) takes a similar approach, specifying that the malicious training samples are identical to the malicious testing samples. It is not unreasonable to do so, since the question of interest is usually the model's ability to remember or forget backdoor samples.

However, we still encourage researchers to study both settings, with the designated malicious test and training sets being either the same or different, as they reflect the model's pure memorization under fine-tuning and its generalization under fine-tuning, respectively.
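To make the two settings concrete, here is a minimal sketch assuming PyTorch and a tensor of edge-case images; the function build_poison_loaders and all variable names are hypothetical, not taken from this repository:

```python
# Hypothetical sketch (not the repository's actual helper): building the
# attacker's poisoned train/test splits under the two settings discussed above.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def build_poison_loaders(poison_images, target_label, same_train_test=True, batch_size=64):
    """poison_images: tensor of shape (N, 1, 28, 28) holding the edge-case samples."""
    labels = torch.full((len(poison_images),), target_label, dtype=torch.long)
    dataset = TensorDataset(poison_images, labels)

    if same_train_test:
        # "train = test": the backdoor is evaluated on the same poisoned samples
        # the attacker trained on (pure memorization of the backdoor).
        train_set, test_set = dataset, dataset
    else:
        # "train != test": hold out part of the poisoned data for evaluation
        # (generalization of the backdoor to unseen edge-case samples).
        n_test = len(dataset) // 5
        train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])

    return (DataLoader(train_set, batch_size=batch_size, shuffle=True),
            DataLoader(test_set, batch_size=batch_size))
```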

Alan-Qin commented 2 years ago

Thanks for your response.

  1. I have a question: for the ICML camera-ready version, is the attack performance tested on the original poisoned training dataset? I think the authors should clarify this point.

  2. Actually, for backdoor learning, the main goal is to force the ML model to memorize the trigger pattern added by the attacker rather than the poisoned training examples [1]. Therefore, the correct way to measure the attack success rate (ASR) is to evaluate on a poisoned testing dataset rather than on the original poisoned training set (see the sketch at the end of this comment). Previous work [2,3] also measures ASR on a test dataset.

  3. I agree that memorization of the poisoned training samples is also important. However, it cannot fully reflect the ASR. We should follow the standard definition of the attack rather than previous work, since previous work may be wrong.

  4. I am looking forward to your reply and clarification.

[1] Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
[2] Attack of the Tails: Yes, You Really Can Backdoor Federated Learning
[3] How to Backdoor Federated Learning
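As a concrete illustration of the ASR measurement discussed in point 2, here is a minimal sketch assuming PyTorch; the function name, model, poisoned test loader, and target_label are illustrative, not code from either repository:

```python
import torch

@torch.no_grad()
def attack_success_rate(model, poisoned_test_loader, target_label, device="cpu"):
    """Fraction of held-out poisoned test samples classified as the attacker's target label."""
    model.eval()
    hits, total = 0, 0
    for images, _ in poisoned_test_loader:   # original labels are ignored
        preds = model(images.to(device)).argmax(dim=1)
        hits += (preds == target_label).sum().item()
        total += images.size(0)
    return hits / max(total, 1)
```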

zihao-ai commented 2 years ago

To be honest, if the FEMNIST results in the paper come from this line of code, I think this is cheating.

jhcknzzm commented 2 years ago

For the EMNIST task you mentioned, in the base case the attacker's goal is to get the final model to misclassify certain datapoints, and the attacker's training dataset is the same as its test dataset. The trigger in this case is not a pixel pattern. For the base case mentioned in the paper, we process the image dataset in the same way as SparseFed.
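To make the distinction concrete, here is a small sketch assuming PyTorch, contrasting a pixel-pattern trigger with the base-case/semantic setting described above, where the "trigger" is a set of natural edge-case datapoints relabeled to the target class; all names are illustrative:

```python
import torch

def add_pixel_trigger(images, value=1.0):
    # Classic pixel-pattern backdoor: stamp a small patch in the corner of any image.
    # images: tensor of shape (N, C, H, W).
    triggered = images.clone()
    triggered[:, :, -3:, -3:] = value
    return triggered

def make_edge_case_backdoor(edge_case_images, target_label):
    # Base-case / semantic backdoor as discussed above: no pixel pattern is added.
    # The "trigger" is the choice of natural edge-case datapoints themselves,
    # all relabeled to the attacker's target class.
    labels = torch.full((len(edge_case_images),), target_label, dtype=torch.long)
    return edge_case_images, labels
```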

Alan-Qin commented 2 years ago

For the EMNIST task, as shown in [2], the authors separate the Ardis dataset into training and testing sets. Therefore, their training dataset is not the same as their test dataset. I think you should clarify this point in the paper.

jhcknzzm commented 2 years ago

Yeah, thanks. For the EMNIST task in the edge case, the training data is not the same as the test data; we will highlight this when we update our paper. We have also updated the code to eliminate bugs caused by merging different versions of the code.
This version of the code still looks redundant. We are trying to make it more concise. If you are patient enough, you are also welcome to optimize parts of it.

jhcknzzm commented 2 years ago

[Figure: Train_Test_same_different_backdoor_acc]

For the EMNIST dataset we also ran the following additional experiment: the poisoned data is pictures of the number 7 from the Ardis dataset (the edge-case dataset), the target label set by the attacker is 1, AttackNum=200 (the attacker participates in 200 rounds of federated learning), and the server uses gradient norm clipping along with a differential privacy defense strategy. The results are shown in the figure above.

In the figure, "train=test" means that the test set of the attacker is the same as the training set, and "train≠test" means that the test set and the training set are different. From the results, it can be found that: 1. Setting the test set to be the same or different from the training set can lead to the same conclusion as in our paper: Neurotoxin is better than the baseline. 2. There is no significant difference in backdoor accuracy between the two settings.