ebagdasa / backdoor_federated_learning

Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
MIT License

In your image experiments, do you use the same poison data for both train and test? #3

Closed zzs1324 closed 5 years ago

zzs1324 commented 5 years ago

Hi, I am trying to implement the attack from the paper, but I found that if I split the poison data between train and test, I cannot get a good poisoned model (it either shows high backdoor accuracy but low main-task accuracy, or low backdoor accuracy but high main-task accuracy). With so little data, I don't understand how the attacker's model can learn the semantic feature of the poison data.

ebagdasa commented 5 years ago

I think that was the trickiest experiment to do: I used three images, with perturbations and different rotations, as test data that is unseen during training. For training I used the remaining images, plus some noised perturbations (this method is taken from the backdoor paper), so from 20 images I get around 1000 images for the backdoor dataset. Overall I think the feature is relatively simple to learn: e.g. green cars or stripes in the background. To balance accuracy on both the normal and backdoor tasks, I slowed the learning rate during training from 0.1 to 0.001 so the model fine-tunes to perform well on two different tasks.
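For reference, a minimal sketch of what this kind of augmentation could look like in PyTorch. The helper name `make_variants`, the image shapes, the rotation range, and the noise scale are illustrative assumptions, not the repository's actual code:

```python
import random
import torch
import torchvision.transforms.functional as TF

def make_variants(image, n_variants=60, max_angle=15.0, noise_std=0.05):
    """Generate rotated, noised copies of a single backdoor image (C, H, W in [0, 1])."""
    variants = []
    for _ in range(n_variants):
        angle = random.uniform(-max_angle, max_angle)
        rotated = TF.rotate(image, angle)
        noised = rotated + noise_std * torch.randn_like(rotated)
        variants.append(noised.clamp(0.0, 1.0))
    return variants

# Stand-in for ~20 backdoor images (e.g. green cars); real data would be loaded from disk.
backdoor_images = [torch.rand(3, 32, 32) for _ in range(20)]

# Hold out 3 images as the unseen backdoor test set; augment the rest for training.
test_images = backdoor_images[:3]
train_images = backdoor_images[3:]

# ~17 images x ~60 variants each -> roughly 1000 backdoor training samples.
backdoor_train = [v for img in train_images for v in make_variants(img)]
backdoor_test = [v for img in test_images for v in make_variants(img, n_variants=10)]
```

And a hedged sketch of the slowed-down attacker learning rate (the 0.1 -> 0.001 values come from the comment above; the placeholder model and the step schedule are assumptions):

```python
model = torch.nn.Linear(10, 10)  # stand-in for the attacker's local model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Decay the learning rate by 10x every 2 local epochs, reaching 0.001 after 4 epochs,
# so the model fine-tunes to perform well on both the main and backdoor tasks.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.1)
```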