ebagdasa / backdoor_federated_learning

Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
MIT License

Questions about reproducing your experiment and getting the exact experimental results #13

Closed xndong closed 2 years ago

xndong commented 2 years ago

I pulled your source code from GitHub and tried to run the experiments, but I did not get good results.

```
Test Target_ResNet_18 poisoned: False, epoch: 5: Average loss: 2.3041, Accuracy: 1001/10000 (10.0100%) Done in 6.461248397827148 sec.
```

After every epoch, it always reports "Accuracy: 1001/10000 (10.0100%)".

I hope you don't mind my asking, but could you please give me step-by-step instructions and the full parameter settings in params.yaml for the experiments in your paper?

TudouJack commented 2 years ago

Maybe you should run with params_runner.yaml instead. The fixed accuracy (10%) means your model is predicting a single class for every input (an all-one output matrix), which is exactly chance level on the 10-class test set.
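A quick way to confirm this (a minimal diagnostic sketch, not repo code; `model` and `test_loader` here are assumed to be your trained network and a CIFAR-10 test loader):

```python
import torch

def distinct_predictions(model, test_loader):
    # Collect the predicted class for every test image. On the balanced
    # CIFAR-10 test set (1000 images per class), a model that always
    # predicts one class scores roughly 1000/10000 = 10%, matching the
    # flat accuracy in the log above.
    model.eval()
    preds = []
    with torch.no_grad():
        for images, _ in test_loader:
            preds.append(model(images).argmax(dim=1))
    return torch.cat(preds).unique().tolist()

# If this returns a single class, the weights likely never trained,
# e.g. because the run used the wrong params file.
```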

TudouJack commented 2 years ago

Hello. Can you please answer one question for me? Do you know why this code is removing the backdoor data?

```python
range_no_id = list(range(50000))
for image in self.params['poison_images'] + self.params['poison_images_test']:
    if image in range_no_id:
        range_no_id.remove(image)
```

The code above is at line 103 of backdoor_federated_learning/image_helper.py. Thank you for your help.

ebagdasa commented 2 years ago

Hey @xndong, sorry again for your problems. I don't support this code anymore and understand the frustration; I think it might be much easier to reproduce the results with a newer FL framework.

ebagdasa commented 2 years ago

@TudouJack it's so that you don't train on the correctly-labeled versions of the images that are part of the poisoned dataset.
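For illustration, here is a hedged sketch of how such a poisoned batch could be assembled (the indices, labels, and variable names below are made up, not verbatim repo code):

```python
import random

# Hypothetical attacker setup: a few chosen training indices serve as the
# backdoor images, all relabeled to one target class during poisoning.
poison_images = [330, 568, 3934]   # illustrative indices, not from the repo
target_label = 2                   # illustrative attacker target class
batch_size = 64

# What the quoted loop computes: clean indices with the poison images removed.
range_no_id = [i for i in range(50000) if i not in poison_images]

# The poisoned batch mixes clean samples with the relabeled poison images.
# If the poison indices stayed in range_no_id, the same images could also
# be sampled with their TRUE labels, teaching the model to undo the backdoor.
clean_idx = random.sample(range_no_id, batch_size - len(poison_images))
batch_indices = clean_idx + poison_images
```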