ebagdasa / backdoor_federated_learning

Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
MIT License

The poisoned dataset. #14

Closed TudouJack closed 2 years ago

TudouJack commented 2 years ago

Hello. Could you please answer one question for me: why does this code remove the backdoor data?

```python
range_no_id = list(range(50000))
for image in self.params['poison_images'] + self.params['poison_images_test']:
    if image in range_no_id:
        range_no_id.remove(image)
```

The code above is at line 103 of backdoor_federated_learning/image_helper.py.
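For context, the loop above builds the pool of clean CIFAR-10 training indices (0–49999) and drops every index reserved as a backdoor image, so those images never appear with their true labels in the benign sampler. A minimal sketch of the same exclusion, using hypothetical index lists in place of `params['poison_images']` and `params['poison_images_test']`:

```python
# Hypothetical stand-ins for the repository's poison-image index lists.
poison_images = [330, 568, 3934]
poison_images_test = [389, 561]

# Build the clean index pool, excluding every reserved backdoor index.
# A set makes the membership test O(1) instead of O(n) per lookup.
excluded = set(poison_images + poison_images_test)
range_no_id = [i for i in range(50000) if i not in excluded]

print(len(range_no_id))  # 49995: 50000 minus the 5 excluded indices
```

The list-based `remove` in the original has the same effect; the set-difference form just avoids repeated linear scans over a 50,000-element list.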

And in train.py, the adversary is supposed to change the poisoned data's labels. If poisoned_data (= helper.poisoned_data_for_train) has already removed those specific images (green cars, cars with racing stripes, and cars with vertically striped walls in the background), how does the adversary change the labels of those specific images?

[screenshot of the training loop in train.py]

Thank you for your help.

ebagdasa commented 2 years ago

It's the last line in your screenshot that changes the label.
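In other words, the poison images are removed only from the *clean* index pool; during the attacker's training step they are injected into each batch and relabeled with the attacker's target class on the spot. A minimal sketch of that label-swap step (all names here are hypothetical, not the repository's):

```python
def poison_batch(images, labels, poison_pool, target_label, poison_count):
    """Overwrite the first `poison_count` items of a batch with backdoor
    images and relabel them as the attacker's target class.
    Hypothetical helper illustrating the label swap, not the repo's API."""
    images = list(images)
    labels = list(labels)
    for i in range(poison_count):
        images[i] = poison_pool[i % len(poison_pool)]
        labels[i] = target_label  # this is the label-changing step
    return images, labels

# Toy batch of 8 examples; inject 2 backdoor images labeled as class 2.
imgs, lbls = poison_batch(
    images=["img%d" % i for i in range(8)],
    labels=[0, 1, 2, 3, 4, 5, 6, 7],
    poison_pool=["green_car_1", "green_car_2"],
    target_label=2,
    poison_count=3,
)
print(lbls)  # [2, 2, 2, 3, 4, 5, 6, 7]
```

So the exclusion in image_helper.py and the relabeling in train.py work together: the former keeps the backdoor images out of honest batches, the latter inserts them with the attacker-chosen label.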