Hello. Can you please answer one question for me? Do you know why this code is removing the backdoor data?
```python
range_no_id = list(range(50000))
for image in self.params['poison_images'] + self.params['poison_images_test']:
    if image in range_no_id:
        range_no_id.remove(image)
```
The code above is at line 103 of backdoor_federated_learning/image_helper.py.
Also, in train.py, the adversary is supposed to change the poisoned data's labels. If poisoned_data (= helper.poisoned_data_for_train) has already had those specific images removed, such as the green cars, cars with racing stripes, and cars with vertically striped walls in the background, how does the adversary change the labels of those specific images?
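To make my question clearer, here is a minimal sketch of my current understanding. It is not the actual train.py code: the toy dataset and the `poison_batch` helper are my own simplification, and only the names `poison_images` and `poison_label_swap` are meant to mirror the params used in this repo.

```python
# Hypothetical sketch, not the repo's code.
poison_images = [3, 7]      # example indices standing in for the green-car images
poison_label_swap = 2       # attacker-chosen target label

# Toy stand-in for the CIFAR-10 training set: (image, label) pairs.
toy_dataset = [(f"img_{i}", i % 10) for i in range(10)]

# Step 1 (as in the quoted image_helper.py snippet): the benign sampling pool
# excludes the poison indices, so honest batches never contain them.
range_no_id = [i for i in range(len(toy_dataset)) if i not in set(poison_images)]

# Step 2 (my assumption about what the adversary does): the poison images are
# copied back into the front of each poisoned batch and their labels are
# overwritten with poison_label_swap at that point, i.e. the relabeling would
# happen when the batch is built, not inside poisoned_data_for_train itself.
def poison_batch(images, labels):
    for pos, idx in enumerate(poison_images):
        img, _ = toy_dataset[idx]        # original image, original label discarded
        images[pos] = img                # overwrite a slot in the clean batch
        labels[pos] = poison_label_swap  # force the attacker-chosen label
    return images, labels

clean_images = [toy_dataset[i][0] for i in range(4)]
clean_labels = [toy_dataset[i][1] for i in range(4)]
print(poison_batch(clean_images, clean_labels))
```

Is this roughly what happens, or does the label change take place somewhere else?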
Thank you for your help.