jhcknzzm / Federated-Learning-Backdoor

ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341

Question about the trigger logic #13

Open imomoe233 opened 1 year ago

imomoe233 commented 1 year ago

I do not understand why the training data needs to be randomly cropped after padding by 4 pixels with "transforms.RandomCrop(32, padding=4)". At first I thought it might be part of the trigger setup, but I found that you build the trigger as in "Attack of the Tails: Yes, You Really Can Backdoor Federated Learning". If it is as I thought, the poisoned data has its label changed to index [9] and is flipped as the trigger. I can understand that for the poisoned data, but why do the benign data need padding and flipping? Finally, would a better hidden trigger improve the Lifespan?

jhcknzzm commented 1 year ago

Actually, transforms.RandomCrop() is only used for data augmentation. In the edge case ("Attack of the Tails: Yes, You Really Can Backdoor Federated Learning"), the trigger is actually out-of-distribution data. For example, for the MNIST dataset, the trigger is a digit from another dataset, the ARDIS dataset. I envision that a trigger that differs significantly from the distribution of benign data might increase Lifespan, but the conclusion may be the opposite of what I thought: if the trigger is very different from benign data, the backdoor may be easy to remove when fine-tuning the model on benign data. I don't understand what you mean by hidden triggers.
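To make the augmentation point concrete, here is a minimal numpy sketch of what `transforms.RandomCrop(32, padding=4)` does to every training image (benign or poisoned): zero-pad each spatial border by 4 pixels, then take a random 32x32 window. The function name `random_crop_with_padding` is hypothetical, used only to illustrate the behavior.

```python
import numpy as np

def random_crop_with_padding(img, size=32, padding=4, rng=None):
    # Mimics torchvision's transforms.RandomCrop(size, padding):
    # zero-pad each spatial border by `padding`, then cut a random
    # size x size window from the padded image. img is H x W x C.
    rng = np.random.default_rng() if rng is None else rng
    padded = np.pad(img, ((padding, padding), (padding, padding), (0, 0)))
    top = rng.integers(0, padded.shape[0] - size + 1)
    left = rng.integers(0, padded.shape[1] - size + 1)
    return padded[top:top + size, left:left + size]

img = np.ones((32, 32, 3))            # dummy CIFAR-sized image
out = random_crop_with_padding(img)
print(out.shape)                      # (32, 32, 3)
```

Since the output is always the original size, the crop changes nothing about the labels or the trigger; it only shifts image content slightly to make training more robust, which is why it is applied to benign data too.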

imomoe233 commented 1 year ago

By hidden triggers I mean triggers that hide a perturbation inside the dataset, rather than using mislabeled or out-of-distribution data as the trigger.
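If I understand the idea, a perturbation-style trigger might look like the sketch below: instead of replacing an image with out-of-distribution data, a small bounded perturbation `delta` is added so the poisoned image stays visually close to the benign one. The function and the epsilon value are my own illustration, not code from this repo.

```python
import numpy as np

def apply_hidden_trigger(img, delta, eps=8 / 255):
    # Hypothetical "hidden" trigger: clip the perturbation to an
    # L-infinity ball of radius eps, add it to the image, then clip
    # the result back to the valid pixel range [0, 1].
    delta = np.clip(delta, -eps, eps)
    return np.clip(img + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))                 # dummy benign image in [0, 1]
delta = rng.normal(scale=0.05, size=img.shape)
poisoned = apply_hidden_trigger(img, delta)
```

The poisoned image differs from the benign one by at most eps per pixel, so it stays near the benign distribution, which is the property being contrasted with the edge-case (out-of-distribution) triggers discussed above.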

jhcknzzm commented 1 year ago

OK, got it.