ebagdasa / backdoor_federated_learning

Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459)
MIT License

A program problem #18

Open mumu029 opened 1 year ago

mumu029 commented 1 year ago

It seems to me that poison_dataset() doesn't actually poison the data: it just samples 64 images 200 times, only requiring that they not come from "poison_images" or "poison_images_test". But what does that have to do with data poisoning?
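
For reference, the sampling logic as I read it boils down to something like this (a rough sketch, not the repo's exact code; `dataset`, `batches`, and `batch_size` are assumed names). Note that no label is ever changed, which is why it looks like plain sampling rather than poisoning:

```python
import random

def poison_dataset(dataset, poison_images, poison_images_test,
                   batches=200, batch_size=64):
    # The only constraint: never pick the designated backdoor images,
    # so these sampled batches stay "clean".
    excluded = set(poison_images) | set(poison_images_test)
    indices = []
    for _ in range(batches):
        batch = []
        while len(batch) < batch_size:
            idx = random.randint(0, len(dataset) - 1)
            if idx not in excluded:
                batch.append(idx)
        indices.extend(batch)
    return indices
```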

RiverCrosser-White commented 1 year ago

I agree with you. I think some related code may have been commented out.

YH-star1 commented 8 months ago

I couldn't find where the data is actually poisoned either. My guess is that the author used the poisoning method of "using the image itself as the trigger", but I have little experience, so this is just a guess.
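
If that guess is right, the poisoning would happen not inside poison_dataset() but when the attacker builds its local training batches: a few images in each batch get replaced by the designated poison images and relabeled to the attacker's target class (the paper's semantic backdoor, e.g. cars with a specific attribute classified as birds). A minimal sketch of that idea; names like `get_poison_batch`, `poisoning_per_batch`, and `poison_label_swap` are my assumptions, not confirmed identifiers from this repo:

```python
import torch

def get_poison_batch(images, labels, poison_images, dataset,
                     poisoning_per_batch=5, poison_label_swap=2):
    # Copy the clean batch so the original tensors stay untouched.
    images, labels = images.clone(), labels.clone()
    for i in range(min(poisoning_per_batch, len(images))):
        # The "trigger" is a whole semantic image: overwrite a clean
        # sample with a poison image and flip its label to the target.
        images[i] = dataset[poison_images[i % len(poison_images)]][0]
        labels[i] = poison_label_swap
    return images, labels
```

That would also explain why poison_dataset() excludes the poison images from its sampling: the clean batches must never contain them, so the backdoor images only ever appear with the attacker's chosen label.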