ybdai7 / Chameleon-durable-backdoor

[ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (https://proceedings.mlr.press/v202/dai23a)"

Question about multiple adversaries selection #10

Closed lu0802mark closed 2 months ago

lu0802mark commented 3 months ago

Hi, I want to ask about the method used to select multiple adversaries. In training.py, one adversary is assigned the identifier 0 and the remaining attackers are assigned the identifier -1. However, in subsequent training rounds, the code `_, (current_data_model, train_data) = train_data_sets[model_id]` never yields a current_data_model equal to -1. How should the selection of multiple adversaries be understood?

ybdai7 commented 3 months ago

Hi,

The logic of training with multiple adversaries in this repo is that the adversary trains the poisoned model only once (as the client assigned identifier 0) and skips training for the other attacker slots (those assigned identifier -1). As you may notice at L106-L107 in training.py, that part simply skips the whole training process with a `continue` statement.
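As a hedged illustration of the behavior described above (a simplified sketch, not the exact code from training.py; `run_round` and `train_fn` are hypothetical names), the round loop unpacks each selected client and uses `continue` to skip the extra adversary slots marked with -1:

```python
# Simplified sketch of the selection logic described above (assumed names,
# not the repo's actual code). Only the adversary assigned identifier 0
# trains the poisoned model; slots with identifier -1 are skipped entirely.
ADV_PLACEHOLDER = -1  # extra attacker slots that contribute no update

def run_round(train_data_sets, selected_model_ids, train_fn):
    """Train each selected client, skipping placeholder adversary slots."""
    updates = []
    for model_id in selected_model_ids:
        _, (current_data_model, train_data) = train_data_sets[model_id]
        if current_data_model == ADV_PLACEHOLDER:
            # mirrors the `continue` at L106-L107: no training happens here
            continue
        updates.append(train_fn(current_data_model, train_data))
    return updates
```

So with three attacker slots in a round, only the one carrying identifier 0 ever calls the training routine; the other two fall through the `continue` and submit nothing.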

I understand that this may not be what you want when facing multiple adversaries. I suppose you may want something like: with 10 clients per aggregation round, adversaries control 3 of them, and the server aggregates 1/10 × (poisoned + poisoned + poisoned + benign + benign + ...). As we do not consider multiple attackers in this paper, we did not modify this part of the code. But I think you may find my recent repo (https://github.com/ybdai7/Backdoor-indicator-defense) useful, as it is better organized and does consider multiple adversaries.
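The aggregation sketched above (each of 10 selected clients weighted 1/10, with 3 submitting poisoned updates) can be illustrated with a minimal FedAvg average; this is an assumption about the desired behavior, not code from either repo:

```python
# Minimal FedAvg sketch (assumed setup, not repo code): 10 selected clients,
# 3 of which submit the same poisoned update, each weighted equally at 1/10.
def fedavg(updates):
    """Coordinate-wise mean of equally weighted client updates."""
    n = len(updates)
    dim = len(updates[0])
    return [sum(u[i] for u in updates) / n for i in range(dim)]

poisoned = [1.0, 1.0]   # toy 2-dimensional poisoned update
benign = [0.0, 0.0]     # toy benign update
updates = [poisoned] * 3 + [benign] * 7
aggregated = fedavg(updates)  # each coordinate = 3/10
```

With this aggregation every controlled client contributes its own poisoned update, unlike the single-trainer shortcut in this repo where the extra adversary slots are skipped.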

Cheers, Yanbo

lu0802mark commented 3 months ago

Your response is very helpful. Thank you for your explanation!
