jhcknzzm / Federated-Learning-Backdoor

ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341

Why is the acc on the CV task 0? #12

Closed: imomoe233 closed this issue 1 year ago

imomoe233 commented 1 year ago

when i run " python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --defense True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.99 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0 "

Why is the backdoor test accuracy on the attacker's train data and test data always 0%? The log is below:

Test poisoned ( traget label 9 ): True, epoch: 1866: Average loss: 5.7599, Accuracy: 0.0/512.0 (0%)
Test poisoned ( traget label 9 ): True, epoch: 1866: Average loss: 5.7599, Accuracy: 0.0/512.0 (0%)
epoch 1866 test poison loss (after fedavg) 5.759857177734375 test poison acc (after fedavg) 0.0
train poison loss (after fedavg) 5.759856879711151 train poison acc (after fedavg) 0.0

jhcknzzm commented 1 year ago

That looks odd; the loss value is huge. Can you try increasing the learning rate (for example, set it to 0.2), increasing s_norm (for example, s_norm=2.0), or training longer (for example, attack_num=400)?
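For reference, applying those suggestions to the original command might look like the line below. This assumes the learning rate being discussed corresponds to the --poison_lr flag from the command above; only one of the three changes may be needed.

"python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --defense True --s_norm 2.0 --attack_num 400 --gradmask_ratio 0.99 --poison_lr 0.2 --aggregate_all_layer 1 --edge_case 0"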

imomoe233 commented 1 year ago

It works, thanks. But when I run with the parameters below, the benign accuracy drops from 86% to 60% (the drop starts around epoch 1970, and the backdoor accuracy reaches 100% at epoch 2000), then recovers to about 80% by epoch 2200. Did you encounter this issue? Is that OK? Should I change my attack_num so the attack stops early once the attack accuracy reaches 100%?

"python main_training.py --run_slurm 0 --GPU_id 0 --start_epoch 1801 --is_poison True --defense True --s_norm 0.2 --attack_num 250 --gradmask_ratio 0.95 --poison_lr 0.02 --aggregate_all_layer 1 --edge_case 0"

jhcknzzm commented 1 year ago

I have not encountered this phenomenon. I can't know the exact reason, but I suspect it may be related to several factors, such as s_norm (too small or too large an s_norm may reduce benign accuracy) or limited model capacity (a small model memorizes backdoor samples while forgetting some benign samples). You could try changing s_norm or using a larger neural network model. The early stop you mentioned is worth trying, though. However, my expectation is that if the attack is stopped as soon as the backdoor accuracy reaches 100%, the backdoor may be easily forgotten (removed).
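The early-stop idea could be sketched roughly as follows. This is a minimal illustration, not code from this repository, and the names (should_stop_attack, backdoor_acc_history, patience) are hypothetical; waiting a few rounds above the threshold, rather than stopping at the first 100% reading, is one way to reduce the risk of the backdoor being forgotten.

# Minimal sketch of early-stopping the attack once backdoor accuracy saturates.
# Hypothetical helper; not part of this repository's code.
def should_stop_attack(backdoor_acc_history, threshold=0.999, patience=3):
    """Stop poisoning once backdoor accuracy has stayed above `threshold`
    for `patience` consecutive evaluation rounds."""
    if len(backdoor_acc_history) < patience:
        return False
    return all(acc >= threshold for acc in backdoor_acc_history[-patience:])

# Possible usage inside the training loop:
# backdoor_acc_history.append(test_poison_acc)
# if should_stop_attack(backdoor_acc_history):
#     attacking = False  # remaining rounds run without poisoned clients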

imomoe233 commented 1 year ago

Thank you for your patience in answering