TinfoilHat0 / Defending-Against-Backdoors-with-Robust-Learning-Rate

Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate".
https://ojs.aaai.org/index.php/AAAI/article/view/17118
MIT License

Unstable Results Over 500 Round Experiments #4

AndrewMerrow opened this issue 3 months ago

AndrewMerrow commented 3 months ago

I have tried running tests on the fedemnist dataset with the default parameters from the runner.sh file. In my 500-round tests, the model's accuracy starts to degrade after approximately round 100.

[Figure: UTD 500-round accuracy graph]

I have run experiments in two separate environments and have tried tweaking some parameters, but the results I am getting all show the same issue. Here are the library versions I am using:

NVIDIA PyTorch Container version 22.12
PyTorch version 1.14.0+410ce96
Python3 version 3.8.10

TinfoilHat0 commented 2 months ago

Hey Andrew - the plot makes me think this is a learning rate problem. Are you decaying the learning rate?

AndrewMerrow commented 2 months ago

I have not decayed the learning rate. Here are the values I have used:
server_lr: 1
client_lr: 0.1
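
For reference, a minimal PyTorch sketch of the kind of per-round client learning-rate decay suggested above. The stand-in linear model, the 0.99 decay factor, and the exponential schedule are illustrative assumptions, not the repository's actual code; only client_lr = 0.1 and the 500 rounds come from this thread.

```python
import torch
import torch.nn as nn

client_lr = 0.1          # from runner.sh, as noted above
num_rounds = 500
decay = 0.99             # assumed per-round multiplicative decay factor

model = nn.Linear(784, 10)  # stand-in for the actual federated model
optimizer = torch.optim.SGD(model.parameters(), lr=client_lr)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=decay)

for rnd in range(num_rounds):
    # ... local client training and server-side aggregation would run here ...
    optimizer.step()     # placeholder update so the scheduler follows an optimizer step
    scheduler.step()     # shrink the client learning rate after each round

print(optimizer.param_groups[0]["lr"])  # roughly 0.1 * 0.99**500 ≈ 6.6e-4
```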

AndrewMerrow commented 2 months ago

We used all the parameters straight from the provided runner.sh file.