Hi, thank you for pointing this out. There is a mistake in the README for LG-FedAvg: you must use `main_lg` to finetune a model trained with `main_fed`.
I've been reorganizing the code to make it more straightforward to run everything, including loading a federated model for LG-FedAvg. Please check back at the end of next week, when I plan to push the new changes along with the new run command.
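In the meantime, the intended sequence is roughly the following, mirroring the cifar10 flags quoted later in this thread (a sketch only; `--load_fed` is a placeholder flag name here and may change with the refactor, so check the updated README):

python main_fed.py --dataset cifar10 --model CNN --num_classes 10 --epochs 2000 --lr 0.1 --num_users 100 --frac 0.1 --local_ep 1 --local_bs 50

python main_lg.py --dataset cifar10 --model CNN --num_classes 10 --epochs 2000 --lr 0.1 --num_users 100 --frac 0.1 --local_ep 1 --local_bs 50 --num_layers_keep 2 --load_fed <path_to_saved_fed_model>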
Thank you for your reply. But I have a new question about the "Params Communicated" numbers in Table 1.
According to the MNIST model output:
# Params: 633226 (local), 99978 (global); Percentage 15.79 (99978/633226)
Then:
FedAvg is 633226 * 750 * 10 ≈ 4.75e9
LG-FedAvg is (99978 * 50 + 633226 * 400) * 10 ≈ 2.58e9
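A quick sanity check of this arithmetic (a minimal Python sketch; the factor of 10 is the number of sampled devices per round with `num_users 100` and `frac 0.1`, and the 750 and 50 + 400 round splits are the ones used above):

```python
# Parameter counts from the MNIST model output quoted above
local_params = 633226    # full model (local + global layers)
global_params = 99978    # globally shared layers only

devices_per_round = 10   # num_users=100 sampled with frac=0.1

# FedAvg communicates the full model every round for 750 rounds
fedavg_total = local_params * 750 * devices_per_round

# LG-FedAvg, as broken down above: 400 rounds communicate the full
# model and 50 rounds communicate only the global layers
lg_total = (global_params * 50 + local_params * 400) * devices_per_round

print(f"FedAvg:    {fedavg_total:.3e}")   # ~4.749e+09
print(f"LG-FedAvg: {lg_total:.3e}")       # ~2.583e+09

# Counting both upload and download doubles each total but leaves
# the FedAvg-to-LG-FedAvg ratio unchanged
print(f"x2: {2 * fedavg_total:.3e} vs {2 * lg_total:.3e}")
```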
The above results are similar to the values in Table 1. But when communicating, aren't the parameters both sent and received, i.e., transferred in two steps? Based on that, I think the totals should be multiplied by 2. Is there a problem with my understanding? I hope you can give me some suggestions. Thank you very much!
Thank you for the open-source project. I think this is a very important step in federated learning: improving model performance while reducing the number of communicated parameters.
But when I use the command from `README.md`:

python main_lg.py --dataset cifar10 --model CNN --num_classes 10 --epochs 2000 --lr 0.1 --num_users 100 --frac 0.1 --local_ep 1 --local_bs 50 --num_layers_keep 2
I can't seem to reach the accuracy reported in the paper. (I haven't completed all 2000 rounds yet.)
Here are some of the results. They do not seem to reach the CIFAR-10 accuracy of about 89.66 in Table 1, and the `New Test Acc` has a similar problem. Is this because the number of `Rounds` is not enough? Or is there a problem with my understanding? I hope someone can help. Thank you again for your outstanding contribution!