shaoxiongji / federated-learning

A PyTorch Implementation of Federated Learning
http://doi.org/10.5281/zenodo.4321561
MIT License

Run results for MLP non-IID on the MNIST dataset #37

Open Chrisable opened 2 years ago

Chrisable commented 2 years ago

Could anyone explain why the first run of mlp-noniid-mnist gives 75% test accuracy, while a second run gives 78% or even 83%+? What causes such a large variation between runs?

First run:

Round   0, Average loss 0.133
Round   1, Average loss 0.097
Round   2, Average loss 0.084
Round   3, Average loss 0.063
Round   4, Average loss 0.075
Round   5, Average loss 0.057
Round   6, Average loss 0.041
Round   7, Average loss 0.049
Round   8, Average loss 0.076
Round   9, Average loss 0.056
Training accuracy: 74.83
Testing accuracy: 75.21

Second run:

Round   0, Average loss 0.128
Round   1, Average loss 0.068
Round   2, Average loss 0.099
Round   3, Average loss 0.060
Round   4, Average loss 0.057
Round   5, Average loss 0.070
Round   6, Average loss 0.069
Round   7, Average loss 0.057
Round   8, Average loss 0.066
Round   9, Average loss 0.049
Training accuracy: 78.18
Testing accuracy: 78.39
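
Run-to-run variance like this usually comes from unseeded randomness: the initial model weights, the random subset of clients sampled each round, and the non-IID shard assignment can all differ between runs. A minimal sketch of pinning the relevant seeds before training (the helper name `set_seed` is hypothetical, not part of this repo):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 1) -> None:
    # Fix all RNGs used in a typical PyTorch FL pipeline so that
    # weight init, client sampling, and data sharding repeat exactly.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines


set_seed(1)
```

With the seeds fixed, repeated runs should produce identical round-by-round losses; any remaining spread would then point to nondeterministic GPU kernels rather than sampling.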
Pnme79 commented 1 year ago

(python main_fed.py --dataset mnist --iid --num_channels 1 --model mlp --epochs 10 --gpu -1) Mine is even stranger. FedAVG-MLP Acc. of IID run results:

Round   0, Average loss 2.269
Round   1, Average loss 2.254
Round   2, Average loss 2.261
Round   3, Average loss 2.253
Round   4, Average loss 2.251
Round   5, Average loss 2.246
Round   6, Average loss 2.242
Round   7, Average loss 2.247
Round   8, Average loss 2.239
Round   9, Average loss 2.239
Training accuracy: 11.98
Testing accuracy: 10.56

What could be the reason?
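
One thing worth noticing in these numbers: a cross-entropy loss stuck near 2.25 together with ~10% accuracy is essentially a model that is guessing uniformly over the 10 MNIST classes, i.e. it has not learned at all (a learning-rate or optimizer-setting mismatch is a common cause). The quick arithmetic check:

```python
import math

# If the model assigns probability 1/10 to every class, the
# cross-entropy loss is -ln(1/10) = ln(10) per sample.
uniform_loss = math.log(10)
print(round(uniform_loss, 3))  # 2.303, close to the 2.24-2.27 reported above
```

Since the reported losses sit just below ln(10) for all 10 rounds, the aggregation is barely moving the global model away from random guessing.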