@iuserea It seems your non-IID version has lower accuracy. In that case, you need to tune your hyper-parameters. You can use grid search to find a better learning rate, batch size, local epoch count, and number of rounds. Normally, IID and non-IID settings need totally different hyper-parameters.
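A minimal grid-search sketch in Python over those four hyper-parameters, assuming a hypothetical `run_fedavg(...)` helper that launches one full FedAvg training run and returns the final test accuracy (FedML does not ship such a helper; this only illustrates the search loop):

```python
import itertools

def run_fedavg(lr, batch_size, local_epochs, rounds):
    # Hypothetical wrapper: launch one FedAvg run (e.g. via
    # run_fedavg_distributed_pytorch.sh) and return the final test accuracy.
    return 0.0  # placeholder; replace with a real training run

# Candidate values; widen or narrow these ranges based on early results.
learning_rates = [0.03, 0.1, 0.3, 0.8]
batch_sizes = [10, 32, 64]
local_epoch_options = [1, 5, 10]
comm_rounds = [100, 200]

best_acc, best_cfg = -1.0, None
for lr, bs, ep, rd in itertools.product(
        learning_rates, batch_sizes, local_epoch_options, comm_rounds):
    acc = run_fedavg(lr=lr, batch_size=bs, local_epochs=ep, rounds=rd)
    if acc > best_acc:
        best_acc, best_cfg = acc, (lr, bs, ep, rd)

print(f"best test acc {best_acc:.3f} with (lr, batch, epochs, rounds) = {best_cfg}")
```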
@iuserea have you resolved your issue?
@chaoyanghe I can't thank you enough! After trying grid search and random search, the accuracy rose to 40%.
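For reference, random search can be sketched the same way; sampling the learning rate log-uniformly is a common choice (the `run_fedavg` helper is hypothetical, as in the grid-search sketch above):

```python
import random

def run_fedavg(lr, batch_size, local_epochs, rounds):
    # Hypothetical wrapper, as in the grid-search sketch above.
    return 0.0  # placeholder; replace with a real training run

random.seed(0)
best_acc, best_cfg = -1.0, None
for _ in range(20):  # number of random trials
    lr = 10 ** random.uniform(-2, 0)   # log-uniform sample in [0.01, 1.0]
    bs = random.choice([10, 32, 64])
    ep = random.choice([1, 5, 10])
    acc = run_fedavg(lr=lr, batch_size=bs, local_epochs=ep, rounds=100)
    if acc > best_acc:
        best_acc, best_cfg = acc, (lr, bs, ep)

print("best test acc:", best_acc, "with (lr, batch, epochs) =", best_cfg)
```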
@iuserea Sounds good. Can you match the results reported in this paper: https://arxiv.org/pdf/2003.00295.pdf ?
@chaoyanghe It should be possible to reproduce the results from this paper. I've read it before, but the accuracy reported there is about 57%, which is higher than my 40%.
Code: I just inserted one line of code in the file FedML/fedml_experiments/distributed/fedavg/main_fedavg.py, as shown in fig1 below; everything else is based on the latest origin/master.
Cmd: sh run_fedavg_distributed_pytorch.sh 10 10 1 8 rnn hetero 100 10 10 0.8 shakespeare "./../../../data/shakespeare" 0
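For readers parsing that command, a sketch of how the positional arguments might map to named hyper-parameters, assuming the argument order used by run_fedavg_distributed_pytorch.sh at the time (verify against the script in your own checkout before relying on this mapping):

```python
import subprocess

# Assumed positional-argument order of run_fedavg_distributed_pytorch.sh;
# check the script in your checkout, since the order may differ.
cmd = [
    "sh", "run_fedavg_distributed_pytorch.sh",
    "10",           # total number of clients
    "10",           # clients (workers) sampled per round
    "1",            # number of servers
    "8",            # GPUs per server
    "rnn",          # model
    "hetero",       # data partition (non-IID)
    "100",          # communication rounds
    "10",           # local epochs per round
    "10",           # batch size
    "0.8",          # learning rate
    "shakespeare",  # dataset
    "./../../../data/shakespeare",  # data directory
    "0",            # CI flag
]
subprocess.run(cmd, check=True)
```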
Result:
[wandb screenshot: train/acc and test/acc curves]
Question: From the wandb screenshot, we can also see that train/acc and test/acc are not very high. Is this normal? Do you guys have any advice on how to improve it? Thanks for your attention.