woodenchild95 / FL-Simulator

Pytorch implementations of some general federated optimization methods.

Questions about FedDyn #7

Closed knight-fzq closed 5 months ago

knight-fzq commented 10 months ago

Hi, I learned a lot from your work, but now I have a question about the FedDyn implementation. Here is the command:

```shell
CUDA_VISIBLE_DEVICES=0 python train.py --non-iid --dataset CIFAR10 --model ResNet18 \
    --split-rule Pathological --split-coef 6 --active-ratio 0.1 --total-client 100 --method FedDyn \
    --local-epochs 5 --comm-rounds 800 --lr-decay 0.9995 --lamb 0.1
```

Is this the same as the setting used in your paper, FedSMOO?

woodenchild95 commented 10 months ago

@knight-fzq Thank you for your attention to our work. For all the ADMM-based methods, I remember the learning rate decay should be set much larger than in the SGD-based methods. I tried a line search over [0.995, 0.99995], and selections like 0.9995 and 0.9998 may work better. The proxy coefficient can be selected as 0.1 or 0.01. Hope this helps!
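A minimal sketch of why these two hyperparameters matter, assuming the standard FedDyn formulation and a multiplicative per-round LR decay. This is not the repository's actual code: the function names (`feddyn_local_loss`, `lr_at_round`) are hypothetical, and only the flag names (`--lamb`, `--lr-decay`) come from the thread.

```python
# Hypothetical illustration (not FL-Simulator's implementation) of FedDyn's
# local objective and the per-round multiplicative learning-rate decay.
import numpy as np

def feddyn_local_loss(w, w_global, local_grad_term, lamb, base_loss):
    """FedDyn local objective: task loss, minus the client's dynamic
    (ADMM-style) linear correction term, plus a quadratic proximal
    penalty pulling w toward the server model; lamb maps to --lamb."""
    linear = -np.dot(local_grad_term, w)             # dynamic correction term
    prox = 0.5 * lamb * np.sum((w - w_global) ** 2)  # proximal penalty
    return base_loss(w) + linear + prox

def lr_at_round(base_lr, lr_decay, comm_round):
    """Learning rate at a given communication round with multiplicative
    decay, e.g. --lr-decay 0.9995."""
    return base_lr * (lr_decay ** comm_round)

# With decay 0.9995 the LR after 800 rounds is still roughly 67% of the
# base rate, whereas 0.995 would shrink it to under 2% -- this is why
# ADMM-based methods want a decay factor much closer to 1.
print(lr_at_round(0.1, 0.9995, 800))
print(lr_at_round(0.1, 0.995, 800))
```

Plugging in the thread's values (`--lr-decay 0.9995`, `--lamb 0.1`) shows the decay stays in the range the line search over [0.995, 0.99995] explored.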