Sorry for the delayed response. I have been quite busy recently.
Please note line 19, where a specific optimizer, PerturbedGradientDescent, is used to accelerate training.
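For readers unfamiliar with it, here is a minimal sketch of what a proximal-term optimizer of this kind typically looks like. This is illustrative only and the repository's actual PerturbedGradientDescent may differ in its signature and details:

```python
import torch
from torch.optim import Optimizer

class PerturbedGradientDescent(Optimizer):
    """SGD variant that adds FedProx's proximal correction mu * (w - w_global)
    to each parameter's gradient. Illustrative sketch, not the repo's exact code."""

    def __init__(self, params, lr=0.01, mu=0.0):
        defaults = dict(lr=lr, mu=mu)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, global_params):
        # global_params: the parameters received from the server this round
        for group in self.param_groups:
            for p, g in zip(group['params'], global_params):
                if p.grad is None:
                    continue
                # gradient step with the proximal pull toward the global model
                d_p = p.grad + group['mu'] * (p - g)
                p.add_(d_p, alpha=-group['lr'])
```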
Hello, for algorithms like FedProx and SCAFFOLD, the special optimizers they use are not as effective as the Adam optimizer I used for the other algorithms. Can this setup still serve as a fair comparative experiment?
Sure, you can adopt any optimizer you want by modifying their code and study whether other optimizers are more suitable.
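If you do swap in Adam for FedProx, one way to preserve its behavior is to move the proximal term into the loss instead of the optimizer step. A rough sketch under assumed names (`model`, `global_model`, `train_loader`, `mu` are hypothetical, not the repository's API):

```python
import torch
import torch.nn.functional as F

def train_fedprox_with_adam(model, global_model, train_loader, mu=0.01, lr=1e-3, epochs=1):
    # Frozen copy of the global weights for the proximal term
    global_params = [p.detach().clone() for p in global_model.parameters()]
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            # FedProx proximal term: (mu / 2) * ||w - w_global||^2
            prox = sum((p - g).pow(2).sum()
                       for p, g in zip(model.parameters(), global_params))
            (loss + (mu / 2) * prox).backward()
            optimizer.step()
```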
In clientprox.py, is there a problem with how the loss is calculated in lines 47-49? Shouldn't we use the loss calculation method from lines 82-86?
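For context (this is the general FedProx objective from the paper, not a claim about what those specific lines compute), each client minimizes its local loss plus a proximal term:

$$\min_{w}\; f_k(w) + \frac{\mu}{2}\,\lVert w - w^{t}\rVert^2$$

where $f_k$ is the client's loss and $w^{t}$ is the global model of the current round. Differentiating the proximal term gives $\mu (w - w^{t})$, which is the same correction an optimizer like PerturbedGradientDescent applies inside its step, so adding the term in the loss or in the optimizer should yield equivalent gradients.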