TsingZ0 / PFLlib

37 traditional FL (tFL) or personalized FL (pFL) algorithms, 3 scenarios, and 20 datasets.
GNU General Public License v2.0
1.35k stars · 283 forks

Training issue about pFedMe #170

Closed Chen-Junbao closed 6 months ago

Chen-Junbao commented 6 months ago

After comparing your code with the official code released by the authors proposing pFedMe, I noticed a difference in server training.

Specifically, in line 53 of serverpFedMe.py in the official repository, all clients update their local models. However, in line 60 of your serverpFedMe.py, only the selected clients update their local models.

I checked the pseudocode in the paper, and it confirms that all clients update their local models, while only the selected clients send their models to the server.

[Image: pFedMe pseudocode from the paper]

I think this issue can be fixed by changing self.selected_clients to all clients, since the server only receives the selected clients' models in the receive_models function anyway.
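A minimal sketch of the suggested change. Only the names `serverpFedMe.py`, `self.selected_clients`, and `receive_models` come from the discussion above; the `Client`/`Server` classes, `join_ratio`, and attribute names here are hypothetical stand-ins, not PFLlib's actual API:

```python
import random


class Client:
    """Hypothetical client: records whether it trained this round."""

    def __init__(self, cid):
        self.cid = cid
        self.trained = False

    def train(self):
        # Stand-in for the local pFedMe update steps.
        self.trained = True


class Server:
    """Hypothetical server loop illustrating the proposed fix."""

    def __init__(self, clients, join_ratio=0.5):
        self.clients = clients
        self.num_join = max(1, int(join_ratio * len(clients)))
        self.selected_clients = []
        self.uploaded_ids = []

    def select_clients(self):
        self.selected_clients = random.sample(self.clients, self.num_join)

    def receive_models(self):
        # Only the selected clients send their models, as in the paper.
        self.uploaded_ids = [c.cid for c in self.selected_clients]

    def train_round(self):
        self.select_clients()
        # Proposed fix: ALL clients update their local models,
        # not just self.selected_clients.
        for client in self.clients:
            client.train()
        self.receive_models()
```

After one `train_round()`, every client has trained locally, but the server holds models only from the selected subset, matching the paper's pseudocode.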

TsingZ0 commented 6 months ago

First and foremost, it's important to note that PFLlib is not designed to replicate official algorithm results, but rather to facilitate a fair comparison of existing algorithms. Most federated learning (FL) algorithms presuppose partial client participation, a requirement also stipulated by the foundational FL algorithm FedAvg. Given practical constraints, the assumption that all clients participate and receive the global model in each communication iteration is unrealistic. Therefore, PFLlib employs partial client participation for all algorithms.
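Partial client participation is typically implemented by sampling a fraction of clients each round, as in FedAvg. A minimal sketch, where `join_ratio` and the uniform-sampling scheme are illustrative assumptions rather than PFLlib's exact configuration:

```python
import random


def sample_clients(client_ids, join_ratio, rng=random):
    """Uniformly sample a fixed fraction of clients for one round."""
    num_join = max(1, int(join_ratio * len(client_ids)))
    return rng.sample(client_ids, num_join)


# Each communication round, only the sampled subset trains and
# uploads its model; the rest stay idle, reflecting real systems
# where not every client is available.
participants = sample_clients(list(range(100)), join_ratio=0.1)
```

Under this setting, applying the same sampling to every algorithm (rather than letting some algorithms assume full participation) is what keeps the comparison fair.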

Additionally, FL algorithms are applicable to a range of fundamental system settings, including client selection strategies, total client numbers, tasks, model architectures, etc. These represent the hyperparameter settings of the FL system, particularly when the algorithm is not tailored to specific configurations.

Chen-Junbao commented 6 months ago

I see. Thank you for your reply.