LeiDu-dev / FedSGD

Federated learning via stochastic gradient descent

I modified the client code from the original to a meta-learning version, but got low test accuracy on the server. #3

Open ADAM0064 opened 2 years ago

ADAM0064 commented 2 years ago

I'm doing research on federated meta-learning, and I decided to follow the MAML approach by using the two-step gradient descent trick on the client side. However, when I modified the client code in this repository from the original version to the meta-learning version, I got very low test accuracy on the server: it rises to only about 20% at most. I have no idea why. Could anyone offer some advice? My client code is attached: client.zip
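For reference, here is a minimal sketch of the two-step (inner/outer) MAML-style client update being described, assuming PyTorch >= 2.0 and a cross-entropy classification task. The function and variable names (`maml_client_update`, `support_batch`, `query_batch`, `inner_lr`) are hypothetical and not taken from client.zip or this repo:

```python
import torch
import torch.nn.functional as F

def maml_client_update(model, support_batch, query_batch, inner_lr=0.01):
    """One MAML-style client step: adapt on the support set, then compute
    meta-gradients on the query set w.r.t. the ORIGINAL parameters."""
    x_s, y_s = support_batch
    x_q, y_q = query_batch

    # Inner step: one gradient-descent step on the support set.
    # create_graph=True keeps the graph so the outer step is second-order.
    params = dict(model.named_parameters())
    support_loss = F.cross_entropy(
        torch.func.functional_call(model, params, (x_s,)), y_s)
    grads = torch.autograd.grad(
        support_loss, list(params.values()), create_graph=True)
    adapted = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Outer step: evaluate the adapted weights on the query set. The
    # gradients w.r.t. the original params are what the client sends up.
    query_loss = F.cross_entropy(
        torch.func.functional_call(model, adapted, (x_q,)), y_q)
    meta_grads = torch.autograd.grad(query_loss, list(params.values()))
    return dict(zip(params.keys(), meta_grads))
```

A common pitfall with this pattern is returning the gradients of the *adapted* parameters instead of the meta-gradients w.r.t. the original parameters, which would make the server aggregate the wrong quantity.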

LeiDu-dev commented 2 years ago

Sorry, I'm so busy at work that it's hard to find time to review your code. But I can give you a debugging idea: check that the client gradients are summed correctly. First set the number of clients to 2, then print the gradients of client 0 and client 1 separately, and compare their sum with the aggregated gradient the program actually computes.
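A rough sketch of that two-client check, assuming each client's per-round gradients are available as a dict of tensors keyed by parameter name (`grads_c0`, `grads_c1`, and `aggregated_grads` are hypothetical names, not this repo's API):

```python
import torch

def check_gradient_sum(grads_c0, grads_c1, aggregated_grads, atol=1e-6):
    """Verify the aggregate equals the elementwise sum of the two clients'
    gradients. If the server uses a weighted average instead of a plain sum,
    rescale accordingly before comparing."""
    for name in grads_c0:
        manual_sum = grads_c0[name] + grads_c1[name]
        if not torch.allclose(manual_sum, aggregated_grads[name], atol=atol):
            diff = (manual_sum - aggregated_grads[name]).abs().max().item()
            print(f"mismatch in {name}: max abs diff {diff:.3e}")
```

If a parameter's aggregate never matches the manual sum, the aggregation step is the likely culprit rather than the meta-learning modification itself.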

KyonQi commented 1 year ago

Same question :(

Have you solved this problem? It seems that the client model converges, while the server model still only reaches a low test accuracy (0.1 - 0.2).