CharlieDinh / pFedMe

Personalized Federated Learning with Moreau Envelopes (pFedMe) using Pytorch (NeurIPS 2020)

Is Per-FedAvg implemented properly? #8

Closed chuanting closed 3 years ago

chuanting commented 3 years ago

In the code, when training Per-FedAvg, there are two steps, and each step samples a batch of data and performs a parameter update. But in the MAML framework, I think the first step should obtain a fast weight, and the second step should update the parameters based on the fast weight from the first step. So why do you update the parameters twice? Does the fundamental difference between Per-FedAvg and FedAvg lie in the former performing a two-step update while the latter performs a one-step update? Is this fair to FedAvg?
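To make the distinction concrete, here is a minimal sketch on a hypothetical toy quadratic loss (the function names and loss are illustrative, not from the repo). Two plain SGD steps continue from the already-updated weight, whereas a first-order MAML step computes a fast weight and then updates the *original* weight using the gradient taken at that fast weight:

```python
def grad(w, batch):
    # Gradient of a toy quadratic loss 0.5 * (w - mean(batch))**2.
    return w - sum(batch) / len(batch)

def two_plain_sgd_steps(w, b1, b2, alpha, beta):
    # Two independent SGD updates, as described in the question above.
    w = w - alpha * grad(w, b1)
    return w - beta * grad(w, b2)

def fo_maml_step(w, b1, b2, alpha, beta):
    # First-order MAML: compute a fast weight on batch b1, then update the
    # ORIGINAL weight with the gradient evaluated at the fast weight on b2.
    w_fast = w - alpha * grad(w, b1)
    return w - beta * grad(w_fast, b2)
```

Both variants evaluate the second gradient at the same adapted point; they differ only in which weight the β-step starts from, which is exactly the discrepancy raised here.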

CharlieDinh commented 3 years ago

Hi. In the Per-FedAvg paper, there are two ways to implement the second update. We implemented the first one, in which the Hessian is approximated by a gradient-descent update with a new mini-batch. So the second step is simply another gradient-descent update with a new batch and step size β.

CharlieDinh commented 3 years ago

When the code was implemented, the earlier version of the paper only proposed the first way to approximate the Hessian, so we didn't implement the second approach.

CharlieDinh commented 3 years ago

The fundamental idea of both pFedMe and Per-FedAvg is to generate a well-generalized global model that can be personalized into each client's personalized model using only a few (1 to k) gradient steps. Also, in Per-FedAvg, the model is trained on one batch of test data before evaluation, so the personalized model is always better than the global model alone.
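The evaluation protocol described above (fine-tune the global model on one local batch before testing) can be sketched as follows; the quadratic loss and function names are hypothetical stand-ins, not the repo's code:

```python
def personalize(global_w, batch, lr, steps=1):
    # Take a few SGD steps on one local batch, starting from the global
    # weight, before evaluation (toy loss: 0.5 * (w - mean(batch))**2).
    w = global_w
    for _ in range(steps):
        w = w - lr * (w - sum(batch) / len(batch))
    return w

def loss(w, batch):
    # The same toy quadratic loss, used for evaluation.
    m = sum(batch) / len(batch)
    return 0.5 * (w - m) ** 2
```

Even a single personalization step cannot increase the loss on that local batch, which is why the personalized model evaluates at least as well as the global one in this toy setting.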

chuanting commented 3 years ago

> Hi. In the Per-FedAvg paper, there are two ways to implement the second update. We implemented the first one, in which the Hessian is approximated by a gradient-descent update with a new mini-batch. So the second step is simply another gradient-descent update with a new batch and step size β.

Hi Canh, thanks for your reply, but I cannot agree with you. Please see below:

[screenshot: problem]
chuanting commented 3 years ago

Please see above for details.

sshpark commented 3 years ago

> Hi. In the Per-FedAvg paper, there are two ways to implement the second update. We implemented the first one, in which the Hessian is approximated by a gradient-descent update with a new mini-batch. So the second step is simply another gradient-descent update with a new batch and step size β.
>
> Hi Canh, thanks for your reply, but I cannot agree with you. Please see below:
>
> [screenshot: problem]

I agree with your point. In FO Per-FedAvg, the second update (i.e., updating the meta-model) relies on the loss and the gradient evaluated at the fast weight from the first step. The following MATLAB code was provided by the author of Per-FedAvg.

```matlab
% Local updates (first-order Per-FedAvg)
for i = A

    % Copy the current global weights/biases into local variables.
    lw12 = w12;  lw23 = w23;  lw34 = w34;
    lb12 = b12;  lb23 = b23;  lb34 = b34;

    for t = 1:tau

        % Step 1: gradient on batch B_1 at the current local weights.
        B_1 = randperm(user_l(i), D_i);
        [lgw12,lgw23,lgw34, lgb12,lgb23,lgb34] = grad_batch(lw12,lw23,lw34, lb12,lb23,lb34, Dat(:,B_1,i), Lab(:,B_1,i), D_i);

        % Step 2: gradient on batch B_2 at the fast (alpha-adapted) weights.
        B_2 = randperm(user_l(i), D_o);
        [lgw12,lgw23,lgw34, lgb12,lgb23,lgb34] = grad_batch(lw12-al*lgw12, lw23-al*lgw23, lw34-al*lgw34, lb12-al*lgb12, lb23-al*lgb23, lb34-al*lgb34, Dat(:,B_2,i), Lab(:,B_2,i), D_o);

        % Meta update: step the ORIGINAL local weights with the gradient
        % evaluated at the fast weights.
        lw12 = lw12 - be*lgw12;  lw23 = lw23 - be*lgw23;  lw34 = lw34 - be*lgw34;
        lb12 = lb12 - be*lgb12;  lb23 = lb23 - be*lgb23;  lb34 = lb34 - be*lgb34;

    end

    % Accumulate this client's result into the running average
    % for the new global model.
    slw12 = slw12 + lw12/(r*n);  slw23 = slw23 + lw23/(r*n);  slw34 = slw34 + lw34/(r*n);
    slb12 = slb12 + lb12/(r*n);  slb23 = slb23 + lb23/(r*n);  slb34 = slb34 + lb34/(r*n);

end
```
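For reference, the same two-batch loop can be sketched in Python on a toy 1-D linear model. Here `grad_batch` is a hypothetical stand-in for the MATLAB function above; the structure mirrors it, with the β-step taken from the original local weight using the gradient at the α-adapted fast weight:

```python
import random

def grad_batch(w, xs, ys):
    # Mean-squared-error gradient for a 1-D linear model y = w * x
    # (a hypothetical stand-in for the MATLAB grad_batch above).
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def per_fedavg_local(w, data, alpha, beta, tau, d_i, d_o, rng):
    # Mirrors the MATLAB loop: for each of tau steps, sample batch B_1,
    # take an alpha-step to get the fast weight, then update the ORIGINAL
    # local weight with the gradient at the fast weight on batch B_2.
    xs, ys = data
    for _ in range(tau):
        b1 = rng.sample(range(len(xs)), d_i)
        g1 = grad_batch(w, [xs[j] for j in b1], [ys[j] for j in b1])
        w_fast = w - alpha * g1
        b2 = rng.sample(range(len(xs)), d_o)
        g2 = grad_batch(w_fast, [xs[j] for j in b2], [ys[j] for j in b2])
        w = w - beta * g2
    return w
```

Note that `w` is only ever advanced by the β-step; the α-step exists solely to decide where the second gradient is evaluated, which is the point of disagreement in this thread.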
CharlieDinh commented 3 years ago

Hi @sshpark and @chuanting,

Thank you all.

Yes, I think my implementation is not correct with respect to the second update in Per-FedAvg, and @chuanting found this issue. I am planning to update it after I finish my current project, but if anyone can contribute a fix, it would be wonderful.

Regards, Canh Dinh

sshpark commented 3 years ago

> Hi @sshpark and @chuanting,
>
> Thank you all.
>
> Yes, I think my implementation is not correct with respect to the second update in Per-FedAvg, and @chuanting found this issue. I am planning to update it after I finish my current project, but if anyone can contribute a fix, it would be wonderful.
>
> Regards, Canh Dinh

Hi @CharlieDinh, I have actually tried to fix this issue and opened a pull request. You can check it out when you have a minute.

Regards, sshpark

CharlieDinh commented 3 years ago

Hi @sshpark,

Thanks for your commit. I just merged it into the main branch. Have you tested the performance of Per-FedAvg?

sshpark commented 3 years ago

> Hi @sshpark,
>
> Thanks for your commit. I just merged it into the main branch. Have you tested the performance of Per-FedAvg?

Hi @CharlieDinh,

I am sorry for not providing a full performance test with this commit. I have applied it to my current project, where the updated version of Per-FedAvg does not differ much from the previous version in aspects such as accuracy. I will provide a full performance test after finishing my current project.

CharlieDinh commented 3 years ago

Thanks @sshpark.