Yangfan-Jiang / Federated-Learning-with-Differential-Privacy

Implementation of a DP-based federated learning framework using PyTorch
MIT License

improve training accuracy #6

Closed · xdroc closed this issue 2 years ago

xdroc commented 2 years ago

First, during the local update, batch_grads is reset to zero in place. When the gradients are assigned back, this should go through a copy of batch_grads; otherwise the saved reference is zeroed along with batch_grads. Second, applying the gradient update per batch seems to make more sense.
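
For illustration, a minimal sketch of the aliasing problem (toy values; `batch_grads` here just stands in for the tensor in the local-update code):

```python
import torch

# Toy example: storing a tensor by reference and then zeroing it in place
# also zeroes the stored "copy", because both names alias the same tensor.
batch_grads = torch.tensor([0.5, -1.2, 0.3])
saved = batch_grads          # alias, not a copy
batch_grads.zero_()          # in-place reset for the next batch
print(saved)                 # tensor([0., 0., 0.]) -- gradient lost

# Fix: take an independent copy before the in-place reset.
batch_grads = torch.tensor([0.5, -1.2, 0.3])
saved = batch_grads.clone()  # or copy.deepcopy(...) for nested structures
batch_grads.zero_()
print(saved)                 # tensor([ 0.5000, -1.2000,  0.3000])
```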

Yangfan-Jiang commented 2 years ago

Can you please confirm that your code follows the DP-SGD algorithm exactly? Thanks.

I think the local update should be performed at the end of the outer loop ("e in E") rather than in the inner loop. Otherwise, we would need to set the number of updates T differently when computing the DP noise.
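
For concreteness, here is a minimal sketch of the structure I mean, assuming DP-SGD-style per-sample clipping (`sample_lot`, `clip_C`, and `sigma` are placeholder names, not this repo's exact API):

```python
import torch
import torch.nn.functional as F

# Sketch: one noisy update per outer iteration "e in E", so the number of
# noisy updates is T = E, which is what the privacy accountant should use.
def local_update(model, sample_lot, E, lr, clip_C, sigma):
    for e in range(E):                              # outer loop "e in E"
        lot = sample_lot()                          # the samples in the "lot"
        accum = [torch.zeros_like(p) for p in model.parameters()]
        for x, y in lot:                            # each (x, y) is a single-example batch
            model.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            # clip the per-sample gradient to L2 norm clip_C
            norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
            scale = min(1.0, clip_C / float(norm + 1e-12))
            for g, p in zip(accum, model.parameters()):
                g.add_(p.grad, alpha=scale)         # accumulate clipped gradients
        # single noisy gradient step at the END of the outer iteration
        with torch.no_grad():
            for g, p in zip(accum, model.parameters()):
                noise = torch.normal(0.0, sigma * clip_C, size=g.shape)
                p.sub_(lr * (g + noise) / len(lot))
```

If the update instead ran inside the inner loop, T would become E times the number of batches, and the noise scale would have to be recalibrated accordingly.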

Note that _sample_dataloader represents the samples in the "Lot".
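
Roughly, the lot sampling can be pictured as follows (a hypothetical sketch, not the repo's exact implementation of _sample_dataloader):

```python
import numpy as np
from torch.utils.data import DataLoader, SubsetRandomSampler

# Hypothetical sketch: each call draws a fresh random subset ("lot") of size
# lot_size from the client's local dataset, mirroring DP-SGD's lot sampling.
def sample_lot_loader(dataset, lot_size):
    idx = np.random.choice(len(dataset), size=lot_size, replace=False)
    return DataLoader(dataset, batch_size=lot_size,
                      sampler=SubsetRandomSampler(idx))
```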