Yangfan-Jiang / Federated-Learning-with-Differential-Privacy

Implementation of a DP-based federated learning framework using PyTorch
MIT License

How to use global_update_grad()? #2

Closed · cherrytora closed this issue 3 years ago

cherrytora commented 3 years ago

Hello : )

I used global_update_grad() because I'm trying to reproduce the results in the original paper. I've already added epsilon and delta to the FLServer class and to fl_param, but it still raised an error (see the attached screenshot). Did I make a mistake or miss something? Thank you!
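For illustration, a minimal sketch of the kind of setup being described. The module path, dictionary keys, and the behaviour of global_update_grad() below are assumptions for readability, not the repository's actual API; the repo's own example notebook is the authoritative reference.

```python
# Hypothetical sketch -- names and keys are assumptions, not the repo's exact API.
from FLModel import FLServer   # assumed module/class location

fl_param = {
    'client_num': 4,    # number of federated clients (assumed key)
    'E': 1,             # local epochs per communication round (assumed key)
    'lr': 0.1,          # local learning rate (assumed key)
    'clip': 1.0,        # L2 clipping bound used for sensitivity (assumed key)
    'eps': 4.0,         # privacy budget epsilon (assumed key)
    'delta': 1e-5,      # privacy parameter delta (assumed key)
    'tot_T': 10,        # total number of communication rounds (assumed key)
}

server = FLServer(fl_param)

for t in range(fl_param['tot_T']):
    # global_update_grad() is assumed to run one round of noisy gradient
    # aggregation (as opposed to aggregating model weights)
    server.global_update_grad()
```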

Yangfan-Jiang commented 3 years ago

Hi, there were some bugs in the code. Please try again with the latest version I just committed.

Yangfan-Jiang commented 3 years ago

Also, this code does not exactly reproduce any particular research paper, so be careful about some of the details (e.g., sensitivity computation, gradient clipping). You may have to implement those parts yourself according to your needs.
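For reference, a common pattern for the parts mentioned above (clipping a client update to bound its sensitivity, then adding Gaussian noise calibrated to epsilon and delta) looks roughly like the sketch below. The helper is illustrative only and is not part of this repository; the sensitivity choice in particular must match your own neighbouring-dataset definition.

```python
import math
import torch

def clip_and_perturb(update: torch.Tensor, clip: float,
                     eps: float, delta: float) -> torch.Tensor:
    """Illustrative helper (not part of this repository): clip an update to an
    L2 bound, then perturb it with the basic Gaussian mechanism."""
    # 1. Clipping bounds the L2 norm so the sensitivity of the release is known.
    scale = min(1.0, clip / (update.norm(2).item() + 1e-12))
    clipped = update * scale

    # 2. Gaussian mechanism: sigma = S * sqrt(2 * ln(1.25 / delta)) / eps,
    #    where S is the L2 sensitivity. S is taken to be `clip` here; the exact
    #    value depends on your neighbouring-dataset definition, which is the
    #    kind of detail the author warns must match your own setting.
    sigma = clip * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return clipped + torch.randn_like(clipped) * sigma
```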

cherrytora commented 3 years ago

I see. Thank you very much for your reply and suggestions!