[Open] wanglun1996 opened this issue 2 years ago
Some preliminary results: the epsilon at 40,000 rounds is 0.4929065158395696.
{"batchsize": 64, "localiters": 1, "lr": 0.1, "gamma": 1.0, "lmb": 0, "checkpoint": 1000, "perround": 6, "rounds": 40000, "data_path": "/mnt/fednewsrec/data", "embedding_path": "/mnt/fednewsrec/wordvec", "output_path": ".", "sweep": null, "metrics_format": "date", "noise_multiplier": 0.3, "clip_l2_norm": 0.1, "delta": 0.001, "quantize_scale": 100000000.0, "bitwidth": 16, "device": 0}
@wanglun1996 - it looks to me like DP is disabled in the current version of simulate.py. Do you have a working version?
@simra I just pushed the most recent version with the Skellam mechanism to the lun/gaussian_dp branch. I will create a PR once I double-check the correctness of the accounting mechanisms.
Sounds good. I'm running into some issues with the accountant and memory consumption: some parameterizations require a lot of memory. Maybe in those situations we should fall back to simple RDP accounting.
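For the RDP fallback, here is a minimal sketch using Opacus's `RDPAccountant` with the parameters from the config above. The user population `N` is a hypothetical stand-in, since the total number of clients isn't given in this thread:

```python
from opacus.accountants import RDPAccountant

N = 100_000              # hypothetical total user population (not given here)
perround = 6             # clients sampled per round, from the config
sample_rate = perround / N

accountant = RDPAccountant()
for _ in range(40_000):  # one accounting step per federated round
    accountant.step(noise_multiplier=0.3, sample_rate=sample_rate)

eps = accountant.get_epsilon(delta=1e-3)
print(f"epsilon after 40000 rounds: {eps:.4f}")
```

RDP accounting is cheap in memory because it only tracks a fixed grid of Rényi orders, which is why it makes sense as a fallback when the exact discrete accountant blows up.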
We should add distributed differential privacy to the federated recommender. Distributed DP achieves a better privacy-utility trade-off than local DP: each client adds far less noise, yet the aggregated result still satisfies a strong central privacy guarantee.
To do so, we need to (1) choose a discrete DP mechanism, and (2) discretize the gradients (torch.round might be useful).
Candidate mechanisms include the discrete Gaussian, Skellam, and Poisson binomial mechanisms; see the sketch below.
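To make steps (1) and (2) concrete, here is a minimal sketch of Skellam noising on a quantized update. It relies on the fact that the difference of two i.i.d. Poisson(mu) draws is Skellam(mu, mu)-distributed; the parameter names and the modular wraparound (as used for secure aggregation) are assumptions for illustration, not the branch's actual code:

```python
import torch

def skellam_mechanism(update, scale=1e5, mu=10.0, bitwidth=16):
    """Quantize a clipped update and add discrete Skellam noise.

    Skellam(mu, mu) noise is sampled as the difference of two i.i.d.
    Poisson(mu) draws; the result is reduced modulo 2**bitwidth so it
    fits the fixed-width integers used in secure aggregation.
    Names and defaults are illustrative assumptions.
    """
    modulus = 2 ** bitwidth
    # Step (2): discretize the (already clipped) update onto an integer grid.
    quantized = torch.round(update * scale).to(torch.int64)
    # Step (1): add integer-valued Skellam noise.
    rates = torch.full_like(quantized, mu, dtype=torch.float64)
    noise = (torch.poisson(rates) - torch.poisson(rates)).to(torch.int64)
    return (quantized + noise) % modulus
```

The server would sum the noised integer updates modulo `2**bitwidth`, then map back to the centered range and divide by `scale` to recover the aggregate.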