dpsa4fl / overview

Differential Privacy for Federated Learning with Secure Aggregation

figure out privacy accumulation #29

Open ooovi opened 1 year ago

ooovi commented 1 year ago

we know the privacy guarantee for a single training step; we need to infer the guarantee for the whole training procedure. we could:

ooovi commented 1 year ago

advanced composition gives us a bound for non-stochastic GD: https://programming-dp.com/ch6.html#advanced-composition-for-approximate-differential-privacy
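for reference, a minimal sketch of the advanced composition bound from the linked chapter: composing k adaptive (ε, δ)-DP steps yields (ε', kδ + δ')-DP with ε' = ε·sqrt(2k·ln(1/δ')) + k·ε·(e^ε − 1). the function name and example parameters here are illustrative, not from our codebase:

```python
import math

def advanced_composition(eps, delta, k, delta_prime):
    """Advanced composition bound for k adaptive (eps, delta)-DP steps.

    Returns (eps_total, delta_total) such that the composed mechanism is
    (eps_total, delta_total)-DP, for any choice of slack delta_prime > 0.
    """
    eps_total = (eps * math.sqrt(2 * k * math.log(1 / delta_prime))
                 + k * eps * (math.exp(eps) - 1))
    delta_total = k * delta + delta_prime
    return eps_total, delta_total

# e.g. 100 steps, each (0.1, 1e-6)-DP, with slack delta' = 1e-6
eps_total, delta_total = advanced_composition(0.1, 1e-6, 100, 1e-6)
```

note the sqrt(k) scaling of the first term vs. the linear k·ε of simple composition — that's the whole point of using advanced composition here.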

client-side subsampling should let us get a tighter bound, as with the accountant, but we can't just use the accountant from the linked paper: in non-federated DP-SGD, one samples once, computes the per-sample gradients, and then clips and noises, while we sample multiple times, compute the sample gradients, and sum them (locally), and only then clip and noise (on the server)
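to make the difference in clipping order concrete, a toy numpy sketch of the two pipelines — all names are hypothetical, sigma and the clipping norm c are illustrative, and it reads the issue as clipping/noising each client's local sum (the text leaves open exactly what the server clips):

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(v, c):
    """Scale v down so its L2 norm is at most c."""
    norm = np.linalg.norm(v)
    return v * min(1.0, c / norm) if norm > 0 else v

def central_dpsgd_step(grads, c, sigma):
    """Non-federated DP-SGD: clip each per-sample gradient,
    sum the clipped gradients, then add Gaussian noise once."""
    clipped = [clip(g, c) for g in grads]
    return sum(clipped) + rng.normal(0, sigma * c, size=grads[0].shape)

def federated_step(local_batches, c, sigma):
    """Our setting: each client sums its sample gradients locally;
    only the local sum is clipped and noised, then sums are aggregated."""
    local_sums = [np.sum(batch, axis=0) for batch in local_batches]
    noised = [clip(s, c) + rng.normal(0, sigma * c, size=s.shape)
              for s in local_sums]
    return sum(noised)
```

the accountant's subsampling amplification argument is about the per-sample sensitivity in `central_dpsgd_step`; in `federated_step` the clipped quantity is a sum over a client's samples, so the sensitivity analysis (and hence the accountant) has to change.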