junyechen1996 / draft-chen-cfrg-vdaf-pine

VDAF to support aggregating real number vectors with L2-norm bound

Explain why bounded L2 norm is important for federated learning #22

Open cjpatton opened 1 year ago

cjpatton commented 1 year ago
          Good point. Most papers I've read simply state that the L2-norm is a popular way to bound a client's contribution; I don't have a good citation for it yet, so I'll leave this item open until I find a good source to cite. Intuitively, an L2-norm bound is a good way of limiting a client's contribution across all coordinates when the exact distribution across coordinates is unknown (some coordinates are higher, some are lower). A per-coordinate range check is not as flexible: even if each coordinate is bounded by 1.0, the overall L2 norm of the vector can be as large as sqrt(d), so a Client can still send a vector with much greater influence than other clients'.
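The sqrt(d) claim above is easy to check numerically. The sketch below (illustrative only, not from the draft) builds a vector that passes a per-coordinate bound of 1.0 on every coordinate yet has L2 norm sqrt(d):

```python
import math

# A vector whose every coordinate sits exactly at the per-coordinate
# limit of 1.0. Each coordinate individually passes the range check,
# but the L2 norm is sqrt(d), which grows with the dimension d.
d = 10_000
vec = [1.0] * d

l2_norm = math.sqrt(sum(x * x for x in vec))
# l2_norm == sqrt(10_000) == 100.0, far above any per-coordinate limit
```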

Having a bounded L2-norm also lets federated learning applications calibrate DP noise (e.g., Gaussian) to the established norm bound: the Client first clips its vector to l2_norm_bound, then applies noise.
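The clip-then-noise step described above can be sketched as follows. This is a minimal illustration of the general pattern, not code from the draft; the function names and the choice of sigma are hypothetical, and a real DP deployment would derive sigma from the privacy budget and the norm bound:

```python
import math
import random

def clip_l2(vec, l2_norm_bound):
    """Scale vec down (if needed) so its L2 norm is at most l2_norm_bound."""
    norm = math.sqrt(sum(x * x for x in vec))
    if norm <= l2_norm_bound:
        return list(vec)
    scale = l2_norm_bound / norm
    return [x * scale for x in vec]

def add_gaussian_noise(vec, sigma, rng=random):
    """Add independent Gaussian noise with standard deviation sigma
    to each coordinate. In a DP deployment, sigma is calibrated to
    the L2-norm bound (the sensitivity) and the privacy parameters."""
    return [x + rng.gauss(0.0, sigma) for x in vec]

# Client-side flow: clip first, then noise.
l2_norm_bound = 1.0
gradient = [0.6, -0.8, 0.5]                  # L2 norm ~ 1.118 > bound
clipped = clip_l2(gradient, l2_norm_bound)   # L2 norm is now exactly 1.0
noisy = add_gaussian_noise(clipped, sigma=0.1)
```

Clipping before noising is what makes the norm bound useful for DP: it caps the sensitivity of each client's contribution, so the noise scale needed for a given privacy guarantee does not depend on how large a raw gradient might be.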

_Originally posted by @junyechen1996 in https://github.com/junyechen1996/draft-chen-cfrg-vdaf-pine/pull/18#discussion_r1301379275_

cjpatton commented 8 months ago

@junyechen1996 I still think a bit more context here would be helpful. Maybe a short paragraph in the intro?