pytorch / opacus

Training PyTorch models with differential privacy
https://opacus.ai
Apache License 2.0

Example of Privacy Leak on Image datasets #94

Closed: kamathhrishi closed this issue 3 years ago

kamathhrishi commented 3 years ago

Hello Opacus team, I would like to know whether there are any examples that demonstrate how training with DP can mitigate model inversion attacks, membership inference, and other privacy attacks. I am looking for this for RGB image classifiers. Thank you.

alexandresablayrolles commented 3 years ago

Hi,

Differential privacy is an effective defense against membership inference. For example, if your prior belief that an image is present in the dataset is 50% (before looking at the model), then after looking at the model the posterior probability is no more than 50% + epsilon / 4 (see Property 1 in Section 3.4 of [1]).
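
As a quick sanity check, that bound can be evaluated directly. This is just a minimal sketch of the arithmetic; the epsilon values below are illustrative, not taken from any particular training run:

```python
def membership_posterior_bound(epsilon: float, prior: float = 0.5) -> float:
    """Upper bound on an attacker's posterior belief that a sample was in
    the training set, given an epsilon-DP model and the stated prior.
    Follows Property 1, Section 3.4 of [1]: posterior <= prior + epsilon / 4
    (capped at 1, since it is a probability).
    """
    return min(1.0, prior + epsilon / 4)

for eps in [0.1, 0.5, 1.0, 2.0]:
    print(f"epsilon={eps:.1f} -> posterior <= {membership_posterior_bound(eps):.3f}")
```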

Of course, the actual mitigation might be better than that, but this at least gives an upper bound.
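
For concreteness, here is a minimal sketch of DP training with Opacus's `PrivacyEngine`, using the current 1.x `make_private` API (which may differ from the API at the time this issue was filed). The model, data, and hyperparameters are toy placeholders standing in for your RGB image classifier; the resulting epsilon is what you would plug into the bound above:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy stand-ins for an RGB image classifier setup (placeholders).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = optim.SGD(model.parameters(), lr=0.05)
dataset = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
data_loader = DataLoader(dataset, batch_size=32)

privacy_engine = PrivacyEngine()
model, optimizer, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    noise_multiplier=1.0,  # scale of Gaussian noise added to clipped gradients
    max_grad_norm=1.0,     # per-sample gradient clipping threshold
)

criterion = nn.CrossEntropyLoss()
for images, labels in data_loader:  # one epoch, for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# The accountant tracks the privacy budget spent so far.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"trained with (epsilon={epsilon:.2f}, delta=1e-5)-DP")
```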

[1] White-box vs Black-box: Bayes Optimal Strategies for Membership Inference, ICML 2019