bargavj / EvaluatingDPML

This project's goal is to evaluate the privacy leakage of differentially private machine learning models.

Yeom Colluding adversary #18

Open TTitcombe opened 4 years ago

TTitcombe commented 4 years ago

Have I understood correctly that you only implement Adversaries 1 and 2 of Yeom et al. (in yeom_membership_inference)? If so, was there a technical reason the colluding adversary (Adversary 3) was not included in your analysis?
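(For context, Adversary 2's loss-threshold test is simple to state. Below is a minimal standalone sketch with hypothetical function and argument names, not the repo's actual API:)

```python
import numpy as np

def yeom_membership_inference(per_example_loss, avg_train_loss):
    """Yeom et al., Adversary 2: predict 'member' (1) whenever a record's
    loss is at most the model's average training loss, else 'non-member'
    (0). A sketch, not this repo's exact implementation."""
    per_example_loss = np.asarray(per_example_loss)
    return (per_example_loss <= avg_train_loss).astype(int)

# Example: losses on four candidate records, average training loss 0.5.
preds = yeom_membership_inference([0.1, 0.4, 0.9, 1.2], avg_train_loss=0.5)
print(preds)  # [1 1 0 0]
```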

evansuva commented 4 years ago

Do you mean the colluding adversary? That requires a very different, and much stronger, threat model in which the adversary controls the data owner's training process. It is an interesting attack, but in practice an adversary who can control the training algorithm can, in most cases, do far worse harm than merely enabling inference attacks.
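(To illustrate how much stronger that threat model is, here is a deliberately crude toy, entirely hypothetical and not the paper's actual construction: a trainer that colludes with the adversary can simply smuggle a fingerprint of every training record into the released artifact, making membership inference trivial regardless of overfitting.)

```python
import hashlib

def _fingerprint(record):
    # Toy fingerprint of a record; any stable hash works for illustration.
    return hashlib.sha256(repr(record).encode()).hexdigest()

def colluding_train(train_records):
    # A colluding trainer fits whatever model it likes, then hides a
    # fingerprint of each training record inside the released artifact.
    return {
        "weights": None,  # placeholder for an honestly trained model
        "hidden": {_fingerprint(r) for r in train_records},
    }

def colluding_infer(artifact, record):
    # The colluding adversary recovers membership exactly, overfit or not.
    return _fingerprint(record) in artifact["hidden"]

artifact = colluding_train([(1.0, 2.0), (3.0, 4.0)])
print(colluding_infer(artifact, (1.0, 2.0)))  # True
print(colluding_infer(artifact, (5.0, 6.0)))  # False
```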

TTitcombe commented 4 years ago

Yes, I agree that it would be an unlikely attack in practice. It is interesting primarily because it demonstrates that overfitting isn't strictly required for membership inference (the attack works on MNIST), so I imagine it would exhibit a different relationship between accuracy loss / membership advantage and epsilon.
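(For reference, membership advantage in Yeom et al. is the attack's true positive rate minus its false positive rate, and for an epsilon-DP training algorithm they bound it by exp(eps) - 1. A minimal helper, with hypothetical names:)

```python
import numpy as np

def membership_advantage(preds_on_members, preds_on_nonmembers):
    # Yeom et al.'s membership advantage: TPR - FPR of the attack.
    tpr = np.mean(preds_on_members)
    fpr = np.mean(preds_on_nonmembers)
    return tpr - fpr

eps = 1.0
print(membership_advantage([1, 1, 0, 1], [0, 1, 0, 0]))  # 0.5
print(np.exp(eps) - 1)  # Yeom et al.'s advantage bound under eps-DP
```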