privacytrustlab / ml_privacy_meter

Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
MIT License

Can differential privacy's protective effect be verified? #108

Open MrLinNing opened 1 year ago

MrLinNing commented 1 year ago

Your work is excellent, providing a great verification tool for security and privacy researchers. I would like to ask whether your method can be combined with existing differential privacy defense frameworks, such as Opacus. Would it be possible to add a tutorial showing how to verify the effectiveness of differential privacy in defending against your MIA attack method (for example, training with DP-SGD as sketched below and then auditing the resulting model)? Thank you!
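A minimal sketch of what such a tutorial might start from, assuming Opacus >= 1.0 and a standard PyTorch training setup (the model, data loader, and hyperparameters here are placeholders, and the audit step would follow the existing Privacy Meter tutorial rather than any API shown here):

```python
import torch
from torch import nn, optim
from opacus import PrivacyEngine

# Placeholder target model and CIFAR-10 loader; swap in the tutorial's own.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = optim.SGD(model.parameters(), lr=0.05)
train_loader = ...  # torch.utils.data.DataLoader over the CIFAR-10 train split

# Wrap training with DP-SGD; noise_multiplier / max_grad_norm control the privacy budget.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
)

criterion = nn.CrossEntropyLoss()
for epoch in range(10):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Report the (epsilon, delta) guarantee actually spent during training.
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"Trained with ({epsilon:.2f}, 1e-5)-DP")

# The DP-trained model could then be fed into the Privacy Meter audit pipeline
# (as in the existing CIFAR-10 tutorial) to compare its MIA ROC against the non-DP baseline.
```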

MrLinNing commented 1 year ago

Additionally, there is a puzzling issue in this tutorial. For the CIFAR-10 dataset, the training accuracy is relatively high (over 80%), but the test accuracy is quite poor (less than 50%). This is clear overfitting, and such a model has little practical value. However, if we improve the test accuracy by changing the model architecture or hyperparameters (learning rate, batch size), the resulting MIA ROC curve becomes almost indistinguishable from random guessing. In that case, the MIA attack seems meaningless. How should we understand this situation? (See the toy sketch below for what I mean by the ROC collapsing to chance.)
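To illustrate the point, here is a toy sketch of a simple loss-threshold membership attack (not Privacy Meter's own attack) on simulated per-sample losses; the loss distributions are made up purely to show how the attack AUC tracks the train/test gap:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-sample losses from a target model:
# members (training samples) vs. non-members (test samples).
member_losses = rng.exponential(scale=0.2, size=5000)      # overfit model: low loss on members
nonmember_losses = rng.exponential(scale=1.5, size=5000)   # much higher loss on non-members

# Loss-threshold attack: lower loss => predict "member".
scores = -np.concatenate([member_losses, nonmember_losses])
labels = np.concatenate([np.ones(5000), np.zeros(5000)])
print(f"Overfit model, attack AUC: {roc_auc_score(labels, scores):.3f}")   # well above 0.5

# A well-generalizing model has nearly identical loss distributions,
# so the same attack degrades to roughly random guessing (AUC ~= 0.5).
member_losses = rng.exponential(scale=1.0, size=5000)
nonmember_losses = rng.exponential(scale=1.1, size=5000)
scores = -np.concatenate([member_losses, nonmember_losses])
print(f"Generalizing model, attack AUC: {roc_auc_score(labels, scores):.3f}")
```

This is exactly the behavior I am asking about: once the generalization gap is closed, the ROC is close to the diagonal, so what is the attack (or the audit) telling us in that regime?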