OPTML-Group / Unlearn-Sparse

[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
MIT License

Questions about the evaluation of MIA efficacy #1

Closed · marsplus closed this issue 11 months ago

marsplus commented 1 year ago

Thanks for making the code public.

I have a question about this line of code: https://github.com/OPTML-Group/Unlearn-Sparse/blob/76a429959507126e900e820eed8c06f45f883fcc/evaluation/SVC_MIA.py#L67

Wouldn't the accuracy be calculated by comparing the predictions with the ground truth? Did I miss something?

ljcc0930 commented 1 year ago

> Thanks for making the code public.
>
> I have a question about this line of code: https://github.com/OPTML-Group/Unlearn-Sparse/blob/76a429959507126e900e820eed8c06f45f883fcc/evaluation/SVC_MIA.py#L67
>
> Wouldn't the accuracy be calculated by comparing the predictions with the ground truth? Did I miss something?

Thanks for the good question! Here we're doing binary classification, so we split the positive (L65-68) and negative (L70-73) classes and average the two per-class accuracies to balance their impact when the classes contain different numbers of data points. Since all samples within each split share the same ground-truth membership label, counting the predictions directly is equivalent to comparing them against the ground truth.

https://github.com/OPTML-Group/Unlearn-Sparse/blob/76a429959507126e900e820eed8c06f45f883fcc/evaluation/SVC_MIA.py#L65-L75
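For anyone following along, here is a minimal sketch of the balanced-accuracy idea behind L65-L75; it is not the repository's exact code, and the function name is mine:

```python
import numpy as np

def balanced_accuracy(preds: np.ndarray, labels: np.ndarray) -> float:
    """Average the per-class accuracies so class imbalance does not skew the score."""
    pos = preds[labels == 1]  # samples whose ground truth is "member" (positive class)
    neg = preds[labels == 0]  # samples whose ground truth is "non-member" (negative class)
    acc_pos = (pos == 1).mean() if len(pos) else 0.0  # true-positive rate
    acc_neg = (neg == 0).mean() if len(neg) else 0.0  # true-negative rate
    return 0.5 * (acc_pos + acc_neg)  # each class contributes equally regardless of size
```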

Appendix C.3 of our paper clarifies the implementation of our MIA metric. Here we count the true negatives predicted by our MIA predictor, i.e., how much of the data in the forgetting set would be classified as non-training examples. Since the forgetting set is fed in as a single class, only one of the two class branches is exercised here.
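As a rough illustration, the metric boils down to the true-negative rate on the forgetting set. This is a sketch under my own assumptions: `mia_predictor` is a hypothetical fitted binary classifier (e.g., the SVC used in SVC_MIA.py) and `forget_features` is a hypothetical feature matrix for the forgetting set:

```python
import numpy as np

def mia_efficacy(mia_predictor, forget_features: np.ndarray) -> float:
    """Fraction of forgetting-set samples predicted as non-training (true negatives)."""
    preds = mia_predictor.predict(forget_features)  # 1 = "training member", 0 = "non-member"
    return float((preds == 0).mean())  # higher = unlearned model leaks less membership signal
```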

Please feel free to let me know if you have further questions!