Closed marsplus closed 11 months ago
Thanks for making the code public.
I have a question on this line of code: https://github.com/OPTML-Group/Unlearn-Sparse/blob/76a429959507126e900e820eed8c06f45f883fcc/evaluation/SVC_MIA.py#L67
Wouldn't the accuracy be calculated by comparing the predictions with the ground truth? Am I missing something?
Thanks for the good question! Here we're doing binary classification, so we split the positive (L65-68) and negative (L70-73) classes and average their per-class accuracies to balance the impact of the two classes when they contain different numbers of data points. https://github.com/OPTML-Group/Unlearn-Sparse/blob/76a429959507126e900e820eed8c06f45f883fcc/evaluation/SVC_MIA.py#L65-L75
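To illustrate the per-class split, here is a minimal numpy sketch. It uses a hypothetical 1-D threshold attacker in place of the repo's SVC, and made-up score distributions; only the averaging pattern mirrors the linked code.

```python
import numpy as np

# Hypothetical 1-D confidence scores standing in for the classifier's output:
# members (positive class) vs. non-members (negative class), deliberately
# imbalanced in size.
rng = np.random.default_rng(0)
pos_scores = rng.normal(1.0, 1.0, size=200)   # 200 member samples
neg_scores = rng.normal(-1.0, 1.0, size=50)   # only 50 non-member samples

preds_pos = (pos_scores > 0).astype(float)    # 1 = predicted "member"
preds_neg = (neg_scores > 0).astype(float)

acc_pos = preds_pos.mean()                    # accuracy on the positive class
acc_neg = 1.0 - preds_neg.mean()              # accuracy on the negative class

# Averaging the two per-class accuracies keeps the larger class (here the
# 200 members) from dominating the reported number, which is the point of
# splitting the classes before averaging.
balanced_acc = 0.5 * (acc_pos + acc_neg)
```

A pooled accuracy over all 250 points would weight the positive class four times as heavily; the per-class average weights both classes equally regardless of their sizes.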
Appendix C.3 of our paper clarifies the implementation of our MIA metric. Here we count the true negatives predicted by our MIA predictor, i.e., how many points in the forgetting set would still be classified as training examples. That is why only the positive class is used here.
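A small sketch of that counting step, under the same hypothetical threshold-attacker assumption as above (the scores and threshold are made up for illustration):

```python
import numpy as np

# Hypothetical attacker scores for the forget set; by construction every
# forget-set point WAS a training example, so its ground-truth label is
# always "member" and no separate ground-truth vector is needed.
rng = np.random.default_rng(1)
forget_scores = rng.normal(0.5, 1.0, size=100)
preds = (forget_scores > 0).astype(float)     # 1 = still looks like a member

# The metric is simply the fraction of forget-set points the attacker
# still classifies as training data.
mia_on_forget = preds.mean()
```

Comparing `preds` against an all-ones ground-truth vector would give exactly the same number, which is why the explicit comparison can be dropped.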
Please feel free to let me know if you have further questions!