OPTML-Group / Unlearn-Sparse

[NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
MIT License

MIA-Efficiency #11

Open yasserkhalil93 opened 1 week ago

yasserkhalil93 commented 1 week ago

Hi,

Thank you for this work.

I would like to ask about the MIA-Efficacy computation.

The following snippet

svc_mia_forget_efficacy = SVC_MIA(
    shadow_train=shadow_train_loader,
    shadow_test=test_loader,
    target_train=None,
    target_test=forget_loader,
    model=model,
    transform=transform_test,
)
print("svc_mia_forget_efficacy", svc_mia_forget_efficacy)

results in the following accuracies:

svc_mia_forget_efficacy {'correctness': 0.0028000000000000247, 'confidence': 0.018199999999999994, 'entropy': 0.044399999999999995, 'm_entropy': 0.20099999999999996, 'prob': 0.11339999999999995}

How do you compute MIA-Efficacy from these values?

jinghanjia commented 1 week ago

Thank you for your interest in this work. As noted in the original paper, specifically in the section 'Towards a Full-Stack MU Evaluation,' we report the 'confidence' value as the confidence-based MIA predictor for evaluation.
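
In other words, a minimal sketch of how one could read the metric off the dictionary returned by `SVC_MIA` (the `mia_efficacy` helper and the percentage scaling are illustrative, not part of the repo's API):

```python
# Hypothetical helper: extract MIA-Efficacy from the dict returned by SVC_MIA.
# Assumption: the 'confidence' entry holds the confidence-based MIA predictor's
# success rate on the forget set, expressed as a fraction in [0, 1].

def mia_efficacy(svc_mia_result: dict) -> float:
    """Return MIA-Efficacy as a percentage, using the confidence-based predictor."""
    return svc_mia_result["confidence"] * 100.0

# Example with the numbers reported in this issue:
result = {
    "correctness": 0.0028,
    "confidence": 0.0182,
    "entropy": 0.0444,
    "m_entropy": 0.2010,
    "prob": 0.1134,
}
print(f"MIA-Efficacy: {mia_efficacy(result):.2f}%")
```

So for the run above, the reported MIA-Efficacy would be the 'confidence' value, roughly 1.82%; the other entries ('correctness', 'entropy', etc.) are alternative MIA predictors and are not the reported metric.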