Hi, thanks for asking.
In our paper, we aim to understand the unintended privacy leakage caused by machine unlearning; thus, the classical MIA does not have access to the unlearned model.
For both the classical MIA and our attack, we only discriminate between the <in, out> and <out, out> cases by default. We study the privacy leakage of the <in, in> case in Section 6.4.
We use the shadow model paradigm for both membership inference attacks. Because the classical MIA only accesses the original model's output, the two attacks differ only in the feature dimension.
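To make the difference in feature dimension concrete, here is a minimal sketch, assuming the attack feature is simply the concatenation of posterior vectors (the function and variable names are illustrative, not the repository's actual API):

```python
import numpy as np

def classical_mia_feature(original_posterior):
    # Classical MIA: the attacker only observes the target sample's
    # posterior from the original model.
    return np.asarray(original_posterior)

def unlearning_mia_feature(original_posterior, unlearned_posterior):
    # Our attack: the attacker observes the posteriors from both the
    # original and the unlearned model; concatenating them doubles the
    # feature dimension relative to classical MIA.
    return np.concatenate([original_posterior, unlearned_posterior])
```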
I hope this resolves your confusion. Feel free to comment if you have more questions.
Thank you! I think my problems are solved. I'm curious about one more thing: what is the concrete evaluation under the multiple intermediate model scenario? Are you just removing as many samples as there are intermediate models?
Glad to hear that your problems are solved.
For the multiple intermediate model scenario: yes, your understanding is right. For instance, if there are 10 intermediate models, we repeat randomly deleting one sample and retraining the model ten times.
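As a minimal sketch of that procedure, assuming the deletions are applied sequentially to the same dataset (`train_fn` and `dataset` are hypothetical placeholders, not the repository's actual API):

```python
import random

def build_intermediate_models(train_fn, dataset, num_models=10):
    # Repeatedly delete one random sample and retrain from scratch;
    # each retrained model is one intermediate (unlearned) model.
    remaining = list(dataset)
    models = []
    for _ in range(num_models):
        remaining.pop(random.randrange(len(remaining)))  # delete one random sample
        models.append(train_fn(remaining))  # retrain on the reduced dataset
    return models
```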
Hi, I've checked the paper and the code thoroughly, and I cannot confirm exactly which classical MIA method you used for comparison. Could you please provide more details about:
I'm looking forward to your reply. Thanks!