zhaisf closed this issue 9 months ago
Yes, we assume the adversary already knows the data distribution but does not know which split was used as the training set. Please refer to Definition A (the membership inference security game) in "Membership Inference Attacks From First Principles".
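For anyone landing on this thread later, here is a toy sketch of that security game. This is not the paper's code — the function names (`train_model`, `attack`) and the 50/50 split are illustrative assumptions; the point is only that the challenger samples the split from a distribution the adversary knows, while the bit `b` (member vs. non-member) stays hidden:

```python
import random

def membership_inference_game(population, train_model, attack, num_trials=100):
    """Toy simulation of the membership-inference security game
    (Definition A in "Membership Inference Attacks From First Principles").
    `population` is the full dataset, which the adversary is assumed to know."""
    correct = 0
    for _ in range(num_trials):
        # Challenger samples a training split from the known distribution;
        # the adversary never learns which split was drawn.
        split = random.sample(population, len(population) // 2)
        model = train_model(split)
        # Challenger flips a secret coin b: challenge with a member (b=1)
        # or a non-member (b=0).
        b = random.randint(0, 1)
        if b == 1:
            z = random.choice(split)
        else:
            z = random.choice([x for x in population if x not in split])
        # Adversary sees only the model and the challenge point, and guesses b.
        if attack(model, z) == b:
            correct += 1
    return correct / num_trials  # attack accuracy; 0.5 = random guessing
```

A "model" that memorizes its training set and an attack that tests membership directly would win this game with accuracy 1.0; a real attack is scored by how far above 0.5 it gets against a real model.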
I see, thank you for your response!
Additionally, do you assume that the training / fine-tuning steps for the shadow models and the target model are also the same in the shadow experiments?
Yes, in our evaluation, we set all the hyper-parameters to be the same except for the training split.
Got it. Thanks !
Thank you for your inspiring work!
But I have one more question that arises from the closed issue:
Could you please describe how the training data for the shadow models is constructed? When training a shadow model, do you assume the adversary already knows the distribution of the dataset, or that they can obtain some of the actual training data?