yonsei-sslab / MIA

🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models"
MIT License

Questions on the training dataset of the attack model #2

Open Toby-Kang opened 11 months ago

Toby-Kang commented 11 months ago

Hi! I have a question about the way the training dataset of the attack model is formulated.

In the original paper, each record in this dataset consists of three parts: the class label of the data, the prediction vector, and whether the data was in the original training dataset.

However, in your implementation, each record consists of two parts: the top-k probabilities, and whether the data was in the original training dataset.

I wonder if this modification would lead to a difference in the way MIA works. I'm new to MIA, so I would appreciate any help.
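For concreteness, the two formulations can be sketched as follows. This is a hypothetical illustration with made-up numbers; the variable names and the choice of k are assumptions, not taken from the repo:

```python
import numpy as np

# Hypothetical prediction vector from a 10-class target model
pred = np.array([0.02, 0.01, 0.70, 0.05, 0.03, 0.04, 0.05, 0.04, 0.03, 0.03])
true_label = 2   # the record's class label
is_member = 1    # 1 if the record was in the target model's training set

# Paper's formulation: (class label, full prediction vector, membership bit)
paper_record = (true_label, pred, is_member)

# This repo's formulation: (top-k probabilities, membership bit)
k = 3
topk_probs = np.sort(pred)[::-1][:k]  # k largest probabilities, descending
repo_record = (topk_probs, is_member)
```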

snoop2head commented 11 months ago

Hi Toby, thank you for your interest in the implementation.

Purpose of providing top-k probabilities only

Most publicly available black-box model APIs only provide the top-k probabilities with their corresponding labels. For example, Google Vision AI outputs the top 10 probabilities, not the entire prediction vector.

The paper also applies a top-k filter, mentioning:

Restrict the prediction vector to top k classes. When the number of classes is large, many classes may have very small probabilities in the model's prediction vector. The model will still be useful if it only outputs the probabilities of the most likely k classes. To implement this, we add a filter to the last layer of the model. The smaller k is, the less information the model leaks. In the extreme case, the model returns only the label of the most likely class without reporting its probability.

I think using only the top-k probabilities is a harder scenario for the attacker and does not deviate from MIA's principle.
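A minimal sketch of such a last-layer top-k filter, assuming the model's output is a NumPy probability vector. The function name and signature are hypothetical, not the repo's actual code:

```python
import numpy as np

def topk_filter(probs: np.ndarray, k: int):
    """Keep only the k most likely classes, as the paper's last-layer
    filter does. Returns (class indices, probabilities), both sorted by
    descending probability. Hypothetical helper for illustration."""
    idx = np.argsort(probs)[::-1][:k]  # indices of the k largest entries
    return idx, probs[idx]

probs = np.array([0.01, 0.02, 0.85, 0.04, 0.08])
labels, top = topk_filter(probs, k=2)
# labels -> classes 2 and 4; top -> their probabilities
```

With k=1 this degenerates to the paper's extreme case of returning only the most likely class.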

Purpose behind Removing Label of the Data

I removed the 'class label' column from the Attack Training Set because (1) it didn't help the in/out binary classification, and (2) it limits the possible attack scenarios: most of the available APIs don't provide the integer class labels of the training set.
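A minimal sketch of how such a label-free attack training set could be assembled from shadow-model predictions. The function name, array shapes, and k are illustrative assumptions, not the repo's actual code:

```python
import numpy as np

def make_attack_dataset(member_preds, nonmember_preds, k=3):
    """Build the attack model's training set from shadow-model outputs.
    Each feature row is the sorted top-k probabilities (no class label);
    the target is the in/out membership bit. Hypothetical sketch."""
    topk = lambda p: np.sort(p, axis=1)[:, ::-1][:, :k]  # per-row top-k
    X = np.vstack([topk(member_preds), topk(nonmember_preds)])
    y = np.concatenate([np.ones(len(member_preds)),
                        np.zeros(len(nonmember_preds))])
    return X, y

# Toy shadow-model prediction vectors (3 classes)
members = np.array([[0.1, 0.8, 0.1], [0.05, 0.9, 0.05]])
nonmembers = np.array([[0.4, 0.3, 0.3]])
X, y = make_attack_dataset(members, nonmembers, k=2)
```

A binary classifier trained on (X, y) then plays the role of the attack model.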

Thank you!