privacytrustlab / ml_privacy_meter

Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
MIT License

Attack-R result members #120

Open phfaustini opened 1 month ago

phfaustini commented 1 month ago

I want to calculate the Privacy Leakage metric from this USENIX paper, which is simply the difference between the true positive rate (TPR) and the false positive rate (FPR) of the inference attack.

For Attack-S, this seems straightforward, since the result from audit_obj.run()[0] contains the members fp and tp. For Attack-R, however, fp and tp are lists with n+1 elements (n being the number of reference models), sorted in ascending order.
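For concreteness, this is the computation I have in mind for the Attack-S case (assuming tp and fp are raw counts and that I know the audited set sizes; the sizes below are placeholders):

```python
result = audit_obj.run()[0]

# Placeholder sizes for the audited member / non-member sets.
num_members, num_non_members = 500, 500

tpr = result.tp / num_members
fpr = result.fp / num_non_members
privacy_leakage = tpr - fpr  # TPR - FPR, as defined in the paper
```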

What adds to my confusion is that a single roc_auc is returned, so it is not clear to me how it is computed from the tp and fp lists, or which values from those lists I should use to calculate the Privacy Leakage metric. Can you help?

changhongyan123 commented 1 month ago

@phfaustini

For a fixed attack strategy, you get one TPR (True Positive Rate) and one FPR (False Positive Rate), which gives you one value for the privacy leakage metric.

For the reference attack (Attack-R), the adversary chooses a specific FPR tolerance value. Different choices of this tolerance correspond to different attack strategies, and each strategy yields a specific (FPR, TPR) pair. Sections 4 and 5.1 of the paper describe this process in more detail.
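To make the selection step concrete, here is a rough sketch of how a tolerance value can pick one operating point from the sorted FPR/TPR lists. The selection rule below (largest FPR not exceeding the tolerance) illustrates the idea and is not necessarily the exact rule used in the code:

```python
import numpy as np

def operating_point(fpr_list, tpr_list, tolerance):
    """Pick the strategy whose FPR is largest while still <= tolerance.

    fpr_list / tpr_list are assumed sorted in ascending FPR order,
    as in the Attack-R result.
    """
    fpr = np.asarray(fpr_list)
    tpr = np.asarray(tpr_list)
    idx = max(np.searchsorted(fpr, tolerance, side="right") - 1, 0)
    return fpr[idx], tpr[idx]

fpr, tpr = operating_point([0.0, 0.1, 0.3, 0.7], [0.0, 0.4, 0.6, 0.9], 0.25)
print(fpr, tpr)  # 0.1 0.4 -> privacy leakage = 0.3
```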

In our implementation, you can specify a list of FPR tolerance values. For each FPR tolerance value in the list, a different attack strategy is employed, resulting in a corresponding FPR and TPR pair. See the example of fpr_tolerance_list in the tutorial here.
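Putting it together, a minimal sketch of turning the parallel lists into one leakage value per strategy (the assumption that tp/fp are raw counts and the set sizes are illustrative; check the tutorial for the exact output format):

```python
# One (FPR, TPR) pair per attack strategy, hence one leakage value each.
# Assumes tp_counts / fp_counts are raw counts and that the audited
# member / non-member set sizes are known (illustrative defaults below).
def privacy_leakage_per_strategy(tp_counts, fp_counts,
                                 num_members=500, num_non_members=500):
    return [tp / num_members - fp / num_non_members
            for tp, fp in zip(tp_counts, fp_counts)]

# Made-up counts for three strategies over 500 members / 500 non-members:
print(privacy_leakage_per_strategy([150, 300, 400], [20, 100, 200]))
# -> [0.26, 0.4, 0.4]
```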

Hope this explanation helps.