Closed — blauertee closed this issue 3 years ago
Hey @fraboeni, does this task just mean that we should restructure the code so that the privacy risk score calculation becomes a class called MembershipInferenceAttackOnPointBasis that lives under attacks?
Like this:

```python
attack = MembershipInferenceAttackOnPointBasis(
    target_model,
    x_train,
    y_train,
    x_test,
    y_test,
)
mem_probs_per_datapoint = attack.attack()
```
Or should the privacy risk score calculation be something that can be run on top of any MIA we have already implemented, so that users can choose which MIA to compute the privacy risk score from?
Like this:

```python
attack = MembershipInferenceBlackBoxRuleBasedAttack(
    target_model,
    x_train,
    y_train,
    x_test,
    y_test,
)
attack_result = attack.attack()
mem_probs_per_datapoint = compute_privacy_risk_score(attack_result)
```
@blauertee Thanks for pinging me! The first solution with the class sounds reasonable to me.
Currently, the privacy risk score calculation runs an implicit MIA whenever the `compute_privacy_risk_score` function is called. It should be made clear that this is actually an MIA and not just some metric computation.
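For illustration, here is a minimal sketch of what such a class could look like. Everything in it is an assumption, not the repository's actual implementation: the `predict_proba` interface on `target_model` is hypothetical, and the per-point membership probability is estimated the histogram-based way (comparing the distributions of the model's confidence in the true label on train vs. test data, with equal priors):

```python
import numpy as np


class MembershipInferenceAttackOnPointBasis:
    """Sketch of option 1: the privacy risk score as its own attack class.

    Assumes `target_model` exposes a `predict_proba(x)` method returning
    an array of shape (n_samples, n_classes).
    """

    def __init__(self, target_model, x_train, y_train, x_test, y_test,
                 n_bins=10):
        self.target_model = target_model
        self.x_train, self.y_train = x_train, y_train
        self.x_test, self.y_test = x_test, y_test
        self.n_bins = n_bins

    def _true_label_confidence(self, x, y):
        # Model's predicted probability for each sample's true label.
        probs = self.target_model.predict_proba(x)
        return probs[np.arange(len(y)), y]

    def attack(self):
        """Return one membership probability per training datapoint."""
        train_conf = self._true_label_confidence(self.x_train, self.y_train)
        test_conf = self._true_label_confidence(self.x_test, self.y_test)
        bins = np.linspace(0.0, 1.0, self.n_bins + 1)
        # Empirical densities of the true-label confidence for members
        # (train) and non-members (test).
        p_member, _ = np.histogram(train_conf, bins=bins, density=True)
        p_non, _ = np.histogram(test_conf, bins=bins, density=True)
        # Bayes with equal priors: P(member | conf) = p_m / (p_m + p_n).
        idx = np.clip(np.digitize(train_conf, bins) - 1, 0, self.n_bins - 1)
        denom = p_member[idx] + p_non[idx]
        # Fall back to 0.5 (no information) in empty bins.
        return np.divide(p_member[idx], denom,
                         out=np.full_like(denom, 0.5), where=denom > 0)
```

This keeps the entry point identical to the other attack classes (`attack()` returns the result), which makes the "this is an MIA, not just a metric" point explicit in the API.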