privML / privacy-evaluator

The privML Privacy Evaluator is a tool that assesses an ML model's level of privacy by running different attacks on it.
MIT License

Restructure the privacy risk score to be membership inference attack on a point basis #151

Closed blauertee closed 3 years ago

blauertee commented 3 years ago

Currently, the privacy risk score calculation runs an implicit MIA when the compute_privacy_risk_score function is called. It should be made clear that this is an actual MIA and not just some metric computation.

blauertee commented 3 years ago

Hey @fraboeni, does this task just mean that we should restructure the code so that the privacy risk score calculation becomes a class called MembershipInferenceAttackOnPointBasis that can be found under attacks?

like this:

attack = MembershipInferenceAttackOnPointBasis(
        target_model,
        x_train,
        y_train,
        x_test,
        y_test,
    )
mem_probs_per_datapoint = attack.attack()

Or should the privacy risk score calculation be something that can be run on top of any MIA we have already implemented, so that users can choose which MIA to compute the privacy risk score from?

like this:

attack = MembershipInferenceBlackBoxRuleBasedAttack(
        target_model,
        x_train,
        y_train,
        x_test,
        y_test,
    )
attack_result = attack.attack()
mem_probs_per_datapoint = compute_privacy_risk_score(attack_result)

fraboeni commented 3 years ago

@blauertee Thanks for pinging me! The first solution with the class sounds reasonable to me.
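For illustration, the agreed-upon interface could be sketched roughly as below. This is a hypothetical sketch, not the repository's actual implementation: it takes precomputed per-example losses instead of the target model and raw data, and approximates the per-point membership probability (in the spirit of Song and Mittal's privacy risk score) with simple histogram density estimates.

```python
import numpy as np

class MembershipInferenceAttackOnPointBasis:
    """Hypothetical sketch: per-point membership probabilities from losses.

    Compares the distribution of the target model's per-example losses
    on training members vs. non-members; the real implementation would
    derive these losses from the target model and (x, y) data itself.
    """

    def __init__(self, train_losses, test_losses, n_bins=10):
        self.train_losses = np.asarray(train_losses)
        self.test_losses = np.asarray(test_losses)
        self.n_bins = n_bins

    def attack(self):
        # Shared bin edges over the combined loss range.
        all_losses = np.concatenate([self.train_losses, self.test_losses])
        edges = np.histogram_bin_edges(all_losses, bins=self.n_bins)
        member_density, _ = np.histogram(self.train_losses, bins=edges, density=True)
        nonmember_density, _ = np.histogram(self.test_losses, bins=edges, density=True)
        # Posterior P(member | loss) per bin, assuming a 0.5 membership prior.
        posterior = member_density / (member_density + nonmember_density + 1e-12)
        # Map each training point's loss to the posterior of its bin.
        bins = np.clip(np.digitize(self.train_losses, edges) - 1, 0, self.n_bins - 1)
        return posterior[bins]

# Synthetic demo: members tend to have lower loss than non-members,
# so their estimated membership probability should be high.
rng = np.random.default_rng(0)
train_losses = rng.normal(0.2, 0.05, 500)   # members: low loss
test_losses = rng.normal(1.0, 0.2, 500)     # non-members: higher loss
attack = MembershipInferenceAttackOnPointBasis(train_losses, test_losses)
mem_probs_per_datapoint = attack.attack()
```

This keeps the call shape proposed above (construct the attack, then call attack() to get one membership probability per training point), while making explicit that the computation is itself a membership inference attack.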