privML / privacy-evaluator

The privML Privacy Evaluator is a tool that assesses an ML model's level of privacy by running different attacks against it.

Table in README to resolve confusion between target and attack model, their purpose, and the data they have seen #191

Closed: Friedrich-Mueller closed this issue 3 years ago

Friedrich-Mueller commented 3 years ago

Also: which training arguments the users/functions have access to.

marisanest commented 3 years ago

As far as I understood the task, the requested tables in the README could look like the following. What do you think @Friedrich-Mueller? For the MembershipInferenceBlackBoxRuleBasedAttack and the MembershipInferenceAttackOnPointBasis I am not completely sure whether this is correct, but both attacks do not have an attack_model (correct me if I am wrong), and thus their attack models have not seen any data. Of course, we would need to explain the tables a bit more.

For the MembershipInferenceBlackBoxAttack:

|              | training data | test data |
| ------------ | ------------- | --------- |
| target_model | yes           | no        |
| attack_model | yes           | yes       |
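
To make this first table concrete, here is a minimal sketch of the data split it encodes, written with plain scikit-learn rather than the privacy-evaluator API (all names below are illustrative, not the library's interface): the target model is fit on the training data only, while the attack model is fit on the target model's outputs for both training samples (members) and test samples (non-members).

```python
# Illustration of the MembershipInferenceBlackBoxAttack table above.
# NOT the privacy-evaluator API; it only shows which data each model sees.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# target_model: sees the training data only ("yes" / "no" in the table).
target_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# attack_model: sees the target model's outputs on both training data (members,
# label 1) and test data (non-members, label 0) -> "yes" / "yes" in the table.
attack_X = np.vstack([target_model.predict_proba(X_train),
                      target_model.predict_proba(X_test)])
attack_y = np.concatenate([np.ones(len(X_train)), np.zeros(len(X_test))])

attack_model = LogisticRegression(max_iter=1000).fit(attack_X, attack_y)
print("membership inference accuracy (in-sample, for illustration only):",
      attack_model.score(attack_X, attack_y))
```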

For the MembershipInferenceBlackBoxRuleBasedAttack:

|              | training data | test data |
| ------------ | ------------- | --------- |
| target_model | yes           | no        |
| attack_model | no            | no        |
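
For comparison, a rule-based membership decision can be made without any attack model at all, which is why the attack_model row here is no/no. The sketch below shows one common such rule (predict "member" exactly when the target model classifies the sample correctly); this is an assumption about the kind of rule meant, not necessarily the class's actual implementation.

```python
# One possible rule-based membership decision: no attack model is trained,
# so no data is "seen" by an attack model.
import numpy as np

def rule_based_membership(target_model, X, y_true):
    """Return 1 (member) where the target model's prediction matches the true label."""
    return (target_model.predict(X) == y_true).astype(int)

# Reusing target_model, X_train, X_test, y_train, y_test from the sketch above:
# members_train = rule_based_membership(target_model, X_train, y_train)
# members_test = rule_based_membership(target_model, X_test, y_test)
```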

For the MembershipInferenceLabelOnlyDecisionBoundaryAttack:

|              | training data | test data |
| ------------ | ------------- | --------- |
| target_model | yes           | no        |
| attack_model | yes           | yes       |

For the MembershipInferenceAttackOnPointBasis:

|              | training data | test data |
| ------------ | ------------- | --------- |
| target_model | yes           | no        |
| attack_model | no            | no        |