privacytrustlab / ml_privacy_meter

Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.
MIT License

Help Wanted: How To Apply This Tool? #18

Closed gongzhimin closed 3 years ago

gongzhimin commented 3 years ago

Hello, ml_privacy_meter looks great; it is well encapsulated. I'm going to apply your tool to evaluate my model, and I have some questions as follows.

  1. I have programmed my model with PyTorch, while your tool is based on TensorFlow. Can it work in this scenario?
  2. In the tutorials' README.md, it says: [image] Do the numbers 26 and 6 fit all models? Should I change them?
  3. What are the tool's requirements for the dataset? Can I use any dataset to train my model?
mihirkhandekar commented 3 years ago

Hello, here are the responses to your queries.

  1. The tool expects a Keras/TensorFlow model. You could create an equivalent model in Keras and copy the layer weights over from your PyTorch model, or use a model-conversion library.
  2. The layers/gradients parameters depend on which layer or gradient you want to exploit. Their specification is given in the documentation - https://github.com/privacytrustlab/ml_privacy_meter#analyzing-a-trained-model
  3. Yes, you can use any dataset while following the format given in the datasets documentation - https://github.com/privacytrustlab/ml_privacy_meter/tree/master/datasets
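The weight-copying route in point 1 has one common pitfall worth illustrating: PyTorch's `nn.Linear` stores its kernel as `(out_features, in_features)`, whereas a Keras `Dense` layer expects `(in_features, out_features)`, so each linear layer's weight matrix must be transposed. Below is a minimal sketch of that conversion using only NumPy; the helper name `torch_linear_to_keras_dense` is hypothetical, not part of ml_privacy_meter or either framework.

```python
import numpy as np

def torch_linear_to_keras_dense(torch_weight, torch_bias):
    """Hypothetical helper: convert one PyTorch nn.Linear layer's
    parameters into the (kernel, bias) layout that a Keras Dense
    layer's set_weights([kernel, bias]) expects."""
    # PyTorch layout is (out_features, in_features); Keras wants the transpose.
    kernel = np.asarray(torch_weight).T
    bias = np.asarray(torch_bias)
    return kernel, bias

# Example: a linear layer with 4 inputs and 3 outputs.
w = np.arange(12, dtype=np.float32).reshape(3, 4)  # PyTorch layout (out=3, in=4)
b = np.zeros(3, dtype=np.float32)
kernel, bias = torch_linear_to_keras_dense(w, b)
print(kernel.shape)  # (4, 3) -- ready for Dense.set_weights([kernel, bias])
```

In practice you would call something like `keras_layer.set_weights([kernel, bias])` for each layer pair; convolutional layers need an analogous axis permutation, and after copying it is worth checking that both models produce matching outputs on a few sample inputs.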
gongzhimin commented 3 years ago

Thanks for your reply!

gongzhimin commented 3 years ago

Oh, sorry to bother you again. I noticed that you mentioned federated learning. [image]

It is a new field of ML. Why not write a tutorial on how to apply ml_privacy_meter in the FL setting? I think it would be very popular.