This repository implements several experiments on machine learning security.
It implements a label-flipping data poisoning attack against an SVM classifier.
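A rough illustration of such an attack (not this repository's exact code) is sketched below with scikit-learn; the synthetic dataset, the random flip strategy, and the SVM settings are assumptions made only for the example:

```python
# Minimal label-flipping poisoning sketch (assumed setup: scikit-learn,
# synthetic binary data, random choice of labels to flip).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, fraction, rng):
    """Flip a given fraction of binary labels, chosen at random."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.2, 0.4):
    clf = SVC(kernel="linear").fit(X_train, flip_labels(y_train, fraction, rng))
    print(f"flip rate {fraction:.1f}: test accuracy {clf.score(X_test, y_test):.3f}")
```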
The code under this directory is an implementation of the 2017 IEEE S&P paper "Membership Inference Attacks Against Machine Learning Models" by Shokri et al.
It implements Algorithm 1 (Data Synthesis) from the paper.
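A minimal sketch of that hill-climbing synthesis procedure is shown below, assuming black-box access through a `target_predict` function that returns a probability vector; the hyperparameter names and default values are placeholders, not the repository's:

```python
# Hill-climbing data synthesis in the spirit of Algorithm 1 (Shokri et al.).
# `target_predict(x)` is an assumed black-box that returns class probabilities.
import numpy as np

def synthesize(target_predict, n_features, cls, k_max=16, k_min=1,
               conf_min=0.8, rej_max=10, iter_max=200, rng=None):
    """Search for a record the target model confidently assigns to class `cls`."""
    rng = rng or np.random.default_rng()
    x = rng.random(n_features)            # random initial record in [0, 1)^d
    best_x, best_conf = x.copy(), 0.0
    k, rejections = k_max, 0
    for _ in range(iter_max):
        probs = np.asarray(target_predict(x))
        conf = probs[cls]
        if conf > best_conf:              # proposal improved: accept it
            if conf > conf_min and probs.argmax() == cls and rng.random() < conf:
                return x                  # sample the synthetic record
            best_x, best_conf = x.copy(), conf
            rejections = 0
        else:                             # proposal rejected
            rejections += 1
            if rejections > rej_max:      # too many rejections: perturb fewer features
                k = max(k_min, k // 2)
                rejections = 0
        x = best_x.copy()
        idx = rng.choice(n_features, size=k, replace=False)
        x[idx] = rng.random(k)            # re-randomize k features of the best record
    return None                           # synthesis failed within iter_max
```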
It implements the shadow model technique from the paper.
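The sketch below illustrates the shadow-model step: each shadow model is trained on its own split of shadow data, and its prediction vectors on members and non-members become labelled training data for the attack model. The model family (a small scikit-learn MLP) and the splitting strategy are assumptions, not this repository's choices:

```python
# Shadow-model step sketch (assumed shadow architecture: sklearn MLPClassifier).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def build_attack_dataset(X_shadow, y_shadow, n_shadow=10, rng=None):
    """Train shadow models and collect (prediction vector, in/out, true class)."""
    rng = rng or np.random.default_rng(0)
    records, labels, classes = [], [], []
    for _ in range(n_shadow):
        # each shadow model gets its own "member" / "non-member" split
        X_in, X_out, y_in, y_out = train_test_split(
            X_shadow, y_shadow, test_size=0.5,
            random_state=int(rng.integers(1 << 31)))
        shadow = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
        shadow.fit(X_in, y_in)
        for X_part, y_part, member in ((X_in, y_in, 1), (X_out, y_out, 0)):
            records.append(shadow.predict_proba(X_part))  # posterior vectors
            labels.append(np.full(len(y_part), member))   # membership label
            classes.append(y_part)                        # true class (for per-class attack models)
    return np.vstack(records), np.concatenate(labels), np.concatenate(classes)
```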
It implements the membership inference attack described in the paper.
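A minimal sketch of the attack step follows, under the same assumptions: one binary in/out classifier per target class is trained on the shadow outputs and then applied to the target model's prediction vector for a candidate record. Logistic regression is an assumed choice of attack model:

```python
# Attack-model step sketch (assumed attack model: logistic regression).
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_attack_models(attack_X, attack_y, attack_classes):
    """One binary in/out classifier per target class, as in the paper."""
    models = {}
    for c in np.unique(attack_classes):
        mask = attack_classes == c
        models[c] = LogisticRegression(max_iter=1000).fit(attack_X[mask], attack_y[mask])
    return models

def infer_membership(models, target_predict_proba, record):
    """Estimate the probability that `record` was in the target's training set."""
    probs = target_predict_proba(record.reshape(1, -1))   # target's prediction vector
    predicted_class = int(probs.argmax())
    return models[predicted_class].predict_proba(probs)[0, 1]
```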
It implements the CIFAR-10 experiment from the paper.
The neural network code is implemented in PyTorch.
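Below is a minimal sketch of a CIFAR-10 target network in PyTorch, loosely following the small two-convolutional-layer CNN described in the paper; the exact architecture and hyperparameters used in this repository may differ:

```python
# Minimal CIFAR-10 target model sketch (assumed architecture, not necessarily
# the one used in this repository).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.fc1 = nn.Linear(64 * 8 * 8, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                           # logits; softmax at inference time

if __name__ == "__main__":
    model = TargetNet()
    logits = model(torch.randn(4, 3, 32, 32))        # dummy CIFAR-10 batch
    print(logits.shape)                              # torch.Size([4, 10])
```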
The utility (utils) code is also provided.
Please note: the file Membership_Inference/cifar10/norm_all_batch_data.npy is too large to upload to this repository. It is available at: https://pan.baidu.com/s/1uZaZhVYUiRXi3resfuJoiA
[2017 S&P] Membership Inference Attacks Against Machine Learning Models
[2015 IJSN, Attribute Inference Attack] Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
Membership Inference Attack:
Paper code: https://github.com/csong27/membership-inference
BielStela's implementation of the experiments: https://github.com/BielStela/membership_inference