AhmedSalem2 / ML-Leaks

Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models"

Anyone managed to reproduce the results? #7

Open AlbinSou opened 1 year ago

AlbinSou commented 1 year ago

I tried to reimplement the results in PyTorch, similar to what's done in this repo: https://github.com/GeorgeTzannetos/ml-leaks-pytorch. However, both of our implementations get a maximum of 55% precision for the attack (Adversary 1) on CIFAR-10. Did anyone else try to reproduce the paper and have a similar experience?
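For anyone comparing implementations, here is a minimal sketch of the Adversary 1 pipeline as I understand it from the paper: sort each posterior vector, keep the top-3 entries as attack features, and train a binary attack model on the shadow model's member/non-member outputs. Everything below is illustrative — the synthetic posteriors stand in for a real shadow model's outputs, and the tiny logistic-regression attack model is just a placeholder for whatever attack classifier you use:

```python
import numpy as np

rng = np.random.default_rng(0)

def attack_features(posteriors, k=3):
    """Sort each posterior vector in descending order and keep the
    top-k entries (the feature vector used by ML-Leaks Adversary 1)."""
    s = np.sort(posteriors, axis=1)[:, ::-1]
    return s[:, :k]

def synth_posteriors(n, peak, n_classes=10):
    """Synthetic stand-in for shadow-model outputs: `peak` controls how
    confident (peaked) the softmax is; members are typically more peaked."""
    logits = rng.normal(size=(n, n_classes))
    logits[:, 0] += peak
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Member posteriors are more confident than non-member posteriors.
members = synth_posteriors(500, peak=4.0)
nonmembers = synth_posteriors(500, peak=1.0)

X = attack_features(np.vstack([members, nonmembers]))
y = np.concatenate([np.ones(500), np.zeros(500)])

# Tiny logistic-regression attack model trained by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y                      # gradient of mean BCE loss
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (pred == y).mean()
```

On synthetic data this separates easily; the low precision we see on real CIFAR-10 suggests the gap between member and non-member posteriors is much smaller than the paper implies, not that the feature extraction is wrong.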

akshatmalik commented 1 year ago

Were you able to find the data for the paper? How and where is it organised? I am unable to wrap my head around it.

AlbinSou commented 1 year ago

> Were you able to find the data for the paper? How and where is it organised? I am unable to wrap my head around it.

I tried to reproduce the results on CIFAR-10 using PyTorch and torchvision. You can get the dataset from torchvision.

anacatarina3 commented 10 months ago

I also tried to reproduce the results and got the same maximum precision of 0.55.

HiEileen commented 10 months ago

I am wondering if anybody has reproduced the defense model proposed in this paper. Other papers mainly suggest that a higher pruning rate (dropout rate, in this paper) can expose more severe privacy leakage, contrary to the positive conclusion in this paper.

JiePKU commented 8 months ago

> I tried to reimplement the results in PyTorch, similar to what's done in this repo: https://github.com/GeorgeTzannetos/ml-leaks-pytorch. However, both of our implementations get a maximum of 55% precision for the attack (Adversary 1) on CIFAR-10. Did anyone else try to reproduce the paper and have a similar experience?

Hi guys @AlbinSou @anacatarina3 @HiEileen @akshatmalik, I have implemented code that achieves around 70% accuracy and 73% precision on CIFAR-10. The code is here: https://github.com/JiePKU/ML-Leaks. If it is helpful, give it a star. Thanks!