-
https://arxiv.org/abs/1610.05820
Abstract—We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the ba…
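The paper's actual attack trains shadow models, but the underlying leakage it measures can be illustrated with a much simpler loss-threshold sketch. Everything below is an assumption for illustration, not the paper's method: we only suppose black-box access to the probability `p_true` the model assigns to a record's correct label, and guess "member" when the resulting loss is unusually low.

```python
import math

def nll(p_true: float) -> float:
    """Negative log-likelihood the model assigns to the correct label."""
    return -math.log(max(p_true, 1e-12))

def is_member(p_true: float, threshold: float = 0.5) -> bool:
    """Guess 'member' when the model's loss on the record is unusually low.

    The threshold is arbitrary here; in practice it would be calibrated,
    e.g. on shadow models as in the paper.
    """
    return nll(p_true) < threshold

# Records the model is confident on (low loss) are flagged as likely
# training members; uncertain records are flagged as non-members.
print(is_member(0.99))  # → True  (loss ≈ 0.01)
print(is_member(0.40))  # → False (loss ≈ 0.92)
```

Overfitted models assign systematically lower loss to their training records than to unseen ones, which is exactly the gap this kind of test exploits.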
-
Let's try to characterize data pollution - i.e., has this LLM been pretrained on this corpus?
Simple task: pick a random passage and chop it in half. Feed the first half into the LLM and ask it to complete the p…
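The split-and-complete test above can be sketched as follows. This is a minimal, hypothetical implementation: `generate` stands in for whatever LLM completion API is available (it is an assumption, not a real library call), and string similarity via `difflib` is just one possible way to score how closely the completion matches the held-out second half.

```python
import difflib

def contamination_score(passage: str, generate) -> float:
    """Split a passage in half, ask the model to continue the first half,
    and measure how closely the completion matches the true second half."""
    words = passage.split()
    mid = len(words) // 2
    prefix = " ".join(words[:mid])
    true_suffix = " ".join(words[mid:])
    # `generate(prefix, max_words=...)` is a placeholder for the model call.
    completion = generate(prefix, max_words=len(words) - mid)
    # A ratio near 1.0 suggests the model may have memorized the passage.
    return difflib.SequenceMatcher(None, completion, true_suffix).ratio()

if __name__ == "__main__":
    # Toy "model" that has memorized the passage: it always returns the
    # true continuation, so the score is exactly 1.0.
    passage = "the quick brown fox jumps over the lazy dog near the river bank"
    def fake_generate(prefix, max_words):
        return " ".join(passage.split()[len(prefix.split()):])
    print(contamination_score(passage, fake_generate))  # → 1.0
```

In practice one would average this score over many random passages and compare against passages the model cannot have seen, since a single verbatim completion could also arise from quotation or chance.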
-
This site is a very useful resource! If I may suggest one further improvement, I would invite you to consider adding a new subsection, whose title could be "Attacks against synthetic …
-
Following the README on the [membership inference page](https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/membership_inference_attack) yields an error.
I am running Tenso…
-
The membership inference attack may raise errors on image datasets.
-
Hello, I've been interested in membership inference attacks against FL (federated learning), and I found your code on GitHub. I'd appreciate it if you could give a more detailed description of your attack method, since …
-
In the README.md, the `documentation` link fails:
```
A library for running membership inference attacks (MIA) against machine learning models. Check out the documentation.
```
-
Hello, Dr. Song. I read your paper "Membership Inference Attacks Against Machine Learning Models" the other day. I am very interested in it, but I have two questions about it. First, your attack requir…
-
In the comments, we can describe the initial kinds of attacks we want to include in the tool.
-
**Describe the bug**
The code in [this folder](https://github.com/world-federation-of-advertisers/cross-media-measurement/tree/e8dbecb4c398ebfa20be2a993ff14cb1362ed768/src/main/kotlin/org/wfanet/meas…