[Open] sekiguchi0731 opened this issue 1 month ago
Hi Hinata,
Thank you for your message! As discussed on page 8, we cannot compute EAU or L-EAU directly. The approximate L-EAU value used in our paper for the CTR dataset can be obtained as the maximum test accuracy among all models appearing in the experiments, including multiple variants of LP-1ST, LP-2ST, and PATE, as well as multiple choices of epsilon in DP.
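The approximation described above can be sketched as a simple maximum over per-model test accuracies. The model names and accuracy values below are hypothetical placeholders for illustration, not numbers from the paper:

```python
# Approximate L-EAU as the maximum test accuracy over all models tried
# in the experiments (model variants and DP epsilon choices).
# All names and values here are hypothetical placeholders.
test_accuracies = {
    "LP-1ST (variant A)": 0.712,
    "LP-1ST (variant B)": 0.705,
    "LP-2ST (variant A)": 0.718,
    "PATE": 0.694,
    "DP (eps=1)": 0.651,
    "DP (eps=4)": 0.689,
    "DP (eps=8)": 0.703,
}

# The approximate L-EAU is simply the best accuracy achieved by any model.
approx_l_eau = max(test_accuracies.values())
print(f"Approximate L-EAU: {approx_l_eau:.3f}")
```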
Best, Ruihan & Jin
Hi Ruihan & Jin,
Thank you for your answer. I was able to calculate the value of L-EAU. However, I now have another question. While I understand the formal definition of L-EAU, I’m not entirely sure what it represents in a real-world scenario. My interpretation is that L-EAU reflects the accuracy an adversary could achieve based solely on common sense or the available features, without specific knowledge of the model’s training labels. Is this correct?
Best regards, Hinata
Hi Hinata,
Thanks for your interest in our paper!
You are right. L-EAU measures the best guess an adversary can make about the private labels from the (public) features alone. Privacy leakage due to the model itself (which is trained on the private labels) should be measured beyond this baseline.
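The intuition above can be sketched with a toy feature-only baseline: an adversary who sees only a public feature guesses each private label by the majority label among training examples sharing that feature value. The dataset below is entirely hypothetical, and this majority-vote rule is just one simple illustration of a feature-only guess, not the construction used in the paper:

```python
# Feature-only baseline in the spirit of L-EAU: guess each private label
# from the public feature alone, with no access to the trained model.
# Data and the majority-vote rule are illustrative placeholders.
from collections import Counter, defaultdict

# Hypothetical (public_feature, private_label) pairs.
train = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0)]
test = [(0, 0), (1, 1), (1, 1), (0, 1)]

# Learn the majority label for each feature value from the training set.
by_feature = defaultdict(Counter)
for feat, label in train:
    by_feature[feat][label] += 1
guess = {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

# Accuracy of the feature-only guess on the test set; any leakage from a
# model trained on the private labels would be measured beyond this number.
correct = sum(guess[feat] == label for feat, label in test)
baseline_acc = correct / len(test)
print(f"Feature-only baseline accuracy: {baseline_acc:.2f}")
```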
Best, Ruihan & Jin
Hi,
My name is Hinata, and I am a college student studying information science at Ochanomizu University in Japan. I have read your paper "Does Label Differential Privacy Prevent Label Inference Attacks?" several times, and I am particularly interested in the value of "L-EAU" on the CTR dataset.
Could you please provide some guidance on how I can obtain this value? Your insights would be greatly appreciated!
Thank you for your time!
Best Regards, Hinata