Closed pneuschwander closed 9 months ago
We already do contribution bounding (bounding of the domain per user) in the `Bounded*` aggregations. The method in the IBM DP Lib is mostly equivalent to the methods in `LaplaceNoise`. However, we provide a slightly different interface that already limits the contribution of the users. For instance, `LongBoundedSum` already does clamping in `addEntry` to bound the contribution per partition and automatically scales the noise that is added before releasing the result via `computeResult`.
I don't see how the output of the rejection sampling would be differentially private, since it seems to consume unlimited privacy budget.
What is your application and what aggregation do you want to use for your data?
The methods in `Bounded*` most likely don't fit our application. As an overview, we wish to implement local differential privacy in a system where the aggregations by the server are not publicly known.

As we have a separation between applying differential privacy and aggregating the data, we cannot use the ready-made `Bounded*` methods. The clamping at the input and the contribution limits don't address our main problem: we work with heart rate variability (HRV) data, which is measured as a continuous stream of timespans from one heartbeat to the next, in milliseconds (e.g. 600, 620, 615, …). If we apply differential privacy in its original form, the output might be a timespan of 0 seconds or even a negative timespan, but those are not valid HRV values. Thus, to make sure that the output values are all valid HRV values, we wanted to use a bounded (output) domain variant of differential privacy as described in the paper we linked above.
Can this also be achieved with the methods of this library?
The library focuses on central DP, and I'm not aware of any clients using it for local DP. Let me ask some folks working with local DP whether it can easily be reused for your local DP application.
Hi,
I don't think I have sufficient context to understand where local DP comes in here.
But to get your values into an appropriate range: why can't you just clamp the output values? I.e.:
```java
final int value = Math.max(Math.min(x, upper), lower); // clamp the input
final int noisedValue = addNoise(value, l1Sensitivity, epsilon);
return Math.max(lower, Math.min(noisedValue, upper));  // clamp the output
```
As the final clamping is a post-processing step, the result is still fully DP.
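Putting the clamp–noise–clamp idea together as a runnable sketch (the names and the `sampleLaplace` helper below are hypothetical stand-ins for illustration, not the library's actual secure noise generation):

```java
import java.util.Random;

// Illustrative sketch only: clamp the input, add Laplace noise,
// then clamp the output. `sampleLaplace` is a naive inverse-CDF
// sampler standing in for the library's secure noise generation.
public class ClampedLaplace {
  private static final Random RNG = new Random();

  // Laplace sample with scale b = l1Sensitivity / epsilon (inverse CDF).
  static double sampleLaplace(double l1Sensitivity, double epsilon) {
    double b = l1Sensitivity / epsilon;
    double u = RNG.nextDouble() - 0.5; // uniform in [-0.5, 0.5)
    return -b * Math.signum(u) * Math.log(1.0 - 2.0 * Math.abs(u));
  }

  // Clamp the raw value, add noise, then clamp the noised value again.
  // The second clamp is pure post-processing, so DP is preserved.
  static double clampedLaplace(double x, double lower, double upper,
                               double l1Sensitivity, double epsilon) {
    double value = Math.max(lower, Math.min(x, upper));
    double noised = value + sampleLaplace(l1Sensitivity, epsilon);
    return Math.max(lower, Math.min(noised, upper));
  }

  public static void main(String[] args) {
    // Hypothetical HRV inter-beat interval in milliseconds.
    double result = clampedLaplace(615.0, 300.0, 1200.0, 20.0, 1.0);
    System.out.println(result); // always lands within [300, 1200]
  }
}
```

Note that clamping the output does shift probability mass onto the bounds, so the result is no longer Laplace-distributed near the edges; it trades that distortion for guaranteed-valid values at no extra privacy cost.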
Closing this for now. Feel free to re-open :)
Hello, I'm currently using the `addNoise` method of `LaplaceNoise` in the Java library (version 2.1.0) to add noise to numeric values: https://github.com/google/differential-privacy/blob/6a566908ce7c03f8ba478d39ddd45a0ff15cbb60/java/main/com/google/privacy/differentialprivacy/LaplaceNoise.java#L126

I need to limit / bound the domain / range of the resulting values.
A simple solution would be to do rejection sampling:
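For illustration, such a rejection loop might be sketched as follows (all names here are hypothetical, and `sampleLaplace` is a naive inverse-CDF sampler rather than the library's secure one):

```java
import java.util.Random;

// Hypothetical sketch of the rejection-sampling idea: redraw the
// noise until the noised value lands in [lower, upper]. Note that
// every rejected draw consumes privacy budget that this naive loop
// does not account for, which is why it is problematic for DP.
public class RejectionSampling {
  private static final Random RNG = new Random();

  // Laplace sample with scale b (inverse-CDF method).
  static double sampleLaplace(double b) {
    double u = RNG.nextDouble() - 0.5; // uniform in [-0.5, 0.5)
    return -b * Math.signum(u) * Math.log(1.0 - 2.0 * Math.abs(u));
  }

  static double noiseUntilInBounds(double x, double lower, double upper,
                                   double l1Sensitivity, double epsilon) {
    double noised;
    do {
      noised = x + sampleLaplace(l1Sensitivity / epsilon);
    } while (noised < lower || noised > upper);
    return noised;
  }

  public static void main(String[] args) {
    System.out.println(
        noiseUntilInBounds(615.0, 300.0, 1200.0, 20.0, 1.0));
  }
}
```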
But this does not seem to be ideal in regard to DP.
There is an implementation for a "Bounded Laplace Mechanism" in IBM's differential-privacy python library that seems to be more suitable for my requirements: https://github.com/IBM/differential-privacy-library/blob/0e6cea92d8739a5658f6f38817219d9461b35c3f/diffprivlib/mechanisms/laplace.py#L282
They refer to the following publication:
Holohan, Naoise, Spiros Antonatos, Stefano Braghin, and Pól Mac Aonghusa. 2019. “The Bounded Laplace Mechanism in Differential Privacy”. Journal of Privacy and Confidentiality 10 (1). https://doi.org/10.29012/jpc.715
I'd like to use that Bounded Laplace Mechanism in Java.
Could you please consider enhancing the Java library to support a Bounded Laplace Mechanism similar to the implementation in IBM's Python library?
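For reference, the sampling step of such a mechanism can be sketched as inverse-CDF sampling from a Laplace distribution truncated to [lower, upper]. This sketch shows only that sampling step: for the mechanism of Holohan et al. to actually satisfy ε-DP, the scale must additionally be recalibrated (enlarged) for the bounded domain, which is not shown here; the plain `b = sensitivity / epsilon` used in `main` is only a placeholder, and all names are hypothetical.

```java
import java.util.Random;

// Sketch of the sampling step of a bounded Laplace mechanism:
// draw from Laplace(mu, b) conditioned on the result lying in
// [lower, upper], via inverse-CDF sampling. NOTE: the scale b must
// be recalibrated per Holohan et al. (2019) to preserve epsilon-DP
// on the bounded domain; that calibration is omitted here.
public class TruncatedLaplace {
  private static final Random RNG = new Random();

  // CDF of Laplace(mu, b).
  static double cdf(double x, double mu, double b) {
    if (x < mu) {
      return 0.5 * Math.exp((x - mu) / b);
    }
    return 1.0 - 0.5 * Math.exp(-(x - mu) / b);
  }

  // Inverse CDF of Laplace(mu, b).
  static double inverseCdf(double p, double mu, double b) {
    if (p < 0.5) {
      return mu + b * Math.log(2.0 * p);
    }
    return mu - b * Math.log(2.0 * (1.0 - p));
  }

  // Sample Laplace(mu, b) restricted to [lower, upper] by drawing a
  // uniform value between CDF(lower) and CDF(upper) and inverting.
  static double sampleTruncated(double mu, double b,
                                double lower, double upper) {
    double pLower = cdf(lower, mu, b);
    double pUpper = cdf(upper, mu, b);
    double p = pLower + RNG.nextDouble() * (pUpper - pLower);
    return inverseCdf(p, mu, b);
  }

  public static void main(String[] args) {
    double b = 20.0 / 1.0; // placeholder scale, NOT the recalibrated one
    System.out.println(sampleTruncated(615.0, b, 300.0, 1200.0));
  }
}
```

Unlike rejection sampling, this draws exactly one value per release, so there is no hidden budget consumption from rejected draws; the privacy cost is then entirely a question of choosing the correct scale.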