fhoering opened this issue 4 months ago
Hi @fhoering, we are working on monitoring documentation that will cover your questions. We will post the link here once it is published. Thanks!
In addition to more documentation, I think a real benchmark or write-up on usability would be interesting. Since the budget depends on sensitivity and is split across all metrics, it is difficult to see whether the noise level becomes significant. It really depends on the number of metrics, the number of received queries, and the budget reset period.
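For example, a quick back-of-the-envelope check already shows the noise growing linearly with the number of metrics. The even epsilon split and the Laplace noise with sensitivity 1 are my assumptions here, since the docs don't specify either:

```python
# Back-of-the-envelope sketch (my assumptions, not the actual server
# implementation): total epsilon split evenly across metrics, Laplace
# noise calibrated to a per-query sensitivity of 1.
import math

def laplace_noise_scale(total_epsilon: float, num_metrics: int,
                        sensitivity: float = 1.0) -> float:
    """Laplace scale b = sensitivity / epsilon_per_metric, where
    epsilon_per_metric = total_epsilon / num_metrics (even split)."""
    return sensitivity * num_metrics / total_epsilon

# epsilon = 5, the value I referenced in my original question.
for num_metrics in (10, 50, 100):
    b = laplace_noise_scale(5.0, num_metrics)
    # The standard deviation of Laplace(b) noise is b * sqrt(2).
    print(f"{num_metrics:3d} metrics -> b = {b:5.1f}, stddev ≈ {b * math.sqrt(2):5.1f}")
```

With 100 metrics sharing epsilon = 5 this already gives a noise stddev of ~28 per metric, which would drown out any counter that only increments a few times per reset period.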
Hi @fhoering, we have updated our playbook to add more details about server monitoring. The updated doc and its references should help address your questions. Please let us know if you have additional questions or feedback.
Regarding custom metrics initiated from UDFs, we are planning to support this feature and are currently working on the design. We will share more details once the feature is ready to use.
The monitoring metrics part is described here.
It mentions that the metrics are noised with DP, but not how the privacy budget is allocated. It seems the noising scheme applies global DP with an epsilon of 5 (see here).
Can you give more information on how the budget is allocated across metrics, how it is reset, and how the noise is added (Gaussian vs. Laplace, see the sketch below)?
Did you run any tests on how the current implementation affects metric usability (number of contributions and number of metrics vs. budget and max received queries)?
How was the decision made between noising and not noising metrics? What about memory and CPU metrics, for example?
What about custom metrics initiated from the UDFs?
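To make the Gaussian vs. Laplace question concrete, here is a minimal sketch of the two mechanisms as I understand them; the even budget split, the delta value, and the sensitivity bound of 1 are my assumptions, not something taken from the server code:

```python
# Minimal sketch of the two candidate mechanisms. Assumptions on my
# side (not the actual server behavior): even budget split across
# metrics, per-query sensitivity bounded to 1.
import numpy as np

rng = np.random.default_rng()

def laplace_mechanism(value: float, epsilon: float,
                      sensitivity: float = 1.0) -> float:
    """Pure epsilon-DP: add Laplace noise with scale sensitivity/epsilon."""
    return value + rng.laplace(scale=sensitivity / epsilon)

def gaussian_mechanism(value: float, epsilon: float, delta: float,
                       sensitivity: float = 1.0) -> float:
    """(epsilon, delta)-DP via the classic analytic bound
    sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    (valid for epsilon < 1)."""
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(scale=sigma)

# Example: 50 metrics sharing a total epsilon of 5 evenly.
eps_per_metric = 5.0 / 50
true_count = 1000
print("Laplace :", laplace_mechanism(true_count, eps_per_metric))
print("Gaussian:", gaussian_mechanism(true_count, eps_per_metric, delta=1e-6))
```

The choice matters for the budget split: Laplace gives pure epsilon-DP with heavier tails, while Gaussian needs a delta but composes better across many metrics.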