csiro-mlai / deep-mpc


A question on mixed-circuit computation in "Secure Quantized Training for Deep Learning" #13

Open rdeviti opened 1 month ago

rdeviti commented 1 month ago

Hi Marcel, hope you are doing well. I have a question about the 3PC protocol in appendix A of "Secure Quantized Training for Deep Learning", which leverages mixed-circuit computation. From my understanding of the paper, whenever an ML building block (appendix C) requires comparisons, shifts, or truncation (appendix A), the protocol runs domain conversion from arithmetic to binary, executes the operation there, switches back to arithmetic, and proceeds. Is this correct? Does the protocol make this decision based on input parameters (e.g., the number of comparisons), or do you know empirically that it is always best to switch back and forth? Thank you!
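To make the data flow concrete, here is a minimal cleartext sketch of the arithmetic-to-binary-and-back pattern described above, using an unsigned less-than comparison as the example operation. This is purely illustrative: plain integers stand in for secret shares, and the function names (`a2b`, `b2a`, `mixed_lt`) are my own, not MP-SPDZ identifiers. In the actual protocols each step runs on shared values (e.g., using local share conversion and Boolean circuits), not on plaintext.

```python
K = 16  # bit width of the arithmetic domain Z_{2^K}

def a2b(x):
    """Arithmetic -> binary: decompose x mod 2^K into K bits, LSB first.
    In a real protocol this is a share conversion, not a local decomposition."""
    return [(x >> i) & 1 for i in range(K)]

def bitwise_less_than(xb, yb):
    """Unsigned x < y evaluated as a Boolean circuit over bit decompositions.
    Scans from LSB to MSB; the most significant differing bit decides."""
    lt = 0
    for xi, yi in zip(xb, yb):
        eq = 1 - (xi ^ yi)            # 1 iff the bits agree at this position
        lt = (1 - xi) * yi + eq * lt  # y's bit wins here unless bits are equal
    return lt

def b2a(bit):
    """Binary -> arithmetic: re-embed the single result bit in Z_{2^K}."""
    return bit % (1 << K)

def mixed_lt(x, y):
    """The full round trip: convert, compare in the binary domain, convert back."""
    return b2a(bitwise_less_than(a2b(x), a2b(y)))
```

The point of the round trip is that the comparison circuit consists only of cheap bit operations, which is where the cost advantage discussed below comes from.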

mkskeller commented 1 month ago

This is correct in principle, but a plain conversion of the secret value from one domain to the other is usually not the most efficient approach. See Figures 9-12 in the following paper for examples: https://eprint.iacr.org/2020/338

The selection is made empirically. For the three-party protocol in Appendix A, the cost of bit operations is so much lower than that of arithmetic operations that I have no doubt mixed circuits perform better. Tables 7 and 8 in the paper above support this view: the cost is considerably lower in many cases, and that is without the local conversion used in the protocol mentioned above.

rdeviti commented 1 month ago

Thank you for the link! Ok, so Tables 7 and 8 suggest mixed-circuit computation is better for honest-majority protocols modulo 2^k in both the malicious and semi-honest settings. That's really useful. Could you please point me to the C++ logic in MP-SPDZ that changes domains before/after executing, e.g., a comparison, for one of these protocols? Thank you again!