Closed kanav99 closed 1 week ago
I don't think there is a clearly defined notion of optimal to give a conclusive answer. For example, you could reduce the precision and still get acceptable results depending on the data. Similarly, the optimal number of threads depends on the hardware as well as the computation. What I can say is that I don't see any obvious issues with your approach. Since you are looking for an online-only benchmark, using edaBits is almost certainly faster than not doing so. However, one could argue that a cut-down online-only benchmark would use matrix triples instead of computing matrix multiplications and convolutions from basic triples. As a result, you will probably find that the cost is dominated by the convolution layers, which might not be the case when using matrix triples.
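For context on "computing matrix multiplications and convolutions from basic triples": a single secret multiplication from a Beaver triple works roughly as below. This is a minimal plain-Python sketch using additive shares over Z_2^64 between two parties, with no MACs or malicious security (so it is not SPDZ2k itself), and all function names are hypothetical:

```python
import random

M = 2 ** 64  # ring Z_{2^64}, matching compilation with -R 64

def share(x):
    """Additively share x between two parties."""
    r = random.randrange(M)
    return [r, (x - r) % M]

def reconstruct(sh):
    return sum(sh) % M

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c = a*b)."""
    # Parties open the masked differences d = x - a and e = y - b.
    d = reconstruct([(x_sh[i] - a_sh[i]) % M for i in range(2)])
    e = reconstruct([(y_sh[i] - b_sh[i]) % M for i in range(2)])
    # Local share of x*y = c + d*b + e*a + d*e (public term added by one party).
    z = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % M for i in range(2)]
    z[0] = (z[0] + d * e) % M
    return z

a, b = random.randrange(M), random.randrange(M)
a_sh, b_sh, c_sh = share(a), share(b), share(a * b % M)
x, y = 12345, 67890
z_sh = beaver_mul(share(x), share(y), a_sh, b_sh, c_sh)
assert reconstruct(z_sh) == x * y % M
```

An n×n matrix product built this way consumes one triple per scalar multiplication, on the order of n^3 triples, whereas a matrix triple covers the whole product at once; that difference is why the choice of triple type can shift which layers dominate the benchmark.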
Thanks for the clarifications!
In the program above, I have the line `sfix.set_precision(16, 31)`, and I am compiling with the command `python3 compile.py -R 64`.

Ideally, I want 31-bit fixed-point values with 16-bit precision and SPDZ2k slack s = 64 (for malicious security), and hence secret sharing over a 128-bit ring (64 bits to fit the intermediate result of a fixed-point multiplication and 64 bits for the slack). For this, should I have instead used `sfix.set_precision(16, 31)` and `python3 compile.py -R 128`?
No, the compilation is independent of the SPDZ2k security parameter, so `-R 64` is the right choice.
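To see why the 64-bit ring suffices for the computation itself: with `sfix.set_precision(16, 31)`, the product of two 31-bit fixed-point values needs at most 62 bits before truncation, which fits in 64 bits; the SPDZ2k slack is handled by the protocol at runtime rather than by the compiler. A rough plain-Python sketch of this sizing arithmetic (not MP-SPDZ internals):

```python
# Fixed-point sizing behind sfix.set_precision(16, 31) and compile.py -R 64.
f, k = 16, 31        # 16 fractional bits, 31-bit values
ring_bits = 64       # ring size passed to the compiler via -R

def encode(x, frac=f):
    """Map a real number to a fixed-point integer with `frac` fractional bits."""
    return round(x * 2 ** frac)

a, b = encode(1.5), encode(2.25)
prod = a * b                      # intermediate result before truncation
# The product of two k-bit values needs at most 2k bits:
assert 2 * k <= ring_bits         # 62 <= 64, so -R 64 suffices
assert prod.bit_length() <= 2 * k
# Truncating f bits returns the product to f fractional bits:
assert (prod >> f) == encode(1.5 * 2.25)
```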
Thanks for the clarification!
I want to benchmark two-party batch inference (not training) of HiNet (based on this code), running it with SPDZ2k. I want to make sure that I am using the right code and steps to get the numbers.
This is the code I am using:
I compile it using:
And run it using:
Also, I have commented out this line, as I don't need to compute the softmax. The MP-SPDZ code was compiled with the `-DINSECURE` flag.

Would you be able to verify whether this is the optimal way to run this task?
Thanks