Closed: ghost closed this issue 3 years ago
If you use `for` loops, replace them with `@for_range_opt`: https://mp-spdz.readthedocs.io/en/latest/Compiler.html#Compiler.library.for_range_opt. Using PyPy also speeds up compilation slightly.
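For illustration, a minimal sketch of what that replacement might look like in an `.mpc` program. Note this only compiles under MP-SPDZ's `./compile.py`, which injects `sint`, `MemValue`, `for_range_opt`, and `print_ln` into the program's namespace; the array size and the sum are assumptions for the example, not taken from this thread:

```python
# Sketch of an .mpc source file -- compiled with ./compile.py, not run as
# plain Python. sint, MemValue, for_range_opt, and print_ln are provided
# by the MP-SPDZ Compiler library.
n = 8192                      # illustrative input size
a = sint.Array(n)
total = MemValue(sint(0))

# Instead of a plain Python loop (for i in range(n): ...), which the
# compiler unrolls fully, @for_range_opt lets it trade unrolling
# against compile time according to the optimization budget.
@for_range_opt(n)
def _(i):
    total.iadd(a[i])

print_ln('sum: %s', total.reveal())
```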
Hm, that seemed to make it slower. Can compilation be run in parallel on different pieces of code without deadlocks or a performance impact? E.g., I have multiple layers, and I run
`./compile.py -B 32 code1.mpc`
`./compile.py -B 32 code2.mpc`
...
`./compile.py -B 32 coden.mpc`
in parallel.
That isn't really supported by the compiler. If using `@for_range_opt`, you can try to reduce the optimization budget with `-b`. However, in the end the compiler isn't optimized for speed, unlike the virtual machines.
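Concretely, an invocation with a reduced budget might look like the following (the budget value here is illustrative, not a recommendation):

```shell
# Lower the optimization budget to reduce loop unrolling and compile
# time; 1000 is an illustrative value -- tune it for your program.
./compile.py -b 1000 -B 32 code1.mpc
```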
> That isn't really supported by the compiler.

Does this mean there may be issues if multiple `./compile.py` instances are run at the same time? (It does seem to work so far.)
Of course you can do that, but there's no easy way of combining the execution.
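Since each invocation is an independent OS process, one way to fan them out is a small driver script. This is a sketch, with a stand-in command in place of `./compile.py` so it is self-contained; the file names and worker count are assumptions:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def compile_one(cmd):
    """Run one compiler invocation and return its exit code."""
    return subprocess.run(cmd, capture_output=True).returncode

# Stand-in commands; in practice these would be e.g.
# ["./compile.py", "-B", "32", "code1.mpc"], one per program.
commands = [
    [sys.executable, "-c", f"print('compiled code{i}')"]
    for i in range(1, 4)
]

# The invocations share nothing, so separate processes cannot
# deadlock; they only compete for CPU and memory.
with ThreadPoolExecutor(max_workers=3) as pool:
    exit_codes = list(pool.map(compile_one, commands))

print(exit_codes)  # → [0, 0, 0]
```

As the maintainer notes, this only parallelizes compilation; the resulting programs still execute separately.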
Got it. I think I can make this work. Thanks a lot for your assistance!
Hi, I'm compiling Yao garbled circuits with on the order of 8000-16000 inputs, and I'm finding compilation to be extremely slow (it takes more than a minute). Is there any way to speed it up? For reference, I'm performing sums and comparisons on the 8000-16000-dimensional arrays.