AlexanderViand opened this issue 3 years ago
Ideally, we would also benchmark Cingulata's built-in implementation, but that might not be feasible. The same goes for Alchemy.
Python script for plotting added in 792b09c18a3489a8ccc20ff938c91b952caa972e.
Please use the following column names for the CSV of the measurements:
(✔︎): Parameter configured in current benchmark implementation.
Parameter | Value | SEAL | PALISADE | HElib |
---|---|---|---|---|
Plaintext modulus (t/p) | 2^64 | | | |
Security level (λ) | 128 bit | ✔︎ | | |
Cyclotomic order (n) | 2 | | | |
Ctxt coefficient modulus (q) | | | | |
RLWE error distribution (χ) | 3.2 | ✔︎ | | |
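For SEAL, hardcoding these would boil down to something like the sketch below (SEAL 3.6-style API; the ring dimension and plaintext modulus used here are just placeholders, not the final values from the table):

```cpp
// Minimal sketch of pinning BFV parameters in SEAL (3.6-style API).
// The concrete degree and plaintext modulus below are placeholders.
#include "seal/seal.h"
#include <iostream>

int main() {
    using namespace seal;

    EncryptionParameters parms(scheme_type::bfv);

    // Placeholder ring dimension; would be fixed identically across libraries.
    size_t poly_modulus_degree = 8192;
    parms.set_poly_modulus_degree(poly_modulus_degree);

    // Default coefficient modulus (q) for 128-bit security at this degree.
    parms.set_coeff_modulus(CoeffModulus::BFVDefault(poly_modulus_degree));

    // Placeholder plaintext modulus (t): a batching-friendly 20-bit prime.
    parms.set_plain_modulus(PlainModulus::Batching(poly_modulus_degree, 20));

    // SEAL validates the parameters against its 128-bit security estimate.
    SEALContext context(parms, true, sec_level_type::tc128);
    std::cout << "parameters ok: " << std::boolalpha
              << context.parameters_set() << std::endl;
    return 0;
}
```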
TODO
@AlexanderViand It looks like PALISADE implements only FHEW, not TFHE, even though their README lists both the Ducas-Micciancio (FHEW) and Chillotti-Gama-Georgieva-Izabachene (TFHE) schemes for Boolean circuit evaluation. Should I just use FHEW instead?
Oh, that's interesting! But sure, let's compare it anyway - might be interesting to see if there's a significant difference to TFHE
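For reference, PALISADE's FHEW-style Boolean path goes through its binfhe module; a minimal gate evaluation would look roughly like this (sketch modelled on PALISADE's Boolean examples, so treat the exact header/enum names as assumptions against whatever version we pin):

```cpp
// Sketch of a single Boolean gate evaluation with PALISADE's binfhe module.
#include "binfhecontext.h"

using namespace lbcrypto;

int main() {
    // 128-bit security parameter set (the bootstrapping method is whatever
    // the pinned PALISADE version defaults to).
    auto cc = BinFHEContext();
    cc.GenerateBinFHEContext(STD128);

    auto sk = cc.KeyGen();   // LWE secret key
    cc.BTKeyGen(sk);         // bootstrapping / key-switching keys

    auto ct1 = cc.Encrypt(sk, 1);
    auto ct2 = cc.Encrypt(sk, 0);

    // Each binary gate triggers one bootstrapping operation.
    auto ctAND = cc.EvalBinGate(AND, ct1, ct2);

    LWEPlaintext result;
    cc.Decrypt(sk, ctAND, &result);  // expect 0
    return 0;
}
```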
Benchmarking the libraries against each other:
Micro-benchmarks
Libraries to evaluate:

- BFV/BGV
- CKKS
- CGGI

Parameter settings to evaluate:
We should fix two plaintext space sizes, e.g. 1-bit (binary) and 64-bit, and use those for all benchmarks, just so that we don't blow up the space of possibilities too much. For levelled schemes, we should evaluate both with 1-mult parameters and with, e.g., 10 levels.
For binary-only libraries like TFHE, the 64-bit version then uses an adder/multiplier circuit. In that case, Ctxt-Ptxt optimizations are possible but not implemented, which is fine, since this isn't the core of the paper anyway. For a fair comparison among implementations of the same scheme, we should hardcode the parameters to be equal everywhere.
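To make the adder-circuit point concrete, a single ripple-carry stage over TFHE gate-bootstrapping ciphertexts would look roughly like this (sketch only: key generation and parameter setup are omitted, and a 64-bit add just chains 64 of these stages):

```cpp
// Sketch of one full-adder stage on TFHE gate-bootstrapping ciphertexts.
// sum = a XOR b XOR carry_in; carry_out = majority(a, b, carry_in).
#include <tfhe/tfhe.h>

void full_adder(LweSample* sum, LweSample* carry_out,
                const LweSample* a, const LweSample* b,
                const LweSample* carry_in,
                const TFheGateBootstrappingCloudKeySet* bk) {
    const TFheGateBootstrappingParameterSet* params = bk->params;
    LweSample* tmp = new_gate_bootstrapping_ciphertext_array(3, params);

    bootsXOR(&tmp[0], a, b, bk);              // a XOR b
    bootsXOR(sum, &tmp[0], carry_in, bk);     // sum bit

    bootsAND(&tmp[1], a, b, bk);              // a AND b
    bootsAND(&tmp[2], &tmp[0], carry_in, bk); // (a XOR b) AND carry_in
    bootsOR(carry_out, &tmp[1], &tmp[2], bk); // carry out

    delete_gate_bootstrapping_ciphertext_array(3, tmp);
}
```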
Finally, we could consider implementing the benchmark applications (NN, cardio, Chi-Squared) for each library, but this is out of scope for the current timeframe.