Closed by haochenuw 5 years ago
Let me summarize the problem here for other users. In short, this issue is caused by some smart-but-too-smart tricks we did in invariant_noise_budget(...), and it only affects fresh encryptions, or possibly ciphertexts after very few levels of multiplication, when the plain modulus is chosen to be large. We will have a fix for this ready in the next minor release.
Here is a more detailed explanation. The decryption of a ciphertext has the form dec = delta * m + noise, where delta = floor(q / p) is the largest integer smaller than or equal to q/p, p is the chosen plaintext modulus, q is the chosen ciphertext modulus, and m is the plaintext polynomial. The noise budget should be calculated as log(q) - log(p) - log(noise), where noise denotes the infinity norm (the largest absolute coefficient) of the noise polynomial. However, in SEAL we calculate the noise budget as log(q) - log(dec * p % q), which differs from the accurate calculation. Since delta is not exactly q/p, our calculation introduces an error term (q % p) * m, so the reported noise budget is in fact log(q) - log(p) - log(noise - (q % p) * m). For a fresh encryption, or for ciphertexts resulting from very few levels of multiplication, a large choice of p makes (q % p) * m larger in infinity norm than noise, resulting in a smaller noise budget than the accurate calculation would give. This can easily be fixed by computing the correct decryption in invariant_noise_budget(...).
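For concreteness, here is a short sketch of the algebra behind that error term, using only the definitions above: since delta = floor(q / p), we have delta * p = q - (q % p), and therefore

```latex
\mathrm{dec}\cdot p
  = \Delta\, p\, m + p\cdot \mathrm{noise}
  = \bigl(q - (q \bmod p)\bigr)\, m + p\cdot \mathrm{noise}
  \equiv p\cdot \mathrm{noise} - (q \bmod p)\, m \pmod{q}.
```

Reducing dec * p modulo q therefore carries the extra (q % p) * m term, which is exactly the error term described above.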
I'm closing this issue now.
It seems like for poly_modulus_degree equal to 16384 and a 40-bit plaintext modulus that enables batching, the noise budget of an encryption of zero is 340 bits, but the noise budget of an encryption of a random vector is only 310 bits. The code below reproduces this issue.
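The original snippet is not included here; the following is a minimal sketch of the kind of experiment described, assuming a recent Microsoft SEAL C++ API (3.6-style) and hypothetical variable names:

```cpp
#include "seal/seal.h"
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

using namespace seal;
using namespace std;

int main()
{
    // BFV parameters matching the report: n = 16384, 40-bit batching-friendly plain modulus.
    EncryptionParameters parms(scheme_type::bfv);
    size_t poly_modulus_degree = 16384;
    parms.set_poly_modulus_degree(poly_modulus_degree);
    parms.set_coeff_modulus(CoeffModulus::BFVDefault(poly_modulus_degree));
    parms.set_plain_modulus(PlainModulus::Batching(poly_modulus_degree, 40));

    SEALContext context(parms);
    KeyGenerator keygen(context);
    SecretKey secret_key = keygen.secret_key();
    PublicKey public_key;
    keygen.create_public_key(public_key);

    Encryptor encryptor(context, public_key);
    Decryptor decryptor(context, secret_key);
    BatchEncoder encoder(context);

    // Encrypt the all-zero vector.
    vector<uint64_t> zeros(encoder.slot_count(), 0ULL);
    Plaintext zero_plain;
    encoder.encode(zeros, zero_plain);
    Ciphertext zero_ct;
    encryptor.encrypt(zero_plain, zero_ct);

    // Encrypt a random vector with entries in [0, plain_modulus).
    mt19937_64 rng(42);
    uint64_t t = parms.plain_modulus().value();
    vector<uint64_t> random_values(encoder.slot_count());
    for (auto &v : random_values)
    {
        v = rng() % t;
    }
    Plaintext random_plain;
    encoder.encode(random_values, random_plain);
    Ciphertext random_ct;
    encryptor.encrypt(random_plain, random_ct);

    // Compare the reported noise budgets; the report cites roughly 340 bits
    // for zero vs. 310 bits for a random vector.
    cout << "Noise budget (zero):   " << decryptor.invariant_noise_budget(zero_ct) << " bits" << endl;
    cout << "Noise budget (random): " << decryptor.invariant_noise_budget(random_ct) << " bits" << endl;
    return 0;
}
```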