Some clarifications
As mentioned in the Milagro docs, lazy carry and lazy reduction are separate, independent concepts.
Most prime modulus bitsizes do not exactly match a multiple of the logical word size of the library (2^31 or 2^63). For example the BN254 curve and the BLS12-381 curve leave respectively 63*5 - 254 = 61 and 63*7 - 381 = 60 unused bits in the last word. This can be used to accumulate up to 60~61 additions without worrying about overflowing the physical representation.
And we would need to subtract up to 60~61 * P to return to a fully reduced representation.
As an optimization we can stay in an unreduced representation by conditionally subtracting half that range after that many additions, taking advantage of the fact that operands are unlikely to be of the same order of magnitude as the prime, 2^254 or 2^381.
Or we can use a logarithmic approach by conditionally subtracting half, then a quarter, then an eighth, then ... of the excess range. Given a word-bit size of 2^31 or 2^63, the excess bits are at most 30 or 62, and so we would need 4~5 conditional subtractions every 30~62 additions.
As far as I know this is a novel approach that is not mentioned in the literature and was never used in software.
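A minimal single-word sketch of this logarithmic reduction, assuming a toy modulus that fits one 64-bit word; the name `logReduce` and the constant-time select idiom are illustrative, the real code would operate on multi-limb bigints with dedicated constant-time primitives:

```nim
func logReduce(a, p: uint64; maxExcess: uint64): uint64 =
  ## Reduce `a` from [0, (maxExcess+1)*p) to [0, p) using ~log2(maxExcess)
  ## conditional subtractions of k*p for decreasing powers of two k.
  ## Assumes k*p fits in 64 bits (toy sizes).
  result = a
  var k = 1'u64
  while k < maxExcess + 1:        # smallest power of two with a < k*p
    k = k shl 1
  while k > 1'u64:
    k = k shr 1
    let diff = result - k*p       # wraps around if result < k*p
    let borrow = 0'u64 - uint64(diff > result)  # all-ones on wrap-around
    # constant-time select: keep `result` on borrow, else take `diff`
    result = (result and borrow) or (diff and not borrow)
```

Each pass halves the remaining excess range, so the invariant `result < k*p` holds on exit of every iteration and the final value is fully reduced.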
Subtraction of a reduced/unreduced operand by an unreduced operand must be handled specially:
4p + a - b
will always be in the correct range when used in curve point addition and point doubling. Note that this may not hold for extension fields.
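A toy single-word illustration of that special-cased subtraction, assuming 2 spare bits so that values up to 4p remain representable (the name `subUnreduced` is hypothetical):

```nim
func subUnreduced(a, b, p: uint64): uint64 =
  ## Computes a - b in the redundant representation by adding 4p first:
  ## since b < 4p the result never goes negative, and with 2 excess bits
  ## a + 4p still fits in the physical representation.
  result = a + 4*p - b
```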
edit: This is also mentioned in Michael Scott's paper Slothful reduction, p6 and p8.

With lazy reduction we have a redundant representation that can represent 2p, 3p, 4p, ..., 8p if there are enough excess bits. This may help Montgomery multiplication and exponentiation by avoiding the need for the last conditional subtraction (Walter, 1999; Hachez & Quisquater, 2000; Bos & Montgomery, 2017; and many hardware papers like Örs-Batina-Preneel-Vandewalle or Walter, 2017).

Note that the Almost Montgomery Multiplication of Gueron, which checks the MSB to know if a reduction is needed, will leak information, and we would still need to estimate whether we need to remove p, 2p, ..., MaxExcess * p after each subtraction.
Lazy carries are (almost) independent from lazy reductions: here the excess bits are within each machine word. This allows the current carry-propagating addition to be replaced by:
```nim
func add*(a: BigIntViewMut, b: BigIntViewAny) =
  checkMatchingBitlengths(a, b)
  for i in 0 ..< a.numLimbs():
    a[i] += b[i]
    a[i] = a[i].mask()
```
```nim
func modular_add(a: BigIntViewMut, b: BigIntViewAny, m: BigIntViewConst) =
  ## This is pseudocode and would be implemented at the Field level,
  ## not the BigInt level.
  a.add(b)
  if a.potential_carries == MaxPotentialCarries:
    a.normalizeCarries()
```
This removes loop-carried dependencies and enables easy vectorization by the compiler. The only potential limitation is that the compiler cannot infer the loop unrolling, but the function becomes small enough that monomorphization + inlining would probably not increase code size. (Separate functions require parameter and stack bookkeeping before the call, after the call, and within the function body; this can easily require 10 to 20 instructions before doing anything useful. An inlined multi-limb addition + mask on fewer than 20 limbs wouldn't increase code size.)
They also enable multiplication by constants smaller than the current word excess without converting them to Montgomery form and using a full-blown Montgomery multiplication. This is useful for extension fields built from irreducible polynomials involving sqrt(-2) or sqrt(-5).
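A sketch of such a small-constant multiplication on a hypothetical radix-2^52 representation (the name `mulSmallConst` and the limb count are illustrative):

```nim
func mulSmallConst(a: var array[5, uint64]; c: uint64) =
  ## Multiply in place by a small constant c < 2^12 (the word excess).
  ## Each limb holds < 2^52, so c * limb < 2^64 cannot overflow;
  ## the products' high bits pile up in the excess bits, to be
  ## normalized by a later carry-propagation pass.
  for i in 0 ..< a.len:
    a[i] = a[i] * c
```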
For normalization of lazy carries there is an optional coupling with lazy reduction (hence the "almost" at the beginning of the paragraph): all the lazy carries must be accumulated somewhere. Either they fit in the extra space of the last word (WordBitSize - ModulusBitSize - WordExcess) and we can handle this unreduced representation like in the lazy reduction case, or we need an extra temporary word.
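A minimal sketch of such a carry-normalization pass, again assuming a radix-2^52 toy representation (the name `normalizeCarries` matches the pseudocode above, the body is illustrative):

```nim
func normalizeCarries(a: var array[5, uint64]) =
  ## Propagate the accumulated lazy carries so that each limb is
  ## brought back below 2^52 (the logical word size).
  const WordBitSize = 52
  const Mask = (1'u64 shl WordBitSize) - 1
  var carry = 0'u64
  for i in 0 ..< a.len:
    let t = a[i] + carry    # assumes limbs haven't saturated all 64 bits
    a[i] = t and Mask
    carry = t shr WordBitSize
  # Any carry out of the top limb must fit in the last word's spare bits
  # or spill into an extra temporary word, as discussed above.
```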
Beginning of a proof-of-concept here: https://github.com/mratsim/bignum/commit/2673a043691862dd2cb1c594775260091826488d
This is very easy to build for addition/subtraction/negation, as for each limb u of an operand we can ensure the invariant u <= 2^ExcessBits * modulus.limbs[i] (which holds in particular for u == 0). With lazy carries all additions/subtractions/negations are carryless thanks to the extra space per word.
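For instance, under that invariant a borrowless negation is possible: computing 2^ExcessBits * p - a limb-wise never underflows. A hedged sketch with illustrative names:

```nim
func negNoBorrow(a: var array[5, uint64]; p: array[5, uint64]; excessBits: int) =
  ## Limb-wise 2^excessBits * p - a: the invariant
  ## a[i] <= 2^excessBits * p[i] guarantees no per-limb underflow,
  ## and p[i] < 2^(64 - excessBits) so the shift cannot overflow.
  for i in 0 ..< a.len:
    a[i] = (p[i] shl excessBits) - a[i]
```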
Full reduction is also straightforward.
However with multiplication, a limb product can carry over into the next limb: can it also exceed the u <= 2^ExcessBits * modulus.limbs[i] bound? Dealing with those cases may introduce a lot of complexity.
Lazy reduction instead is probably much less error-prone to implement, since only the high word holds the excess carries. It is however quite restricted, as we can only have 2 excess bits for BN254 (4*64 = 256) or 3 for BLS12-381 (6*64 = 384).
None of the pairing curves have a special form that makes lazy reduction with a logical word smaller than the physical word worth it. Scott's paper Slothful reduction, or the logarithmic reduction I mentioned, makes debugging more complex and just delays the inevitable.
However, in some select cases we can use such techniques to delay reduction, but making that constant-time would still require a table lookup that scans all the potential n*p multiples of the field modulus p to select the appropriate one for reduction.
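A toy single-word sketch of such a constant-time scan, assuming the multiples 0, p, 2p, ..., MaxExcess*p are precomputed (the name `selectMultiple` is hypothetical):

```nim
func selectMultiple(excess: uint64; pMultiples: openArray[uint64]): uint64 =
  ## Returns pMultiples[excess] while touching every entry, so the
  ## memory access pattern does not depend on `excess`.
  for i in 0 ..< pMultiples.len:
    let match = 0'u64 - uint64(uint64(i) == excess)  # all-ones if i == excess
    result = result or (pMultiples[i] and match)
```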
Furthermore, https://github.com/mratsim/constantine/pull/17 adds no-carry Montgomery multiplication and squaring that is usable for most pairing-friendly curves (up to 15% perf improvement). Optimizing multiplications and squarings is probably more important than additions, and forcing a reduction check before each one is costly. Even if we use a redundant representation with 2 extra bits to allow storing up to 4p, enabling "final subtraction"-less modular multiplication, that may just be paying for the subtraction upfront.
For posterity, Patrick Longa's PhD thesis dedicates 30 pages to lazy reduction and scheduling techniques to reduce or erase data dependencies, followed by a dedicated section on pairings. Notably it gives subtraction formulas to canonicalize a lazily reduced field element in the case of double-precision arithmetic (i.e. using only 32 bits out of a 64-bit limb and leaving the extra 32 for carries). This is followed by multiplication formulas in Fp2, Fp6 and Fp12, and squarings in Fp12, that use the lazy reduction.
https://uwspace.uwaterloo.ca/handle/10012/5857
However the space cost is high, especially in constant-time settings where you have to iterate over all the space all the time. Using more space also doesn't play well with CPU caches, which are significantly more expensive than a data dependency. Lastly, while at that time the ADC instruction had a data-dependency cost of 6 cycles, recent Intel CPUs have a data-dependency cost of only 2 cycles (but only 1 execution port AFAIK) provided you update a register (and not a memory location), and AMD is even less costly.
Context
Currently, after each addition or subtraction step, a reduction is done if the result is over the field modulus.
Due to constant-time constraints there is no shortcut if it is unnecessary: the memory accesses are always done.
Instead, at the cost of a couple of bits, we could use lazy carries/reductions.
Instead of using 31 bits of 32-bit words or 63 bits of 64-bit words, we use fewer bits in each word. For example, assuming we use 26 bits and 52 bits respectively of 32-bit or 64-bit words, we only need to reduce every 6 or 12 additions respectively. This is desirable in particular for addition chains.
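A toy check of that cadence for the 26-bit case: adding two worst-case unreduced operands can double the magnitude each time, so 6 excess bits buy exactly 6 back-to-back additions before normalization is required.

```nim
var x = (1'u32 shl 26) - 1      # a saturated 26-bit limb
for _ in 1 .. 6:                # worst case: each addition doubles the magnitude
  x = x + x
doAssert x == 0xFFFF_FFC0'u32   # (2^26 - 1) * 2^6 = 2^32 - 2^6: still fits
# A 7th doubling would overflow the 32-bit physical word.
```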
Choosing a default: Maximizing the usage of the word overhead
256-bit curves are quite common, as is the closely-sized 254-bit Barreto-Naehrig curve for zkSNARKs.
Representing a 256-bit or 254-bit curve in the most compact way on a 64-bit arch requires 4 words. With lazy carries, 52-bit logical words with 12 lazy bits, or 51-bit logical words with 13 lazy bits, would be the best logical word bitsizes. Both can represent integers below 2^255 in 5 words (but a radix-2^52 representation can also represent the numbers 2^255 and 2^256 in 5 words).
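A quick sanity check of that limb-count arithmetic (illustrative only):

```nim
static:
  doAssert 4 * 64 == 256    # compact form: 4 physical 64-bit words
  doAssert 5 * 52 == 260    # radix 2^52: covers 2^256 in 5 limbs
  doAssert 5 * 51 == 255    # radix 2^51: covers integers below 2^255
```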
Side-note on SIMD
This may also enable opportunities for SIMD vectorization using either integer or floating-point math, for example the AVX512-IFMA instructions VPMADD52LUQ and VPMADD52HUQ, which operate on 52-bit limbs.
Using floating point for pairings is covered in these papers:

New software speed records for cryptographic pairings
Michael Naehrig, Ruben Niederhagen, and Peter Schwabe
https://cryptojedi.org/papers/dclxvi-20100714.pdf

Pairings on elliptic curves -- parameter selection and efficient computation
Michael Naehrig
https://cryptosith.org/michael/data/talks/2010-10-19-ECC.pdf
Side-note on "final substraction"-less Montgomery Multiplication/Exponentiation
With well-chosen word-size that allow redundant representations we can avoid final substraction in Montgomery Multiplication and Exponentiation:
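A short sketch of the standard bound argument (following Walter, 1999), where m < R is the Montgomery quotient and (ab + mp)/R the pre-reduction Montgomery product: if the Montgomery constant R exceeds 4p and inputs stay below 2p, outputs stay below 2p too, so the representation is closed under multiplication without a final subtraction.

```latex
a, b < 2p,\quad m < R,\quad R > 4p
\;\Longrightarrow\;
\frac{ab + mp}{R} < \frac{4p^2 + Rp}{R} = p\left(\frac{4p}{R} + 1\right) < 2p
```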
Implementation strategy
The implementation steps would be:

- `add` and `sub` may have to return a carry `Word` instead of a `CtBool`, as the carry is not 0 or 1 anymore (but we never add the carry; it is used as an input for an optional reduction, so it might be unneeded). See the signature sketch below.
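A hedged sketch of what that signature could look like, assuming a 52-bit logical word size (names illustrative, not Constantine's actual API):

```nim
const WordBitSize = 52  # logical bits per 64-bit physical word (illustrative)

func add(a: var openArray[uint64]; b: openArray[uint64]): uint64 =
  ## Lazy-carry addition: no carry propagation between limbs.
  ## The returned carry is the top limb's excess, a full word value
  ## that may exceed 1 (hence Word rather than CtBool).
  for i in 0 ..< a.len:
    a[i] = a[i] + b[i]
  result = a[a.len - 1] shr WordBitSize
```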
References

Incomplete Reduction in Modular Arithmetic
T. Yanik, E. Savaş, and Ç. K. Koç, 2002
http://pdfs.semanticscholar.org/6f63/a8ae9a43777c710be44dfd7ee5b4c0f3defc.pdf

Curve25519: new Diffie-Hellman speed records
Daniel J. Bernstein, 2006
https://cr.yp.to/ecdh/curve25519-20060209.pdf

Faster Explicit Formulas for Computing Pairings over Ordinary Curves
Diego F. Aranha, Koray Karabina, Patrick Longa, Catherine H. Gebotys, Julio López, 2011
https://www.iacr.org/archive/eurocrypt2011/66320047/66320047.pdf

Elliptic Curve Cryptography at High Speeds
Patrick Longa, 2011
http://ecc2011.loria.fr/slides/longa.pdf

Efficient Implementation of Bilinear Pairings on ARM Processors
Gurleen Grewal, Reza Azarderakhsh, Patrick Longa, Shi Hu, and David Jao, 2012
https://eprint.iacr.org/2012/408.pdf

Efficient implementation of finite-field arithmetic
Peter Schwabe
https://cryptojedi.org/peter/data/pairing-20131122.pdf

Faster Compact Diffie-Hellman: Endomorphisms on the x-line
Craig Costello, Huseyin Hisil, Benjamin Smith, 2014
https://hal.inria.fr/hal-00932952/document

Software and Hardware Implementation of Elliptic Curve Cryptography
Jérémie Detrey, 2015
https://www.math.u-bordeaux.fr/~aenge/ecc2015/documents/detrey.pdf

Fast Software Implementations of Bilinear Pairings
Reza Azarderakhsh, Dieter Fishbein, Gurleen Grewal, Shi Hu, David Jao, Patrick Longa, and Rajeev Verma, 2016
http://cacr.uwaterloo.ca/techreports/2016/cacr2016-03.pdf

Slothful reduction
Michael Scott, 2017
https://eprint.iacr.org/2017/437