mratsim / constantine

Constantine: modular, high-performance, zero-dependency cryptography stack for verifiable computation, proof systems and blockchain protocols.

Implement lazy carries and reductions #15

Closed mratsim closed 4 years ago

mratsim commented 4 years ago

Context

Currently, after each addition or subtraction step, a reduction is performed if the result exceeds the field modulus.

Due to constant-time constraints there is no shortcut when the reduction is unnecessary: the memory accesses are always performed.

Instead, at the cost of a couple of bits, we could use lazy carries/reductions.

Instead of using 31 bits of each 32-bit word or 63 bits of each 64-bit word, we use fewer bits per word. For example, if we use 26 bits of a 32-bit word or 52 bits of a 64-bit word, we only need to reduce every 6 or 12 additions respectively, since each addition of two unreduced operands can at most double the accumulated excess, and 6 (respectively 12) spare bits absorb that many doublings. This is desirable in particular for addition chains.
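A minimal sketch of such a representation (assuming 52-bit logical words in 64-bit limbs; the names are illustrative, not Constantine's actual API):

```nim
# Lazy-carry representation: 52 logical bits per 64-bit limb,
# leaving 12 excess bits to absorb carries (illustrative sketch).
const
  LogicalBits = 52
  ExcessBits  = 64 - LogicalBits   # 12 spare bits per limb

type Limbs5 = array[5, uint64]     # radix 2^52: 5 limbs cover up to 2^260 - 1

func lazyAdd(a: var Limbs5, b: Limbs5) =
  ## Limb-wise addition with no carry chain: carries pile up in the
  ## excess bits, so up to ExcessBits worst-case (doubling) additions
  ## are safe before a normalization pass is required.
  for i in 0 ..< a.len:
    a[i] += b[i]
```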

Choosing a default: Maximizing the usage of the word overhead

256-bit curves are quite common, and the 254-bit Barreto-Naehrig curve used for zkSNARKs is close to that size.

Representing a 256-bit or 254-bit curve in the most compact way on a 64-bit architecture requires 4 words. Assuming lazy carries:

52-bit logical words with 12 lazy bits, or 51-bit logical words with 13 lazy bits, would be the best logical word bitsizes. Both can represent 2^255 integers in 5 words (but a radix-2^52 representation can also represent the numbers 2^255 and 2^256 in 5 words).

Side-note on SIMD

This may also enable opportunities for SIMD vectorization using either integer or floating-point math.

Using floating point for pairings is covered in this paper:

Side-note on "final substraction"-less Montgomery Multiplication/Exponentiation

With well-chosen word sizes that allow redundant representations, we can avoid the final subtraction in Montgomery multiplication and exponentiation.
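For intuition, the standard bound (following Walter): if the Montgomery radix R satisfies 4p < R and both inputs are kept in the redundant range [0, 2p), then the Montgomery product (a*b + q*p)/R with q < R is below (4p^2 + R*p)/R = 4p^2/R + p < 2p, so the output stays in [0, 2p) and the final conditional subtraction can be skipped.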

Implementation strategy

The implementation steps would be:

  1. Change the WordBitSize at https://github.com/mratsim/constantine/blob/d83101153aad30b7ae552effcae601b7d1221f56/constantine/config/common.nim#L17-L27. This can be made an {.intdefine.} for compile-time configuration.
  2. Ensure the encoding/decoding routines in io_bigints.nim properly deal with more than 1 unused bit.
  3. Field elements should now have a countdown that tracks how many potential carries are left given the free bits. If it reaches 0, they should call a normalization/reduction proc (see the sketch after this list).
  4. To be researched: add and sub may have to return a carry Word instead of a CtBool, as the carry is not 0 or 1 anymore (but we never add the carry; it is used as an input for an optional reduction, so it might be unneeded).
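As a rough illustration of step 3 (all names and the countdown policy here are hypothetical, not Constantine's API):

```nim
# Sketch: a field element that tracks how many carryless additions
# remain before the excess bits could overflow (hypothetical API).
const MaxPotentialCarries = 12     # e.g. 12 excess bits at radix 2^52

type Fp = object
  limbs: array[5, uint64]
  carriesLeft: int                 # countdown until normalization

func normalizeCarries(a: var Fp) =
  ## Propagate the accumulated carries, then reset the countdown.
  # (carry propagation + optional reduction elided in this sketch)
  a.carriesLeft = MaxPotentialCarries

func `+=`(a: var Fp, b: Fp) =
  ## Carryless limb-wise addition; normalize when the budget runs out.
  for i in 0 ..< a.limbs.len:
    a.limbs[i] += b.limbs[i]
  a.carriesLeft = min(a.carriesLeft, b.carriesLeft) - 1
  if a.carriesLeft <= 0:
    a.normalizeCarries()
```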


mratsim commented 4 years ago

Some clarifications

As mentioned in the Milagro docs, lazy carries and lazy reductions are separate, independent concepts.

Lazy reductions

Most prime modulus bitsizes do not exactly match a multiple of the logical word size of the library (radix 2^31 or 2^63): for example, the BN254 and BLS12-381 curves leave respectively 63*5 - 254 = 61 and 63*7 - 381 = 60 unused bits in the last word. This can be used to accumulate up to 60~61 additions without worrying about overflowing the physical representation, and we would need to subtract up to 60~61 * P to return to a fully reduced representation.
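In compile-time constants (WordBitSize is Constantine's name; the derived constants are illustrative):

```nim
const
  WordBitSize = 63                           # logical bits per 64-bit word
  ExcessBN254     = WordBitSize * 5 - 254    # 61 free bits in the last word
  ExcessBLS12_381 = WordBitSize * 7 - 381    # 60 free bits in the last word
```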

Not fully reducing

As an optimization, we can stay in an unreduced representation by conditionally subtracting half that range after that many additions, taking advantage of the fact that operands are unlikely to be of the same order of magnitude as the ~2^254 or ~2^381 prime.

Fully reducing in logarithmic time

Or we can use a logarithmic approach: conditionally subtract half, then a quarter, then an eighth, then ... of the excess range. Given a logical word radix of 2^31 or 2^63, the excess bits are at most 30 or 62, so we would need 4~5 conditional subtractions every 30~62 additions.

As far as I know this is a novel approach that is not mentioned in the literature and has never been used in software.
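A self-contained sketch of this logarithmic reduction, on plain 64-bit limbs, with the constant-time helpers written from scratch (illustration only, not Constantine's code):

```nim
# Given a < 2^e * p, e conditional subtractions of 2^(e-1)*p, ..., 2p, p
# bring a below p, instead of up to 2^e - 1 plain subtractions.
type Limbs = array[5, uint64]

func sub(a: var Limbs, b: Limbs): uint64 =
  ## a -= b; returns the final borrow (0 or 1), branchlessly.
  var borrow = 0'u64
  for i in 0 ..< a.len:
    let d = a[i] - b[i] - borrow
    borrow = ((not a[i] and b[i]) or (not (a[i] xor b[i]) and d)) shr 63
    a[i] = d
  result = borrow

func cadd(a: var Limbs, b: Limbs, ctl: uint64) =
  ## a += b if ctl == 1, without branching on ctl (b is masked).
  let mask = 0'u64 - ctl
  var carry = 0'u64
  for i in 0 ..< a.len:
    let bi = b[i] and mask
    let s = a[i] + bi + carry
    carry = ((a[i] and bi) or ((a[i] or bi) and not s)) shr 63
    a[i] = s

func double(a: var Limbs) =
  ## a *= 2 in place.
  var carry = 0'u64
  for i in 0 ..< a.len:
    let hi = a[i] shr 63
    a[i] = (a[i] shl 1) or carry
    carry = hi

func halve(a: var Limbs) =
  ## a = a div 2 in place.
  var carry = 0'u64
  for i in countdown(a.len - 1, 0):
    let lo = a[i] and 1
    a[i] = (a[i] shr 1) or (carry shl 63)
    carry = lo

func logReduce(a: var Limbs, p: Limbs, e: int) =
  ## Fully reduce a < 2^e * p below p with e conditional subtractions.
  var m = p
  for _ in 1 ..< e:
    m.double()                     # m = 2^(e-1) * p
  for _ in 0 ..< e:
    let borrow = a.sub(m)
    a.cadd(m, borrow)              # undo the subtraction if it borrowed
    m.halve()                      # next smaller multiple of p
```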

Dealing with subtraction

Subtraction of a reduced/unreduced operand by an unreduced operand must be handled specially:

Removing the need for a final subtraction

With lazy reduction we have a redundant representation that can represent 2p, 3p, 4p, ..., 8p if there are enough excess bits. This may help Montgomery multiplication and exponentiation by avoiding the need for the last conditional subtraction (Walter, 1999; Hachez & Quisquater, 2000; Bos & Montgomery, 2017; and many hardware papers like Ors-Batina-Preneel-Vandewalle or Walter, 2017).

Note that Gueron's Almost Montgomery Multiplication, which checks the MSB to know whether a reduction is needed, would leak information; and we would still need to estimate whether we need to remove p, 2p, ..., MaxExcess * p after each subtraction.

Lazy Carry

Lazy carries are (almost) independent from lazy reductions: here the excess bits live within each machine word. This allows replacing

https://github.com/mratsim/constantine/blob/2aec16d8d8358f5c0d0edc1e3f0ccd758a85efb4/constantine/arithmetic/bigints_raw.nim#L313-L324

by

```nim
func add*(a: BigIntViewMut, b: BigIntViewAny) =
  checkMatchingBitlengths(a, b)

  for i in 0 ..< a.numLimbs():
    a[i] += b[i]           # no carry chain between limbs
    a[i] = a[i].mask()     # keep the limb within the logical word size

func modular_add(a: BigIntViewMut, b: BigIntViewAny, m: BigIntViewConst) =
  ## This is pseudocode and would be implemented at the Field level,
  ## not the BigInt level.
  a.add(b)
  if a.potential_carries == MaxPotentialCarries:
    a.normalizeCarries()
```
This removes loop-carried dependencies and enables easy vectorization by the compiler. The only potential limitation is that the compiler cannot infer the loop unrolling, but the function becomes small enough that monomorphization + inlining would probably not increase code size. (Separate functions require parameter passing and stack bookkeeping before the call, after the call, and within the actual function body; this can easily cost 10 to 20 instructions before doing anything useful, while an inlined multi-limb addition + mask on fewer than 20 limbs wouldn't increase code size.)

Lazy carries also enable multiplication by constants smaller than the current word excess without converting them to Montgomery form and using full-blown Montgomery multiplication. This is useful for extension fields that use sqrt(-2) or sqrt(-5) as a root of their irreducible polynomial, as in the example below.
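For example (assuming the 52/12-bit lazy-carry split from earlier; hypothetical helper):

```nim
type Limbs5 = array[5, uint64]     # radix 2^52 limbs, 12 excess bits

func mulSmall(a: var Limbs5, c: uint64) =
  ## a *= c limb-wise, carryless: sound as long as c times the current
  ## limb magnitude fits in 64 bits (e.g. c < 2^12 for limbs below 2^52).
  for i in 0 ..< a.len:
    a[i] = a[i] * c
```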

For normalization of lazy carries there is an optional coupling with lazy reduction (hence the "almost" at the beginning of this section): all the lazy carries must be accumulated somewhere. Either they fit in the extra space of the last word (WordBitSize - ModulusBitSize - WordExcess) and we can handle the resulting unreduced representation like in the lazy reduction case, or we need an extra temporary word.
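Sketched below (hypothetical names, fleshing out the normalizeCarries stub from the earlier sketch), a normalization where the topmost carry lands in the last word's free bits, leaving an unreduced but representable value for the lazy-reduction machinery:

```nim
const
  LogicalBits = 52
  LimbMask    = (1'u64 shl LogicalBits) - 1

func normalizeCarries(limbs: var array[5, uint64]) =
  ## Push accumulated carries upward; the final carry is absorbed by
  ## the spare space of the last word, so no extra temporary word is
  ## needed as long as it fits there.
  var carry = 0'u64
  for i in 0 ..< limbs.len - 1:
    let t = limbs[i] + carry
    limbs[i] = t and LimbMask
    carry = t shr LogicalBits
  limbs[^1] += carry               # lands in the last word's free bits
```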

mratsim commented 4 years ago

Beginning of a proof-of-concept here: https://github.com/mratsim/bignum/commit/2673a043691862dd2cb1c594775260091826488d

Lazy carry

This is very easy to build for addition/subtraction/negation, as we can ensure that, with u = modulus.limbs[i], each limb always stays below 2^ExcessBits * u (in particular for u == 0). With lazy carries, all additions/subtractions/negations are carryless thanks to the extra space per word.
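A well-known concrete instance of this kind of invariant (borrowed from 64-bit curve25519 implementations with 51-bit limbs, shown as an illustration rather than as the proof-of-concept's code): subtraction adds a limb-wise redundant encoding of 2p whose every limb dominates a loosely reduced limb, so no borrow can occur.

```nim
type Fe25519 = array[5, uint64]    # radix 2^51 limbs for p = 2^255 - 19

const TwoP: Fe25519 = [
  0xFFFFFFFFFFFDA'u64,             # 2^52 - 38
  0xFFFFFFFFFFFFE'u64,             # 2^52 - 2
  0xFFFFFFFFFFFFE'u64,
  0xFFFFFFFFFFFFE'u64,
  0xFFFFFFFFFFFFE'u64]             # limb-wise encoding of 2p = 2^256 - 38

func sub(a: var Fe25519, b: Fe25519) =
  ## a <- a + 2p - b: each TwoP limb exceeds any loosely reduced limb
  ## of b (below ~2^51 plus a small excess), so every per-limb
  ## subtraction stays non-negative: no borrow chain is needed.
  for i in 0 ..< a.len:
    a[i] = a[i] + TwoP[i] - b[i]
```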

Full reduction is also straightforward.

However, with multiplication, limb products carry over into the next limb: can the result also exceed the 2^ExcessBits * modulus.limbs[i] bound? Dealing with those cases may introduce a lot of complexity.

Lazy reduction

Lazy reduction, instead, is probably much less error-prone to implement, since only the high word holds the excess carries. It is however quite restricted, as we can only have 2 excess bits for BN254 (4*64 = 256) or 3 for BLS12-381 (6*64 = 384).

mratsim commented 4 years ago

None of the pairing curves have a special form that makes lazy reduction, with logical words smaller than the physical word, worth it. Scott's paper "Slothful reduction" and the logarithmic reduction I mentioned make debugging more complex and just delay the inevitable.

However, in some select cases we can use the technique from the excerpted paper (screenshots not preserved) to delay reduction, but making that constant-time would still require a table lookup that scans all the potential n*p multiples of the field modulus p to select the appropriate one for reduction.

Furthermore, https://github.com/mratsim/constantine/pull/17 adds no-carry Montgomery multiplication and squaring, usable for most pairing-friendly curves (up to 15% perf improvement). Optimizing multiplications and squarings is probably more important than optimizing additions, and forcing a reduction check before each of them is costly. Even if we use a redundant representation with 2 extra bits, allowing values up to 4p for "final subtraction-less" modular multiplication, that may mean paying for the subtraction upfront.

mratsim commented 4 years ago

For posterity, Patrick Longa's PhD thesis dedicates 30 pages to lazy reduction and scheduling techniques to reduce or erase data dependencies.


It is followed by a dedicated section on pairings. Notably, the thesis gives subtraction formulas to canonicalize a lazily reduced field element in the case of double-precision arithmetic (i.e. using only 32 bits out of a 64-bit limb and leaving the extra 32 for carries).


This is followed by multiplication formulas in Fp2, Fp6 and Fp12, and squarings in Fp12, that use lazy reduction:

https://uwspace.uwaterloo.ca/handle/10012/5857

However, the space cost is high, especially in constant-time settings where you have to iterate over all of the space all the time. Using more space also doesn't play well with CPU caches: a cache miss is significantly more expensive than a data dependency. Lastly, while the ADC instruction had a data-dependency cost of 6 cycles at the time, recent Intel CPUs have a data-dependency cost of only 2 cycles (but only 1 execution port, AFAIK) provided you update a register (and not a memory location), and AMD is even less costly.