randombit / botan

Cryptography Toolkit
https://botan.randombit.net
BSD 2-Clause "Simplified" License

Faster modular inversion #1479

Open randombit opened 6 years ago

randombit commented 6 years ago

At this point modular inversion is one of the major bottlenecks in ECDSA signature generation, at about 30% of the total runtime. Two inversions are required, one for the nonce and the other to convert the point to affine.

Niels Möller sent me an email contending that the fastest approach for const time inversion at ECC sizes is Fermat's little theorem. However, in Botan with P-521 the const time inversion algorithm (which Niels invented) is over twice as fast. So maybe the issue is (also) that our modular exponentiation algorithm is too slow.
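For reference, the Fermat approach is a single modular exponentiation. A minimal Python sketch (Python's pow is not constant time, so this only illustrates the math, not a safe implementation):

```python
# x^-1 = x^(p-2) (mod p) for prime p, by Fermat's little theorem.
p = 2**521 - 1          # the P-521 field prime (a Mersenne prime)
x = 0x1234567890ABCDEF
inv = pow(x, p - 2, p)  # one modexp; const time only if the modexp is
assert (x * inv) % p == 1
```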

I ran some quick checks: OpenSSL seems to take ~210k cycles for const time modular inversion, vs 360k cycles in ct_inverse_mod_odd_modulus. Simply matching that would improve ECDSA signature perf by ~15%.

randombit commented 6 years ago

WRT using Fermat's little theorem, it would be beneficial to use the known addition chains for computing x^(p-2) % p. https://briansmith.org/ecc-inversion-addition-chains-01#p256_scalar_inversion has chains for inversion mod the order and mod p for P-256 and P-384.

randombit commented 6 years ago

https://eprint.iacr.org/2014/852.pdf has an addition chain for P-521 inversion.
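The trick those chains exploit can be shown at toy scale: x^(2^(a+b) - 1) is built from x^(2^a - 1) and x^(2^b - 1) with b squarings plus one multiply, so for a Mersenne-style p only a handful of multiplies are needed on top of the squarings. A small Python demo with p = 2^13 - 1 (this shows the idea only, not the exact chain from either paper):

```python
# Toy addition-chain inversion for the Mersenne prime p = 2^13 - 1.
# Building block: x^(2^(a+b) - 1) = (x^(2^a - 1))^(2^b) * (x^(2^b - 1)),
# i.e. b squarings plus one extra multiply.
p = 2**13 - 1
x = 1234

def pow2k(y, k):
    # y^(2^k) mod p, via k squarings
    for _ in range(k):
        y = y * y % p
    return y

x3    = pow2k(x, 1) * x % p        # x^(2^2 - 1)
x7    = pow2k(x3, 1) * x % p       # x^(2^3 - 1)
x15   = pow2k(x3, 2) * x3 % p      # x^(2^4 - 1)
x255  = pow2k(x15, 4) * x15 % p    # x^(2^8 - 1)
x2047 = pow2k(x255, 3) * x7 % p    # x^(2^11 - 1)
inv   = pow2k(x2047, 2) * x % p    # x^(2^13 - 3) = x^(p - 2) = x^-1
assert (x * inv) % p == 1
```

Six multiplies besides the squarings, versus roughly one multiply per set bit of p-2 for plain square-and-multiply; the same composition rule underlies the P-256/P-384/P-521 chains linked above.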

randombit commented 6 years ago

With #1546 and #1547 we have faster field inversions for P-256, P-384, and P-521, all of which improved ECDSA signature performance by ~10-15%. ECDSA verification and ECDH also improved but not as much (% wise) because getting the affine coordinate is less of the total runtime there.

We may also want specialized inversions modulo the curve order. But this doesn't really seem worthwhile because the only algorithm that benefits is ECDSA. It would be better at this point to work on improving the performance of the generic const time modular inversion, which would have a lot of benefits across the codebase.

kriskwiatkowski commented 5 years ago

Have you looked at https://eprint.iacr.org/2019/266.pdf already? It seems to me there's a chance it could be useful. Any opinion?

randombit commented 5 years ago

@henrydcase I have seen that paper but currently don't understand at all how the algo works. It seems much faster, eg they report for inversion modulo the 511-bit M-511 prime 30K Skylake cycles, while Botan's const time algo takes 200K+ Skylake cycles for a randomly chosen 512-bit prime. And the paper claims "Our advantage is also larger in applications that use “random” primes rather than special primes" [vs Fermat]. So overall certainly promising. Even if we assume a 5x slowdown going from DJB hand coded asm to C++ that's still a decent speedup.

Also our current const time algorithm only works for odd modulus which means the generation of d during RSA keygen is not protected.

That paper also has (Figure 1.2) a fast algorithm for gcd that looks simple to make const time, which would be useful to replace our current "mostly" const time gcd.
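A minimal variable-time sketch of that divstep-style gcd in Python (the name divstep_gcd is mine; a constant-time version would run the paper's fixed iteration bound instead of looping until g is zero, and replace the branches with masked swaps):

```python
def divstep_gcd(f, g):
    # gcd of an odd f and an arbitrary g via repeated divsteps:
    # conditionally swap-and-negate, then add and halve.
    assert f & 1
    delta = 1
    while g != 0:
        if delta > 0 and g & 1:
            delta, f, g = -delta, g, -f
        g0 = g & 1
        delta, g = 1 + delta, (g + g0 * f) // 2
    return abs(f)
```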

mratsim commented 4 years ago

> @henrydcase I have seen that paper but currently don't understand at all how the algo works. It seems much faster, eg they report for inversion modulo the 511-bit M-511 prime 30K Skylake cycles, while Botan's const time algo takes 200K+ Skylake cycles for a randomly chosen 512-bit prime. And the paper claims "Our advantage is also larger in applications that use “random” primes rather than special primes" [vs Fermat]. So overall certainly promising. Even if we assume a 5x slowdown going from DJB hand coded asm to C++ that's still a decent speedup.
>
> Also our current const time algorithm only works for odd modulus which means the generation of d during RSA keygen is not protected.
>
> That paper also has (Figure 1.2) a fast algorithm for gcd which looks simple to make const time, which would be useful to replace our current "mostly" const time gcd.

The Bernstein-Yang algorithm also only works for odd moduli.

Note: I'm not a mathematician and I haven't tried to implement it yet because it's still over my head, but here is a start on what I understood, the challenges, and a naive analysis of what the speed of the algorithm could be.

  1. The algorithm as described in the paper works on triples (δ, f, g), mapping ℤ × ℤ₂* × ℤ₂ → ℤ × ℤ₂* × ℤ₂
    • with ℤ including the negative numbers,
    • ℤ₂ the ring of 2-adic integers. In practice the denominators introduced are only powers of 2 (from the repeated halving), so at the very least this can be represented by a counter of delayed halvings.
    • ℤ₂*, the invertible (odd) elements of ℤ₂.
  2. Each step applies a transition matrix that swaps/halves/adds/negates:
# Sage code: ZZ, QQ, MatrixSpace and Integers(...) are Sage builtins,
# and ^ is exponentiation.
def truncate(f, t):
    # two's-complement truncation of f to its t low bits
    if t == 0: return 0
    twot = 1 << (t - 1)
    return ((f + twot) & (2 * twot - 1)) - twot

def divsteps2(n, t, delta, f, g):
    # apply n divsteps to (delta, f, g) truncated to t bits, returning
    # the final state and the accumulated 2x2 transition matrix
    assert t >= n and n >= 0
    f, g = truncate(f, t), truncate(g, t)
    u, v, q, r = 1, 0, 0, 1
    while n > 0:
        f = truncate(f, t)
        if delta > 0 and g & 1:
            delta, f, g, u, v, q, r = -delta, g, -f, q, r, -u, -v
        g0 = g & 1
        delta, g = 1 + delta, (g + g0 * f) / 2
        q, r = (q + g0 * u) / 2, (r + g0 * v) / 2
        n, t = n - 1, t - 1
        g = truncate(ZZ(g), t)
    M2Q = MatrixSpace(QQ, 2)
    return delta, f, g, M2Q((u, v, q, r))

def iterations(d):
    return (49 * d + 80) // 17 if d < 46 else (49 * d + 57) // 17

def recip2(f, g):
    ## Compute g^-1 mod f: f MUST be odd
    assert f & 1
    d = max(f.nbits(), g.nbits())
    m = iterations(d)
    print(f'm: {m}')
    precomp = Integers(f)((f + 1) / 2) ^ (m - 1)
    print(f'precomp: {precomp}')
    delta, fm, gm, P = divsteps2(m, m + 1, 1, f, g)
    print(f'P[0][1]: {P[0][1]}')
    V = sign(fm) * ZZ(P[0][1] * 2 ^ (m - 1))
    return ZZ(V * precomp)

Implementation challenges

As it stands, the paper is hard to implement naively in a cryptographic library.

Analysis

This is my naive analysis; unfortunately I couldn't find an optimized implementation. It is contrasted with the algorithm from Niels Möller that you use. (Note: I've corrected the algorithm from the paper, which incorrectly initializes v to 0.)

Input: integer x, odd integer n, x < n
Output: x⁻¹ (mod n)
1:   function ModInv(x, n)
2:   (a, b, u, v) ← (x, n, 1, 0)
3:   ℓ ← ⌊log2 n⌋ + 1            ⮚ number of bits in n
4:   for i ← 0 to 2ℓ − 1 do
5:     odd ← a & 1
6:     if odd and a ≥ b then
7:       a ← a − b
8:     else if odd and a < b then
9:       (a, b, u, v) ← (b − a, a, v, u)
10:    a ← a >> 1
11:    if odd then u ← u − v
12:    if u < 0 then u ← u + n
13:    if u & 1 then u ← u + n
14:    u ← u >> 1
15:  return v
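A direct Python transliteration of the pseudocode above, as a sanity check (the name modinv_moller is mine; this is variable time as written, whereas the const time C version replaces each branch with a masked select):

```python
def modinv_moller(x, n):
    # Compute x^-1 mod n for odd n with gcd(x, n) == 1, following the
    # pseudocode: a fixed 2*l iterations of a Stein-style halving loop.
    a, b, u, v = x, n, 1, 0
    l = n.bit_length()
    for _ in range(2 * l):
        odd = a & 1
        if odd and a >= b:
            a = a - b
        elif odd and a < b:
            a, b, u, v = b - a, a, v, u
        a >>= 1
        if odd:
            u = u - v
        if u < 0:
            u = u + n
        if u & 1:
            u = u + n
        u >>= 1
    return v
```

e.g. modinv_moller(3, 7) returns 5, and 3 * 5 ≡ 1 (mod 7).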

The first thing that jumps out at me is that the number of iterations is scaled by 49/17 ≈ 2.88 times the number of bits, while Möller's algorithm (which seems to be an adaptation of Stein's binary GCD for constant time) runs a fixed 2 * number of bits iterations.
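Concretely, plugging common ECC sizes into the iteration formula from the listing above versus the 2*bits loop (the formula is copied from iterations(); the rest is just arithmetic):

```python
def by_iterations(d):
    # divstep count proved sufficient in the Bernstein-Yang paper
    return (49 * d + 80) // 17 if d < 46 else (49 * d + 57) // 17

for d in (256, 384, 521):
    print(d, by_iterations(d), 2 * d)
# 256 -> 741 vs 512, 384 -> 1110 vs 768, 521 -> 1505 vs 1042
```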

The ZZ(P[0][1] * 2^(m - 1)) multiplication seems to be quite expensive if taken literally.

Alternative implementations

  1. Did you see the implementation by Aranha in Relic? https://github.com/relic-toolkit/relic/blob/5cdabd57/src/fp/relic_fp_inv.c#L402.

  2. There is also a formally verified implementation in Fiat Crypto at https://github.com/mit-plv/fiat-crypto/pull/670, and here is the code generated for the BLS12-381 prime: https://github.com/relic-toolkit/relic/blob/5cdabd57/src/low/x64-fiat-381/bls12_381_q_64.c#L2623 and the wrapper https://github.com/relic-toolkit/relic/blob/5cdabd57/src/low/x64-fiat-381/relic_fp_inv_low.c#L52-L78

Other algorithms

Besides the Bernstein-Yang paper, the most recent papers on constant-time inversion are:

Both cost 2 * number of bits for the main loop, but the intermediate steps seem much more complex than Möller's algorithm.

randombit commented 4 years ago

@mratsim Very helpful thank you. I had not seen either of the other implementations you reference.

I had earlier read Bos' paper on const-time Montgomery inversion, but it does not seem to me a promising approach: the (not constant-time) implementation of this algorithm in almost_montgomery_inverse/normalized_montgomery_inverse (https://github.com/randombit/botan/blob/master/src/lib/math/numbertheory/mod_inv.cpp#L15) is (IIRC) about 1/2 the performance of the constant time code, and Bos reports almost an 8x slowdown when moving to constant time (Table 1, page 6). Compared to Möller, the loop iteration count is somewhat less than 2*bits (seems to be about 1.4), but the operation count in each loop is easily twice that.

mratsim commented 4 years ago

Regarding speed you are probably aware of this but GCC is absolutely horrible at handling multiprecision arithmetic.

For my elliptic curve library, this is the speed I get on field operations with Clang (inversion using Möller's algorithm) on Ethereum/Blockchain related elliptic curves.

0 means that the compiler optimized the operation away (I tried some volatile reads/writes, but I don't want to slow down the bench, and I'm more interested in multiplication/squaring/inversion anyway).

⚠️ Measurements are approximate and use the CPU nominal clock: Turbo-Boost and overclocking will skew them.
==========================================================================================================

All benchmarks are using constant-time implementations to protect against side-channel attacks.

Compiled with Clang
Running on Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz
! Overclocked all-core turbo @4.1GHz -> CPU clock is not nominal clock, comparisons only valid on this CPU.

--------------------------------------------------------------------------------
Addition        Fp[BN254]               0 ns         0 cycles
Substraction    Fp[BN254]               0 ns         0 cycles
Negation        Fp[BN254]               0 ns         0 cycles
Multiplication  Fp[BN254]              22 ns        67 cycles
Squaring        Fp[BN254]              18 ns        55 cycles
Inversion       Fp[BN254]            6254 ns     18763 cycles
--------------------------------------------------------------------------------
Addition        Fp[Secp256k1]           0 ns         0 cycles
Substraction    Fp[Secp256k1]           0 ns         0 cycles
Negation        Fp[Secp256k1]           0 ns         0 cycles
Multiplication  Fp[Secp256k1]          22 ns        68 cycles
Squaring        Fp[Secp256k1]          20 ns        61 cycles
Inversion       Fp[Secp256k1]        6303 ns     18910 cycles
--------------------------------------------------------------------------------
Addition        Fp[BLS12_381]           0 ns         0 cycles
Substraction    Fp[BLS12_381]           0 ns         0 cycles
Negation        Fp[BLS12_381]           0 ns         0 cycles
Multiplication  Fp[BLS12_381]          46 ns       138 cycles
Squaring        Fp[BLS12_381]          39 ns       118 cycles
Inversion       Fp[BLS12_381]       15659 ns     46979 cycles
--------------------------------------------------------------------------------

And GCC

⚠️ Measurements are approximate and use the CPU nominal clock: Turbo-Boost and overclocking will skew them.
==========================================================================================================

All benchmarks are using constant-time implementations to protect against side-channel attacks.

Compiled with GCC
Running on Intel(R) Core(TM) i9-9980XE CPU @ 3.00GHz

--------------------------------------------------------------------------------
Addition        Fp[BN254]               5 ns        17 cycles
Substraction    Fp[BN254]               3 ns        10 cycles
Negation        Fp[BN254]               2 ns         6 cycles
Multiplication  Fp[BN254]              32 ns        98 cycles
Squaring        Fp[BN254]              29 ns        88 cycles
Inversion       Fp[BN254]            7893 ns     23679 cycles
--------------------------------------------------------------------------------
Addition        Fp[Secp256k1]           5 ns        17 cycles
Substraction    Fp[Secp256k1]           3 ns        10 cycles
Negation        Fp[Secp256k1]           2 ns         6 cycles
Multiplication  Fp[Secp256k1]          34 ns       104 cycles
Squaring        Fp[Secp256k1]          33 ns        99 cycles
Inversion       Fp[Secp256k1]        7954 ns     23862 cycles
--------------------------------------------------------------------------------
Addition        Fp[BLS12_381]           9 ns        27 cycles
Substraction    Fp[BLS12_381]           5 ns        16 cycles
Negation        Fp[BLS12_381]           3 ns        10 cycles
Multiplication  Fp[BLS12_381]          62 ns       188 cycles
Squaring        Fp[BLS12_381]          57 ns       172 cycles
Inversion       Fp[BLS12_381]       28863 ns     86591 cycles
--------------------------------------------------------------------------------

For inversion GCC is up to almost 2x slower than Clang (1.26x for BN254, 1.84x for BLS12-381).

This is something that is not unique to my library, I reported the same to the fiat-crypto project: https://github.com/mit-plv/fiat-crypto/issues/691

Carries

In particular it does not handle carries properly (see https://gcc.godbolt.org/z/2h768y), even when using the real _addcarry_u64 intrinsic:

#include <stdint.h>
#include <x86intrin.h>

// 4-limb (256-bit) in-place addition a += b, carry chained across limbs
void add256(uint64_t a[4], uint64_t b[4]){
  uint8_t carry = 0;
  for (int i = 0; i < 4; ++i)
    carry = _addcarry_u64(carry, a[i], b[i], &a[i]);
}

GCC

add256:
        movq    (%rsi), %rax
        addq    (%rdi), %rax
        setc    %dl
        movq    %rax, (%rdi)
        movq    8(%rdi), %rax
        addb    $-1, %dl
        adcq    8(%rsi), %rax
        setc    %dl
        movq    %rax, 8(%rdi)
        movq    16(%rdi), %rax
        addb    $-1, %dl
        adcq    16(%rsi), %rax
        setc    %dl
        movq    %rax, 16(%rdi)
        movq    24(%rsi), %rax
        addb    $-1, %dl
        adcq    %rax, 24(%rdi)
        ret

Clang

add256:
        movq    (%rsi), %rax
        addq    %rax, (%rdi)
        movq    8(%rsi), %rax
        adcq    %rax, 8(%rdi)
        movq    16(%rsi), %rax
        adcq    %rax, 16(%rdi)
        movq    24(%rsi), %rax
        adcq    %rax, 24(%rdi)
        retq

gmaxwell commented 4 years ago

> one for the nonce and the other to convert the point to affine.

When an inverse is extremely expensive compared to a field multiply, one thing to consider is that modular inversions are extremely easy to blind perfectly instead of making them constant time. At worst you simply do a batch inversion with a random value in the batch, though sometimes, as for the projection from Jacobian to affine, you can blind even more efficiently.
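A sketch of the blinding trick (the function name and the use of Python's variable-time pow are mine, for illustration):

```python
import secrets

def blinded_inverse(a, p):
    # Multiply by a random r, invert the (uniformly random) product with
    # any inversion routine, even a variable-time one, then unblind:
    # (a*r)^-1 * r = a^-1 (mod p). The inverter's timing is independent
    # of the secret a.
    r = 1 + secrets.randbelow(p - 1)   # uniform in [1, p-1]
    c = (a * r) % p
    c_inv = pow(c, -1, p)              # variable time is fine here
    return (c_inv * r) % p
```

The inverter only ever sees the random product a*r, so even a leaky gcd reveals nothing about a; the cost is two extra multiplies plus the random draw.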

It may well be that the cost of blinding, including generating a 'random number' (e.g. a hash of the nonce) for it, is greater than the cost of just using a sufficiently good(tm) constant time inversion. But that should probably be the comparison point, especially given that a fast constant time inversion is complicated to implement and validate while blinding is less so.

> ECDSA verification and ECDH also improved but not as much (% wise) because getting the affine coordinate is less of the total runtime there.

ECDSA verification can be done without projecting back to affine. Instead you can project the affine R provided by the signature, which doesn't require any inversion. Some care is required to correctly handle the modular reduction of r by the signer.
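The inversion-free comparison amounts to cross-multiplying: with the computed point in Jacobian coordinates (X, Y, Z), the affine x is X/Z² mod p, so instead of inverting Z one checks r·Z² ≡ X (mod p). A tiny sketch (r_matches is my name; a full verifier must also try r + n when that is below p, to handle the signer's reduction of x mod n):

```python
def r_matches(r, X, Z, p):
    # affine x of Jacobian (X, Y, Z) is X / Z^2 mod p; compare without
    # inverting Z by cross-multiplying
    return (r * Z * Z - X) % p == 0

# e.g. over p = 23: a point with X = 3, Z = 2 has affine x = 3 * 4^-1 = 18
assert r_matches(18, 3, 2, 23)
assert not r_matches(17, 3, 2, 23)
```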

The only inversion needed in ECDSA validation is the scalar inversion of the incoming s, because the standards foolishly don't have the signer do it for you. :)