sigstore / rekor

Software Supply Chain Transparency Log
https://sigstore.dev
Apache License 2.0

Quantum-computing-proofing Rekor #932

Open trishankatdatadog opened 2 years ago

trishankatdatadog commented 2 years ago

Description

Now that NIST has announced the first few quantum-resistant cryptographic algorithms, it might be useful to start discussing whether it's worthwhile to quantum-proof Rekor (if it isn't already).

Cc @SantiagoTorres, @mnm678, and @znewman01 who might be especially interested in making this work and scale in practice.

znewman01 commented 2 years ago

CC @haydentherapper; see also https://blog.sigstore.dev/is-sigstore-ready-for-a-post-quantum-world-82c9166985af

Here's my claim:

  1. It's premature to cut over until the finalists are standardized, and there's good support in libraries that are audited, idiomatic, etc.
  2. If we assume that there are no quantum computers capable of forging signatures today, but there will be in 5 years, it's actually not that important to cut over before, say, 4 years from now, because we're planning to shard temporally (year-by-year) anyway. It won't be possible to stick a new entry in the Rekor log from last year, because its SHA2 digest (which is quantum-resistant) is fixed. You could forge a signature on a new tree head using the Rekor(last year) key if that's using an old algorithm, but we probably want to have Rekor(this year) sign off on all the previous tree heads.

So I think we can definitely wait until the standardization process is over.

Is there value in preparing before then? Perhaps: we should be prepared for a break in any of the core primitives used in Sigstore. However, there's a real cost to cryptographic agility in terms of complexity (which often becomes a security cost).

haydentherapper commented 2 years ago

If all that needs to change is the signing algorithm, I don't think there's much to discuss. Once there is a finalized suite, there are good, audited, battle-hardened libraries, and we begin to see the industry shift towards these algorithms, we can switch over. I also assume GCP KMS will support quantum-resistant keys in the near future (AWS started a while ago, but that was long before the NIST announcement; I don't recall which algorithms they picked).

znewman01 commented 2 years ago

FWIW the NSA is now (as of today) recommending all software signatures use quantum-resistant algorithms. Specifically:

NSA’s current view on timing is as follows:

  • Software- and firmware-signing: begin transitioning immediately, support and prefer CNSA 2.0 by 2025, and exclusively use CNSA 2.0 by 2030.

What can developers and programs do to prepare for a future quantum resistant algorithm suite?

A: AES-256, SHA-384, SHA-512, and the NIST hash-based signatures listed in NIST SP 800-208 are considered safe against attack by a large quantum computer. Developers should deploy these algorithms today. They should also begin implementing the other quantum-resistant algorithms NIST and NSA chose and provide feedback about any issues they discover.

CNSA 2.0 here includes:

In particular, NIST SP 800-208 refers to the latter two.

Maybe we could do the following to prepare:

TBH this feels like jumping the gun to me, it's going to cause chaos to tell people "you gotta switch in 3 years but there aren't any approved implementations yet." I still stand by my prior statement:

it's actually not that important to cut over before, say, 4 years from now because we're planning to shard temporally (year-by-year) anyway. It won't be possible to stick a new entry in the Rekor from last year, because its SHA2 (which is quantum-resistant) is fixed.

That statement doesn't apply to individual signatures inside Rekor, however. We should enable folks who want PQ to do that there ASAP.

EDIT: to be clear, the NSA recs are for "national security systems," not average software. But still an interesting data point.

haydentherapper commented 2 years ago

I'm not very familiar with these schemes, but from what I understand, there are significant tradeoffs to each, and none are a perfect candidate. (Edit: I know one significant tradeoff is around the speed of verification, so I want to be hesitant about throwing in unvetted libraries that handle verification and opening Rekor up to denial of service)

I agree that with our sharded design, we should be able to react quickly and don't need to frontload this work.

Rather than beginning to support these algorithms, I think we should make sure we're designing our APIs such that we can add support without making breaking API changes or breaking clients. For example, SHA256 for merkle tree digests is hardcoded all over the place.
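One way to avoid that hardcoding is to dispatch on an algorithm identifier carried alongside the data. This is a hypothetical sketch (not Rekor's actual API; the registry name and function are made up for illustration):

```python
import hashlib

# Hypothetical registry sketch: the algorithm identifier travels with the
# data, so a future PQ-safe digest can be registered later without breaking
# existing clients or previously logged entries.
DIGEST_ALGORITHMS = {
    "sha256": hashlib.sha256,
    "sha512": hashlib.sha512,
}

def compute_digest(algorithm: str, data: bytes) -> bytes:
    """Dispatch on an algorithm identifier instead of hardcoding SHA256."""
    try:
        ctor = DIGEST_ALGORITHMS[algorithm]
    except KeyError:
        raise ValueError(f"unsupported digest algorithm: {algorithm}")
    return ctor(data).digest()
```

The point is API shape, not the registry itself: as long as clients and stored entries name their algorithm explicitly, adding a new one is additive rather than breaking.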

trishankatdatadog commented 2 years ago

Wow, talk about breaking news. Yes, I agree with Hayden that we should provide "crypto agility" despite the unfortunate reputation it suffers among certain security circles (a subject I plan to write about soon).

znewman01 commented 2 years ago

I'm not very familiar with these schemes, but from what I understand, there are significant tradeoffs to each, and none are a perfect candidate.

That's putting it nicely :) key sizes are huge, signing is slow, verification is slow.

(Edit: I know one significant tradeoff is around the speed of verification, so I want to be hesitant throwing in unvetted libraries that handle verification and opening Rekor up to denial of service) [...] Rather than beginning to support these algorithms, I think we should make sure we're designing our APIs such that we can add support without making breaking API changes or breaking clients. For example, SHA256 for merkle tree digests is hardcoded all over the place.

Yes, I agree with Hayden that we should provide "crypto agility"

Yeah, that's really what I meant by "experimental support": not pushing to prod, just making sure we were ready to make the transition. Seems like we're in agreement on that.

znewman01 commented 1 year ago

@mnm678 had a good thought regarding post-quantum Sigstore!

A lot of the complexity around SPHINCS+ and similar hash-based signature schemes has to do with:

  1. Keeping state at the signer is annoying, but getting around the need for state is complicated.
  2. Using these for many signatures requires big key sizes.

If you have single-use signatures, both of those issues go away! So we could even use something like vanilla Lamport signatures. These are generally considered impractical, because when would you ever use a key only once? 😄 This has a number of advantages.
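To make the idea concrete, here is a toy sketch of vanilla Lamport one-time signatures over SHA-256 (illustration only, not production code):

```python
import hashlib
import secrets

N = 32  # hash output size in bytes (256 bits)

def keygen():
    # Private key: two random 32-byte values per digest bit
    # (2 * 256 * 32 bytes = 16 KiB).
    sk = [[secrets.token_bytes(N) for _ in range(2)] for _ in range(8 * N)]
    # Public key: the hash of every private value (also 16 KiB).
    pk = [[hashlib.sha256(v).digest() for v in pair] for pair in sk]
    return sk, pk

def _bits(msg):
    # The 256 bits of the message digest, most significant bit first.
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(8 * N)]

def sign(sk, msg):
    # Reveal one of the two private values per bit (256 * 32 bytes = 8 KiB).
    # CRITICAL: a Lamport key must never sign a second message.
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(pk, msg, sig):
    # Hash each revealed value and compare against the published hash
    # for the corresponding digest bit.
    return all(hashlib.sha256(v).digest() == pk[i][b]
               for i, (v, b) in enumerate(zip(sig, _bits(msg))))
```

The sizes fall straight out of the construction: 16 KiB private and public keys and 8 KiB signatures, matching the figures below.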

Security

The only security assumption is collision resistance of a hash function, like SHA256, which we need anyway.

Key/signature sizes

We'd have 8 KiB signatures, 16 KiB private keys, and 16 KiB public keys (for 256-bit hash inputs/outputs; the scale-up to use something like SHA2-384 is perfectly proportional to the number of bits).

If we want a smaller public key, we can trade off signing/verification time by replacing the public key with a Merkle tree root (256 bits). Signatures are bigger by about an order of magnitude, I think.

If we want a smaller private key size, we can introduce another computational assumption and replace the private key with a pseudorandom number generator.
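The Merkle-tree trick for shrinking the public key can be sketched as follows: hash the 512 per-bit public values down to a single 32-byte root, which becomes the public key; a signature then also has to carry each revealed value's authentication path (9 sibling hashes for 512 leaves), which is roughly where the order-of-magnitude signature growth comes from. A toy sketch:

```python
import hashlib

def merkle_root(leaves):
    # leaves: a list of 32-byte hashes whose length is a power of two
    # (512 for the Lamport public key discussed above).
    level = list(leaves)
    while len(level) > 1:
        # Pair up adjacent nodes and hash each pair into the next level.
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]  # 32-byte public key, down from 16 KiB
```

Verification then recomputes the root from each revealed value plus its authentication path and compares it to the published root.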

Performance

SHA256 is fast (on my M1 Mac, but this is true for all recent computers):

$ openssl speed sha256 2>&1 | grep "256 size"
Doing sha256 for 3s on 256 size blocks: 10577258 sha256's in 2.97s

So...300ns per hash!

Here are my estimates for performance, based purely on crypto ops:

| Operation | Phase | Time |
| --- | --- | --- |
| Key generation | Random numbers (16 KiB) | negligible |
| Key generation | 512 hashes | 140 μs |
| Signing | Concatenating 256 hashes | negligible |
| Verification | 256 hashes | 70 μs |
| Verification | Equality check over 8 KiB | negligible |

This is substantially faster (3 orders of magnitude) than the numbers reported for SPHINCS+ in the NIST round 3 report (fig. 11, p. 89).
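A quick sanity check of the key-generation estimate (512 hashes), with the caveat that in an interpreted language per-call overhead dominates, so expect somewhat more than the raw ~300 ns/hash openssl figure:

```python
import hashlib
import time

def time_hashes(n=512, block_size=32):
    """Time n SHA-256 invocations over small blocks, as in Lamport keygen."""
    data = b"\x00" * block_size
    start = time.perf_counter()
    for _ in range(n):
        hashlib.sha256(data).digest()
    return time.perf_counter() - start

elapsed = time_hashes()
print(f"512 hashes took {elapsed * 1e6:.0f} us")
```

Even with interpreter overhead this stays comfortably sub-millisecond, consistent with the estimates in the table above.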

SantiagoTorres commented 1 year ago

Lamport

Not entirely sure either, but we could also consider WOTS+ or so, which I think has implementations lying around.

dlorenc commented 1 year ago

Maybe I'm missing something, but if we were only going to use keys once that would work but it would still leave the root CA using traditional crypto, right?

The root cert signs the leaf cert using its private key (this has to happen many times). The leaf cert signs the actual artifact using its private key (this can just happen once).

If we make the second part there use one-time signatures what would we do for the first part?

znewman01 commented 1 year ago

Yeah the one time use was just for client keys. That’s the really important part for the long term—the transparency log binds the keys to a timestamp using symmetric crypto. Rekor/Fulcio need to cut over to (multi use) keys by the time we’re worried about quantum attacks, but user keys should move over before.


trishankatdatadog commented 1 year ago

Yeah the one time use was just for client keys. That’s the really important part for the long term—the transparency log binds the keys to a timestamp using symmetric crypto. Rekor/Fulcio need to cut over to (multi use) keys by the time we’re worried about quantum attacks, but user keys should move over before.

Thanks for clarifying. In that context, this is a great finding thanks to a classic result.

znewman01 commented 1 year ago

See this design for building this out: https://docs.google.com/document/d/1-wF1t6lmJO37BzStbyz8ZkDhNm86HyvKZSaDl7uyHxI/edit# (may need to join sigstore-dev@ for access)