cfrg / draft-irtf-cfrg-opaque

The OPAQUE Asymmetric PAKE Protocol
https://cfrg.github.io/draft-irtf-cfrg-opaque/draft-irtf-cfrg-opaque.html

HKDF-Expand limits #132

Closed chris-wood closed 3 years ago

chris-wood commented 3 years ago

RFC 5869 limits the output length of Expand to 255*HashLen, but the envelope ciphertext has a larger capacity. (In theory, HKDF can produce any number of bytes, but we should abide by the interface limitations.)
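For concreteness, the cap is 255*HashLen octets because the Expand block counter is a single octet. A quick check of the resulting limits (plain Python, nothing OPAQUE-specific):

```python
import hashlib

# RFC 5869 caps a single HKDF-Expand output at 255 * HashLen octets,
# since the internal block index T(1)..T(N) is one octet wide.
for name in ("sha256", "sha512"):
    hash_len = hashlib.new(name).digest_size
    print(f"{name}: {255 * hash_len} bytes")  # sha256: 8160, sha512: 16320
```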

kevinlewi commented 3 years ago

Do you have a suggestion for how we should address this? The only way in which our outputs might need to be variable size would be if we support super long key lengths, right? But this doesn't seem like it would happen based on the configurations / ciphersuites we plan to support.

Unless I am missing something?

kevinlewi commented 3 years ago

After some offline discussion: in order to address this, we will limit all opaque key types to be at most 255 bytes instead of 2^16-1.

rot256 commented 3 years ago

That is not enough for a 2048-bit RSA key.

Each public and private key value is an opaque byte string, specific to the AKE protocol in which OPAQUE is instantiated. For example, if used as raw public keys for TLS 1.3 [RFC8446], they may be RSA or ECDSA keys as per [RFC7250].

Does this mean that the spec bans the use of RSA?

chris-wood commented 3 years ago

That is not enough for a 2048-bit RSA key.

This is an interesting point, but as I see it, one could just seed a PRNG with the secret bytes in the envelope. (This is related to #84.) Of course, I Am Not A Cryptographer, so perhaps this is bad practice? 🤷

bytemare commented 3 years ago

Deterministic generation of RSA keys is non-standard, and the tricks and hacks to make it work are quite "roll your own".

kevinlewi commented 3 years ago

@chris-wood : I think the current direction we are taking is to not support RSA keys, which means that the text that @rot256 linked to should be removed. Thoughts?

The group representations we are supporting are listed out in the Configurations section (https://github.com/cfrg/draft-irtf-cfrg-opaque/blob/master/draft-irtf-cfrg-opaque.md#configurations-configurations) and those should all fit within the 255-byte limit.

Edit: Additionally, in response to:

one could just seed a PRNG with the secret bytes in the envelope

I am not so much a fan of allowing for flexibility in what goes in the secret bytes in the envelope, and hence I do not see a need for considering this. In my opinion, applications that wish to use the secrecy provided by OPAQUE can do so with the export_key parameter.

chris-wood commented 3 years ago

@chris-wood : I think the current direction we are taking is to not support RSA keys, which means that the text that @rot256 linked to should be removed. Thoughts?

Yep, I think that's right. RSA is incompatible with 3DH anyway. For the SIGMA-I variant, we would need to make it clear that the envelope structure or its encryption mechanism would need to change. (Given that we don't have a use case for SIGMA-I right now, I think I'm fine with this outcome.)

@hugokraw, one consequence of the change proposed in this issue is that it would limit OPAQUE-EA instantiations to non-RSA keys. I quite like this as a forcing function away from RSA, though I'm curious to hear your thoughts.

I am not so much a fan of allowing for flexibility in what goes in the secret bytes in the envelope, and hence I do not see a need for considering this. In my opinion, applications that wish to use the secrecy provided by OPAQUE can do so with the export_key parameter.

I wasn't envisioning this being a flexible thing. I was suggesting that, rather than having the envelope store the raw private key, it would store a seed used to deterministically derive the key. In any case, that's a separate change if we are to make it, likely as part of #84.

hugokraw commented 3 years ago

RFC 5869 limits the output length of Expand to 255*HashLen, but the envelope ciphertext has a larger capacity. (In theory, HKDF can produce any number of bytes, but we should abide by the interface limitations.)

This is an example of a tradeoff between simplicity and flexibility for future use. I am all for simplicity, but extensibility mechanisms have always proved useful in long-lived protocols. The discussion we are having here on the info fields illustrates this point.

Philosophy aside, one can generate more than 255 octets with HKDF-Expand by calling HKDF-Expand(prk, "Block 1"), HKDF-Expand(prk, "Block 2"), etc., where each block contains 255 bytes.
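A sketch of this block-labelled chaining, using the per-call cap of 255*HashLen (as clarified later in the thread); the helper names and "Block n" labels are illustrative, not from the draft:

```python
import hashlib
import hmac

HASH_LEN = hashlib.sha256().digest_size   # 32
MAX_PER_CALL = 255 * HASH_LEN             # RFC 5869 cap for a single Expand call

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """One HKDF-Expand call (RFC 5869, Section 2.3); requires length <= 255*HashLen."""
    okm, t = b"", b""
    for i in range(1, -(-length // HASH_LEN) + 1):   # ceil(length / HashLen) blocks
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

def hkdf_expand_chained(prk: bytes, info: bytes, length: int) -> bytes:
    """Chain Expand calls with distinct block labels to exceed the per-call cap."""
    out, block = b"", 1
    while len(out) < length:
        n = min(MAX_PER_CALL, length - len(out))
        out += hkdf_expand(prk, info + f" Block {block}".encode(), n)
        block += 1
    return out
```

With SHA-256, for example, a 20,000-byte output would take three chained Expand calls.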

hugokraw commented 3 years ago

After some offline discussion: in order to address this, we will limit all opaque key types to be at most 255 bytes instead of 2^16-1.

This may not be sufficient for some post-quantum algorithms, including lattice-based constructions and hash-based signatures, or for RSA (even RSA-2048 needs 256 bytes). There is always the option to encrypt only a seed to a PRG/PRF that generates the randomness from which the private key is generated (e.g., for RSA you would encrypt two 256-bit values whose PRG expansions give you the primes that form your private key). More precisely and more generally, the encrypted value is a seed to the key generation procedure. How time-consuming this generation is depends on the PK scheme.
Note that long public keys (which, unlike the private keys above, cannot be compressed) are not a problem, since they do not need to be encrypted.

hugokraw commented 3 years ago

Deterministic generation of RSA keys is non-standard, and the tricks and hacks to make it work are quite "roll your own".

I sent the previous response without reading the full thread, so I am repeating some things people have already said. The issue is not deterministic generation but expansion of a seed into a full key generation. Note that any key generation starts from a seed to a PRG, so this technique is quite universal. Where the tricks come into play is in reducing the cost of key generation (which in our case happens online, during user authentication). It is not the end of the world, and hopefully not too much of an issue with RSA (which is coming to the end of its life), but it can be an issue with lattice-based and other PQ algorithms.

bytemare commented 3 years ago

Yes, you're totally right, I was wrong to use that term; I meant exactly what you described so well. Thank you!

As for reproducing an RSA key this way, do you know of a method to reduce the cost to something acceptable if it turns out to be an issue (e.g., when running in-browser or on a low-power device)?

hugokraw commented 3 years ago

I am not an expert on what implementations do these days for selecting random primes. But if we assume that primes are chosen by picking random values and testing them for primality, then at password registration time you would derive a PRF key P from rwdU. Then test the values PRF_P(1), PRF_P(2), ... and choose the first two that are prime as your RSA primes. You can make the process of reconstructing the primes very efficient by putting in the envelope the positions i and j where the primes were found (these do not need to be encrypted).
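A sketch of this registration-time search and the stored indices (illustrative only: the PRF is HMAC-SHA-256, the 512-bit prime size and helper names are made up for the example, and no real RSA key-generation checks are performed):

```python
import hashlib
import hmac
import secrets

PRIME_BITS = 512  # kept small for illustration; RSA-2048 needs 1024-bit primes

def prf(key: bytes, index: int) -> int:
    """PRF_P(index): expand to a PRIME_BITS-bit odd candidate with the top bit set."""
    out, block = b"", 0
    while len(out) * 8 < PRIME_BITS:
        msg = index.to_bytes(4, "big") + block.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        block += 1
    n = int.from_bytes(out[:PRIME_BITS // 8], "big")
    return n | (1 << (PRIME_BITS - 1)) | 1

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin with random bases (sufficient for a sketch)."""
    if n < 2 or n % 2 == 0:
        return n == 2
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def find_prime(key: bytes, start: int) -> tuple[int, int]:
    """Return (index, prime): the first prime PRF output at or after `start`."""
    i = start
    while not is_probable_prime(prf(key, i)):
        i += 1
    return i, prf(key, i)

# Registration: derive a PRF key P from rwdU (stand-in value here), search for primes,
# and store only the indices i, j (in the clear) so login can re-derive p and q from P.
P = hashlib.sha256(b"stand-in for a key derived from rwdU").digest()
i, p = find_prime(P, 1)
j, q = find_prime(P, i + 1)
```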

bytemare commented 3 years ago

This was the method I was referring to, but I'm not aware of how "standard" it is (if it needs to be), or of a reference paper on it. (Also, I'm always wary around RSA.)

kevinlewi commented 3 years ago

In #121, the proposal is actually to use fixed-length keys instead of variable ones. This means that rather than limiting the maximum key size to be 255 bytes as I wrote earlier, we would instead use a parameter (e.g. Nk) to denote the key size, which could be set by the particular ciphersuite.

I believe this means that we can defer the conversation of whether or not PQ algorithms or RSA would be restricted by this 255-byte limit, since such future specifications could simply set the Nk value to whatever is appropriate.

See the new comment I appended to the PR here (https://github.com/cfrg/draft-irtf-cfrg-opaque/pull/137#issuecomment-774785807) which should incorporate this.

hugokraw commented 3 years ago

I thought the concern was that HKDF limits the output to 255*HashLen, and one could presumably have some algorithms that would require a larger private key (but see below). How does fixing the length of the key to some parameter address this issue? If Nk is shorter than 255*HashLen then there wasn't a problem to start with, and if it is larger then you still have an issue with HKDF not producing enough bits for the encryption pad. I must be misunderstanding this.

Btw, I got confused earlier in this thread when I said we have a limit of 255 bytes for the HKDF pad (hence for the private key). The limit is 255*HashLen bytes, which for a minimal HashLen of 32 bytes gives 255*32 = 8,160 bytes (65,280 bits), which does not seem too restrictive.

kevinlewi commented 3 years ago

So, the problem that we are discussing originates from lines like this:

struct {
  opaque client_private_key<1..2^16-1>;
} SecretCredentials;

It says that the client_private_key value is allowed to be of any length between 1 and 2^16-1 bytes. However, because encrypting this value requires an HKDF-Expand output (the pad) of the same length, if it were actually 2^16-1 bytes long we would run into conflict with the HKDF limitations.

So, a couple of options for resolving this issue:

Also, restricting the value to be exactly Nk bytes long means that we no longer have to length-prefix these currently variable-length values (thus saving 1-2 bytes on the wire).

Hope that makes sense @hugokraw!

hugokraw commented 3 years ago

Your solution does not address the real issue here, which is the limit of 8160 bytes imposed by HKDF. To me it makes more sense to have this as a limit rather than requiring the exact number to be configured. The latter reduces flexibility that future applications may want, without much gain. But, as always, for decisions that do not have a significant security/cryptography implication, I defer to you guys.

chris-wood commented 3 years ago

@hugokraw's suggestion here should give us a way to specify an HKDF wrapper that avoids the length issue. I think the best thing is to just bake this into the spec, underneath a new generic KDF interface. I'll work on a PR for this!

hugokraw commented 3 years ago

Do you think it is worth adding this additional level of abstraction? It is not very likely that this length issue will be encountered, and a suggestion about this simple workaround should be enough, don't you think?

hugokraw commented 3 years ago

PS: As you see, after preaching for generality, I am now recommending against it. You can't win...

chris-wood commented 3 years ago

The "problem" is that a suggestion isn't specific enough. The only ways forward I see are to say one of the following in the spec:

  1. If L < limit, don't do the workaround. Else do the workaround.
  2. Always do the workaround, handling all L values.

I'm suggesting we go with (2), since it makes things future-proof and also removes this branch from the code, with very minimal overhead.

hugokraw commented 3 years ago

If you want to specify this, then the truly clean way of doing it is to use HKDF to derive a seed for a PRG and then apply a PRG that is not limited in output length. But this would require another primitive and its instantiation. Your suggestion is equivalent to specifying a way to use HKDF as an unbounded-length PRG. (These may be overly academic remarks; feel free to go whichever way the engineering calls for.)
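A sketch of that layering, for what it's worth: HKDF derives a short seed, and a PRG, here an HMAC-SHA-256 counter mode chosen purely for illustration, stretches it without the 255-block cap (helper names and inputs are hypothetical):

```python
import hashlib
import hmac

def prg(seed: bytes, length: int) -> bytes:
    """Counter-mode PRG over HMAC-SHA-256; the 8-byte counter imposes no practical cap."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

# HKDF-Extract with an all-zero salt, then a single-block HKDF-Expand (L = HashLen,
# so one HMAC call) to get the 32-byte PRG seed; the PRG then stretches it freely.
prk = hmac.new(b"\x00" * 32, b"example input keying material", hashlib.sha256).digest()
seed = hmac.new(prk, b"envelope PRG seed" + b"\x01", hashlib.sha256).digest()
pad = prg(seed, 20_000)  # well past 255*HashLen without chaining Expand calls
```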

chris-wood commented 3 years ago

If you want to specify this, then the truly clean way of doing it is to use HKDF to derive a seed for a PRG and then apply a PRG that is not limited in output length. But this would require another primitive and its instantiation. Your suggestion is equivalent to specifying a way to use HKDF as an unbounded-length PRG.

Yep, exactly! That's what I planned on doing. I'll try and prioritize a PR for this today.