secure-systems-lab / dsse

A specification for signing methods and formats used by Secure Systems Lab projects.
https://dsse.dev
Apache License 2.0

Specify DSSE Signature encoding in the Protocol or as a Parameter #49

Open CrossedSecurity opened 2 years ago

CrossedSecurity commented 2 years ago

In the current state, we have DSSEs that contain a signature and the information needed to generate the PAE(message) that gets signed. When a DSSE verifier is created, you must specify a signature algorithm (e.g. ECDSA), a few other parameters, and a signature encoding scheme (e.g. DER, IEEE_P1363). Unfortunately, if you attempt a DSSE verification using the incorrect signature encoding, the crypto library is unlikely to tell you that, and it's rather painful to debug.

To avoid ambiguity about which algorithm/parameters/signature encoding was used to sign a DSSE's PAE(message), we should either add these as a requirement in the spec, or provide a place to specify which of them were used. If we decide to add this to the protocol, we should consider which systems are able to produce each encoding format, and how difficult converting between them is.
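For reference, the PAE(message) mentioned above is defined by the DSSE spec as a length-prefixed concatenation; a direct transcription in Python:

```python
# PAE ("pre-authentication encoding") as defined in the DSSE protocol:
# PAE(type, body) = "DSSEv1" + SP + LEN(type) + SP + type + SP + LEN(body) + SP + body
# where LEN is the byte length rendered as a decimal ASCII string.
def pae(payload_type: str, payload: bytes) -> bytes:
    t = payload_type.encode("utf-8")
    return b"DSSEv1 %d %b %d %b" % (len(t), t, len(payload), payload)
```

This reproduces the worked example from the spec: `pae("http://example.com/HelloWorld", b"hello world")` yields `b"DSSEv1 29 http://example.com/HelloWorld 11 hello world"`.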

trishankatdatadog commented 2 years ago

Hmm, pretty sure we decided to delegate this job to the application using DSSE. For example, both TUF and in-toto (for which DSSE was proposed) define signature algorithm parameters within the signed message.

@MarkLodato @SantiagoTorres thoughts here?

MarkLodato commented 2 years ago

To clarify, this is purely about encoding, not the algorithm.

@trishankatdatadog You are correct that we explicitly decided not to include a per-message "algorithm" parameter to avoid confusion attacks, learning from the mistakes of JOSE/JWT (converting alg=RS256 to alg=HS256). Instead, we recommend that the algorithm should be decided by the public key type.

But within an algorithm, there are multiple ways to encode the signature. For ECDSA, the signature is logically a pair of positive integers (r, s), and there are two main ways to encode this into a byte sequence:

  • DER: an ASN.1 SEQUENCE containing the two INTEGERs (variable length).

  • IEEE P1363: the fixed-width, big-endian concatenation r || s.

Furthermore, some libraries prepend a library-specific prefix to the signature. For example, Tink optionally includes a 5-byte prefix: a version field followed by a four-byte hash of the key.
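To make the difference concrete, here is a minimal sketch of converting an ECDSA signature between the two encodings. It handles only short-form DER lengths and is illustrative, not production-grade; a real implementation should use a vetted ASN.1 library.

```python
# Sketch only: convert an ECDSA signature between IEEE P1363 (fixed-width
# r || s) and ASN.1 DER (SEQUENCE of two INTEGERs). Assumes short-form
# DER lengths (total encoding under 128 bytes), which holds for P-256.

def _der_int(n: int) -> bytes:
    # DER INTEGER: minimal big-endian bytes, with a leading 0x00 byte
    # added when the high bit is set (so the value stays non-negative).
    body = n.to_bytes((n.bit_length() + 8) // 8 or 1, "big")
    return bytes([0x02, len(body)]) + body

def p1363_to_der(sig: bytes) -> bytes:
    half = len(sig) // 2
    r = int.from_bytes(sig[:half], "big")
    s = int.from_bytes(sig[half:], "big")
    body = _der_int(r) + _der_int(s)
    return bytes([0x30, len(body)]) + body  # SEQUENCE header

def der_to_p1363(sig: bytes, coord_size: int) -> bytes:
    assert sig[0] == 0x30, "expected a DER SEQUENCE"
    i, ints = 2, []  # skip the 2-byte SEQUENCE header
    for _ in range(2):
        assert sig[i] == 0x02, "expected a DER INTEGER"
        length = sig[i + 1]
        ints.append(int.from_bytes(sig[i + 2:i + 2 + length], "big"))
        i += 2 + length
    r, s = ints
    return r.to_bytes(coord_size, "big") + s.to_bytes(coord_size, "big")
```

For P-256, `coord_size` is 32, so the P1363 form is always exactly 64 bytes, while the DER form varies between roughly 68 and 72 bytes.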

I think we have a few options:

  • (A) Communicate the encoding as part of the public key parameters.

  • (B) Add an encoding field to the DSSE envelope.

  • (C) Mandate a single encoding per algorithm in the spec.

Personally, I'd go with (C) to make interoperability easier, and for ECDSA in particular I'd choose IEEE P1363. If a system only supports DER but we mandate P1363, the DSSE library could provide a small shim to convert. This also avoids the problems with library-specific prefixes.

The only one I don't like is (B) since it makes the envelope more complex.

MarkLodato commented 2 years ago

Oops, re-reading @CrossedSecurity's initial post, I see now that he did ask about the algorithm. I was basing my comment on our earlier private conversation. I think we should scope this issue to just encoding, since, as mentioned above, we previously decided not to transmit the algorithm in-band.

trishankatdatadog commented 2 years ago

To clarify, this is purely about encoding, not the algorithm.

I see, thanks for the clarification!

  • (A) Communicate the encoding as part of the public key parameters.

My vote is for this: consistent with our approach for the rest of the metadata (e.g., algorithm parameters) and also allows for flexibility.
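A rough sketch of what option (A) could look like on the verifier side, with a hypothetical `encoding` field carried alongside the key's algorithm parameters. The field names and values here are illustrative, not part of the DSSE spec.

```python
# Hypothetical key record for option (A): the signature encoding travels
# with the public key metadata, not with each message. Field names are
# made up for illustration; none of this is specified by DSSE today.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class PublicKeyRecord:
    keyid: str
    algorithm: str  # e.g. "ecdsa-p256-sha256"; decided by key type, not per message
    encoding: str   # e.g. "ieee-p1363" or "der"

def decode_p1363(sig: bytes) -> Tuple[int, int]:
    # IEEE P1363: fixed-width big-endian r || s.
    half = len(sig) // 2
    return (int.from_bytes(sig[:half], "big"), int.from_bytes(sig[half:], "big"))

# Dispatch table mapping each published encoding name to a decoder
# that yields the logical (r, s) pair.
DECODERS: Dict[str, Callable[[bytes], Tuple[int, int]]] = {
    "ieee-p1363": decode_p1363,
    # "der": decode_der,  # would parse the ASN.1 SEQUENCE instead
}

def decode_signature(key: PublicKeyRecord, sig: bytes) -> Tuple[int, int]:
    try:
        return DECODERS[key.encoding](sig)
    except KeyError:
        raise ValueError(f"unsupported signature encoding: {key.encoding}")
```

The point of the dispatch is that an unrecognized or missing encoding fails loudly up front, rather than surfacing later as an opaque "invalid signature" error from the crypto library.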

CrossedSecurity commented 2 years ago

Right, sorry. I was thinking that if we were to specify or recommend the signature encoding somewhere, then we'd have to pair that with whichever signing algorithm was used (RSA doesn't use DER signatures, for instance). But yeah, we could just infer that from the public key.

(A) is a good solution but could present an issue if somebody signs something with that key using the incorrect signature encoding. It's error-prone because there is no hard link between the published signature encoding and what actually happens during signing. At least if this information is published, it's a little easier to debug when that happens.

If we're already doing this with the other algorithm parameters, this would be the easiest path forward.

(C) would also be fine IMO, with the caveat that some KMSs only output certain encodings (e.g. GCP Cloud KMS only outputs DER).

trishankatdatadog commented 2 years ago

(A) is a good solution but could present an issue if somebody signs something with that key using the incorrect signature encoding. It's error-prone because there is no hard link between the published signature encoding and what actually happens during signing. At least if this information is published, it's a little easier to debug when that happens.

If we're already doing this with the other algorithm parameters, this would be the easiest path forward.

Precisely: if the encoding is published and signed, then that's what the signer should follow. WDYT?