theupdateframework / python-tuf

Python reference implementation of The Update Framework (TUF)
https://theupdateframework.com/
Apache License 2.0

Signature Standardization and Denoting Hash Algorithms Used #425

Closed: awwad closed this 7 years ago

awwad commented 7 years ago

There are two concerns here: first, potentially inconsistent (and therefore, in a reference implementation, confusing) signing behavior; and second, the need for signature metadata to provide adequate data to check the signature (which may call for a TAP). I should add that since this is not my area of expertise, I could use some checking on this.

Consistent Signing Behavior

The different signing processes employed in the TUF code (implemented in ed25519_keys.py, pycrypto_keys.py, and pyca_crypto_keys.py) seem to differ in what they feed to the underlying signing libraries. Consequently, IIUC, a call to keys.create_signature() may sign different data depending on the type of key provided. Knowing the standard signing algorithm used (e.g. RSASSA-PKCS1-v1_5) still doesn't provide enough information to know how to validate our signatures.

Specifically, pycrypto_keys.create_rsa_signature() seems to perform an extra hashing step (always SHA-256) before the underlying Crypto library (Crypto.Signature.PKCS1_PSS) carries out the standard signature algorithm, whereas ed25519_keys.create_signature() performs no such extra hash step before calling the nacl library's sign() method. pycrypto_keys.verify_rsa_signature() performs the same SHA-256 hash before validating the signature via the Crypto library.
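
To make the asymmetry concrete, here is a simplified sketch (written from memory, not the exact TUF code; data, private_rsa_key, and seed are placeholders):

from Crypto.Hash import SHA256
from Crypto.Signature import PKCS1_PSS
import nacl.signing

data = b'example payload'  # placeholder for the metadata bytes being signed

# RSA path (pycrypto_keys): the data is hashed with SHA-256 first, and that
# digest object is what PyCrypto's PSS implementation consumes.
digest = SHA256.new(data)
rsa_sig = PKCS1_PSS.new(private_rsa_key).sign(digest)  # private_rsa_key: placeholder RSA key object

# ed25519 path (ed25519_keys): the raw data goes straight into nacl's sign(),
# with no separate SHA-256 step beforehand.
ed_sig = nacl.signing.SigningKey(seed).sign(data).signature  # seed: placeholder 32-byte seed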

My uncertainty on this first issue, from lack of experience, is whether these disparate steps are actually standard and obvious to anyone employing those algorithms (they don't seem obvious to me, at least), or whether they are important for some algorithms and not for others. (In the latter case, the behavior should probably still be uniform across all of them.)

My thoughts on how this should work:

  1. The user of keys.py should pass in the raw data itself (and, optionally, a hash method).
  2. keys.py should always take the hash of the given data and discard the data itself in favor of this hash.
  3. keys.py should pass that hash to the underlying TUF crypto modules (ed25519_keys.py, pyca_crypto_keys.py, pycrypto_keys.py).
  4. Those underlying modules should not take the hash (again) of the data they're given before calling the standard crypto libraries that do the actual signing.

This assumes that taking the hash first is actually ever beneficial in terms of speed. If all the crypto libraries we're going to use are unaffected (say, because they hash first, too), then we should skip step 2 above and instead provide the full data itself to the standard crypto libraries. A minimal sketch of the proposed flow follows.
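
For example, a minimal sketch of steps 1-4 (hypothetical names; sign_digest stands in for whichever backend module is selected, and must not hash again):

import hashlib

def create_signature(key_dict, sign_digest, data, hash_algorithm='sha256'):
    # Step 2: hash the raw data exactly once and keep only the digest.
    digest = hashlib.new(hash_algorithm, data).digest()
    # Step 3: hand only the digest to the backend module (ed25519_keys,
    # pyca_crypto_keys, or pycrypto_keys); per step 4, the backend signs the
    # digest as-is instead of hashing it again.
    signature = sign_digest(key_dict, digest)
    # Record the hash algorithm so a verifier can replicate the extra step.
    return {'keyid': key_dict.get('keyid'), 'hash_alg': hash_algorithm, 'sig': signature}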

Additional Information Necessary in Signature Metadata

If we are going to hash data first (step 2 above), it should be done consistently across the signing code, as discussed above, and it may be that the hash algorithm should be included in the signature metadata (so that the client can replicate this extra step and confirm the signature). This requires a tweak to the TUF spec.

Signature metadata before:

       { "signed" : ROLE,
         "signatures" : [
            { "keyid" : KEYID,
              "method" : METHOD,
              "sig" : SIGNATURE }
            , ... ]
       }

Signature metadata after:

       { "signed" : ROLE,
         "signatures" : [
            { "keyid" : KEYID,
              "method" : METHOD,
              "hash_alg": HASHTYPE,        # e.g. sha256, sha512
              "sig" : SIGNATURE }
            , ... ]
       }

If there are also other properties/parameters of the signing process that the party validating the signature needs to know, you could argue that those should be included as well.

Thoughts?

awwad commented 7 years ago

(This emerges from me trying to work through how Uptane should handle dealing with signatures over BER instead, and whether or not to hash an extra time.)

JustinCappos commented 7 years ago

I'm fine with this. Yes, the hash algorithm needs to be specified.

heartsucker commented 7 years ago

In the other direction, we had some respected (but unmentionable at this time) auditors review our implementation, and they suggested that the method should not be in the signatures portion of the metadata, because that puts it into attacker-controlled space.

I think the other direction might make sense. Role keys are defined with both the key type and full method. That method, and only that method, can be used for verifying metadata.

For example, a root.json would at present be:

{
  "signatures": [
    { "keyid": "123",
      "method": "rsassa-pss-sha256",
      "sig": "abc"
    },
  ],
  "signed": {
    ...
    "keys": {
      "123": {
        "keytype": "rsa",
        "keyval": {
          "public": "----- BEGIN RSA PUBLIC KEY ---- <etc>"
        }
      }
    }
  }
}

And it would be changed to:

{
  "signatures": [
    { "keyid": "123",
      "sig": "abc"
    },
  ],
  "signed": {
    ...
    "keys": {
      "123": {
        "keytype": "rsa",
        "scheme": "rsassa-pss-sha256",
        "keyval": {
          "public": "----- BEGIN RSA PUBLIC KEY ---- <etc>"
        }
      }
    }
  }
}

heartsucker commented 7 years ago

But if you guys don't agree with this idea, I think it doesn't make sense to have both method and hash_alg fields. I think the spec should say "the method should be a full description of the signature scheme used," so instead of having:

{ "method": "rsassa-pss",
  "hash_alg": "sha256" }

You would just have:

{ "method": "rsassa-pss-sha256" }

This would also reduce code complexity for implementations a bit, since there would be one less field to serialize / deserialize, and fewer code paths for matching methods and hashes together.
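
For instance, an implementation could still recover both pieces from the single string with one small helper (hypothetical, not part of the reference implementation) rather than a second metadata field:

def split_method(method):
    # 'rsassa-pss-sha256' -> ('rsassa-pss', 'sha256'); 'ed25519' -> ('ed25519', None).
    for h in ('sha256', 'sha384', 'sha512'):
        if method.endswith('-' + h):
            return method[:-(len(h) + 1)], h
    return method, None  # schemes like 'ed25519' name no separate hash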

vladimir-v-diaz commented 7 years ago

This issue, where a signature algorithm can potentially be matched to an incorrect key, was raised by the NCC Group in their security audit of Notary (Docker's implementation of TUF). Subsection 3.3 of the linked PDF covers the issue in more detail. According to the PDF, Docker decided to add sanity checks in Notary to verify that the signature method listed in the "signatures" field is valid for its trusted key.

I agree that it is safer to exclude the signature method from the "signatures" field of metadata. What we need to decide is where to store the signature method that is used when signatures are created. At the moment, keys that are delegated or listed in Root are labelled by keytype (e.g., rsa, ed25519).

There are three potential solutions in my opinion:

(1) In the "keytype" field, the signature method should be specified as well (e.g., rsa-rsassa-pss). (Although it is strange to do so, since an RSA key can be used to generate multiple, different types of signatures.)

(2) Add a separate "method" field, alongside the "keytype" as you've proposed.

(3) Associate a keytype with a fixed signature method. That is, any RSA key listed in metadata is used by the implementation/adoption to only generate rsassa-pss signatures. Perhaps the signature method can be configurable via a client-side setting.

I am leaning towards option (3). It's simpler, easier to implement, and avoids having to do sanity checks. The specification states that the reference implementation supports rsassa-pss and ed25519, but the specification accommodates any key type and signing method (the framework isn't restrictive in this regard).
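
To illustrate, option (3) could be as small as a client-side table pinning each keytype to one scheme (hypothetical names, not the reference implementation's API):

# One fixed scheme per keytype; changing it is a client-side configuration choice.
KEYTYPE_TO_SCHEME = {
    'rsa': 'rsassa-pss',
    'ed25519': 'ed25519',
}

def scheme_for_key(key_dict):
    # Raises KeyError for an unsupported keytype instead of trusting
    # anything found in the signature entry.
    return KEYTYPE_TO_SCHEME[key_dict['keytype']]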

What do others think? @JustinCappos @trishankkarthik @awwad

awwad commented 7 years ago

@heartsucker has a good point. Vlad and I talked through this a bit.

There are two potential problems: cross-type (an attacker specifies a method intended for a different key type) and intra-type (an attacker specifies a different method that is still valid for the same key type).

I don't know whether or not intra-type is a problem. (I'd love to know, btw.) The security audit for Docker that @vladimir-v-diaz linked happens to mention only cross-type (use of an ed25519 method for an rsa key). Sanity checks that make sure the method provided in the signature is valid for the key type provided in the delegation are only adequate if intra-type is not a problem, because they will not protect you against an attacker instructing you to use a different method for the same key type.

Sanity checks also have the additional problem of being easy for an implementer to fail to perform. Either of those two problems makes sanity checks unappealing for us, I think.

Option (3) seems like an adequate approach that an implementer can choose to do, but not something that we can choose to demand of implementers, who may need to support broader flexibility (Docker, Uptane).

I think we should therefore go with either (1) or (2): all information about the signature method should be stored in the delegation (within the delegating role's "signed" component), which means removing the method field from the signature (as (3) would, btw). This has the disadvantage that if a delegatee is going to change their signing method (e.g. pkcs to pss), they will need the delegating role to sign new metadata specifying the new method for that key.

As for choosing between (1) and (2) - using the same field in the delegation for method and keytype or using two separate fields - I don't have a preference. "keytype" and "scheme" make sense to me. It makes sense to collapse hash into scheme, as well, though I'm somewhat worried about it being inferred to refer to the type of hash used internally in one of the signing methods rather than the type of hashing done before the signing method is called.

Thoughts?

heartsucker commented 7 years ago

This has the disadvantage of meaning that if a delegatee is going to change their signing method (e.g. pkcs to pss), they will need the delegating role to sign new metadata specifying the new method for that key.

I think this is a good thing for reasons of compliance and higher security. It would prevent someone from downgrading a key from a secure signing scheme to a less secure one without alerting the delegator. I really think this is a good thing in that the delegator says "you must use exactly this signing scheme that I approve of for all signatures."

It makes sense to collapse hash into scheme, as well, though I'm somewhat worried about it being inferred to refer to the type of hash used internally in one of the signing methods rather than the type of hashing done before the signing method is called.

I think this would be up to the implementors to figure out a name for the method that uniquely identifies the signature and hash and all parameters. E.g., rsassa-pss-scrypt-256bits-n10-r8-p1 would fully specify how to calculate the signature.

I'm also not sure that we need to worry much about servers and clients being able to use keys for any signing scheme. For any given role, what advantage is gained by letting Key A arbitrarily select one of some N signing schemes to use each time?

JustinCappos commented 7 years ago

[Jetlagged / in another meeting, so please forgive typos...]

One of the issues that we need to consider is that many keys in TUF are intended to be used very rarely. For example, PyPI has delegated targets roles such as claimed, unclaimed, etc., which delegate to thousands of project keys. https://www.python.org/dev/peps/pep-0458/

Suppose that in the future some projects want to move to a more secure signing scheme. A design that would require the PyPI admins to re-sign the metadata for each project seems concerning. Similarly, having a 'flag day' where everyone must re-sign everything is also impractical.

I understand and agree with your concern about letting clients use any signing scheme. Perhaps there is a sane way to restrict what schemes can be used?

It clearly would be possible to extend the delegation mechanism to support flagging by signing scheme (as Vlad was suggesting). This could be optional, imposing no restriction when not stated.

It also would be possible to have the root role specify this as part of the configuration.

I agree though that the type of signature should be protected...

Anyways, let's discuss more.

heartsucker commented 7 years ago

A design which would require the PyPI admins to re-sign the metadata for each project seems concerning.

I don't really see a way around this. However, this likely wouldn't happen very often, and it would make sense for a project to do this with a delay. For example, they request that their signing scheme be updated from X to Y. They sign their current metadata with X and with Y, then submit both to the project. The next time the delegations are updated, the project starts serving metadata signed with Y.

Even if schemes are restricted, we'd still have something in the attacker's control, or we'd have to do something like:

for scheme in role.approved_schemes():
    if pub.verify(scheme, metadata, sig):
        return

raise CryptoError

That seems like it could be a source of vulnerabilities, not to mention unnecessary computation.

vladimir-v-diaz commented 7 years ago

For example, they request that their signing scheme is updated from X to Y. They sign their current metadata with X and with Y, then submit both to the project. The next time the delegations are updated, the project starts serving metadata signed with Y. -heartsucker

These sorts of approaches might add complexity to threshold signatures and multi-signature trust. Suppose role A delegates to B, and A requires that B's metadata be signed by at least 2 out of 3 keys.

In the scenario you present, the client will have to make sure that X and Y are treated as one unique key. Same key, different signing schemes. I've come across bugs of this kind when using signing schemes that can generate multiple, different signatures with the same key.
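
A hedged sketch of the check that avoids that class of bug (verify_entry is a hypothetical callback that verifies one signature entry against trusted metadata):

def meets_threshold(signatures, canonical_metadata, threshold, verify_entry):
    # Count distinct verified *keys*, not distinct valid signatures: one key
    # signing under both X and Y must count only once toward a 2-of-3 threshold.
    verified_keyids = {s['keyid'] for s in signatures
                       if verify_entry(s, canonical_metadata)}
    return len(verified_keyids) >= threshold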

How do we deal with metadata that uses a signing scheme that has since been deprecated? An idea we present in the Python/pip proposals is to have a PyPI role sign for abandoned roles. Similarly, we can have PyPI sign for projects whose metadata is out of date due to deprecated signing schemes, until the project switches to a supported signing scheme.

I understand and agree with your concern about letting clients use any signing scheme. Perhaps there is a sane way to restrict what schemes can be used? -JustinCappos

Do you think that one repository should support multiple signing schemes? For example, on the PyPI repository a role can use a key with any signing scheme in [X, Y, Z]. Or should a repository be free to choose which signing scheme is used by all metadata on the repository?

JustinCappos commented 7 years ago

Okay, I've been thinking more on this, and I've come around to heartsucker's suggestion of having the key type and the signature scheme specified in the delegation. To me this is effectively the same as having a different type of key. If you change the algorithm ('rsa' -> 'dsa'), you are expected to change the key ('a4290fcd...' -> '94f3352...'). So, equivalently, if you change the scheme for signing, you change the key / algorithm.

Vlad, I'm not sure how this relates to either threshold or multi-role delegations. It just means there is one entry for a key / signing scheme / alg that needs to change, right? The others should still work, as I understand it.

Once we all agree, I think we could do a quick TAP on this. This likely will be TAP 9, because TAP 8 should be submitted tomorrow.

Justin

heartsucker commented 7 years ago

These sorts of approaches might add complexity to threshold signatures and multi-signature trust. Suppose role A delegates to B, and A requires that B's metadata be signed by at least 2 out of 3 keys.

Sorry, I wasn't clear here. Let me try again.

Let's say a project has some delegation D from the targets role. At targets.json version 1, D has key K that is assigned to use the signing scheme S1. D uses K to sign delegation.json with S1. D decides to change to signing scheme S2 and notifies the targets role. Until targets.json version 2 comes out authorizing usage of S2, D creates two copies of delegation.json and submits both (using K to do S1 and S2). This means that the project supports valid metadata for the old and new schemes until targets.json version 2 comes out. From then on, D will sign only one copy of delegation.json with K, using S2.

If you change the algorithm ('rsa' -> 'dsa'), you are expected to change the key ('a4290fcd...' -> '94f3352...').

I would disagree on this, because calculating key IDs is already a little annoying (extra whitespace on PEMs, PKCS#1 vs. PKCS#8 vs. SPKI, hex vs. b32 vs. b64, etc.). I think the key ID could remain static across different signature schemes, given the (hopeful) clarification above.

heartsucker commented 7 years ago

How do we deal with metadata that uses a signing scheme that has since been deprecated? An idea we present in the Python/pip proposals is to have a PyPI role sign for abandoned roles. Similarly, we can have PyPI sign for projects whose metadata is out of date due to deprecated signing schemes, or until the project uses a supported signing scheme.

I think what would happen is: when the targets role rotates the signing scheme and the delegation doesn't support it, you would take the PyPI approach and just have PyPI itself sign only the old packages until the developer can start signing their own packages again. Since the old packages were considered ok at some point, when the signing scheme was allowed, it seems ok to let them remain available.

JustinCappos commented 7 years ago

So you'd like the keyids to remain the same, while the signature algorithm changes. I understand and agree that it would be possible to sign the same targets file with multiple keys. Vlad: Are additional signatures supported / ignored in TUF, or would this need to be added?

I'm definitely in favor of having this information inside the signed portion of the data. I'm also mildly in favor of having the signing key type and hash algorithm combined.

Also, on a related note, should the key type be used to compute the keyid somehow? It seems a really short RSA key could be mistaken for an ed25519 key, or a particularly oddly structured ed25519 key could be a valid RSA key. There are likely other cases. (I am worried about 'colliding' files, where people are able to replace alice's targets file with one of their own by using a different key type, etc.)

vladimir-v-diaz commented 7 years ago

Vlad: Are additional signatures supported / ignored in TUF, or would this need to be added?

Yes, you can add multiple signatures to the "signatures" portion of metadata. Each signature entry can use a different signing method. Here is what that currently looks like:

{
  "signatures": [
    {
      "keyid": "5602f4df...",
      "method": "RSASSA-PSS",
      "sig": "7965740b0a73d..."
    },
    {
      "keyid": "78da92...",
      "method": "ed25519",
      "sig": "3498daff21efb..."
    }
  ],
  ...
}

Also, on a related note, should the key type be used to compute the keyid somehow?

At the moment, we do use the keytype to compute the keyid.
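
Roughly, the computation looks like the sketch below (json.dumps stands in for the implementation's own canonical-JSON encoder), which is why the keytype is bound into the keyid:

import hashlib
import json

def compute_keyid(key_dict):
    # Hash a canonical encoding of the whole public-key object ('keytype',
    # 'keyval', ...), so two keys differing only in keytype get different keyids.
    canonical = json.dumps(key_dict, sort_keys=True, separators=(',', ':'))
    return hashlib.sha256(canonical.encode('utf-8')).hexdigest()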

vladimir-v-diaz commented 7 years ago

Vlad, I'm not sure about how this relates to either threshold or multi-role delegations? It just means there is one entry for a key / signing scheme / alg that needs to change right? The others should still work, as I understand it.

Okay, so you are saying that we should treat a (keytype + signing scheme + hash algorithm) combination as one unique key. A developer can have access to a single rsa_key.pem file and generate multiple, different signatures with that one key to satisfy a threshold.

We should use a different name for this new view of a key (instead of "keytype") to avoid confusion.

Even if schemes are restricted, we'd still have something in the attacker's control or we'd have to do -heartsucker

for scheme in role.approved_schemes():
    if pub.verify(scheme, metadata, sig):
        return

raise CryptoError

If the keyid is listed in a signature entry, we can simply look up that trusted keyid's (key + signing method + algo) combination and validate the signature against that. I do not think we'd need to iterate over the approved_schemes.

heartsucker commented 7 years ago

Are additional signatures supported / ignored in TUF, or would this need to be added?

Allowing a key to sign a piece of metadata twice seems contradictory to moving the method field from the signature object into the role definition object. If the objective is to remove method from attacker-controlled space, a key can't sign a piece of metadata with two schemes, because there would be no way to know which scheme was used, and we get back to the problem of taking N signatures and matching them against M approved signing schemes. This feels wrong to me.

It seems a really short RSA key could be mistaken for a ed25519 key or that a particularly oddly structured ed25519 key could be a valid RSA key.

I think we're ok, because the spec says that RSA keys have to be 2048 bits or more, and Ed25519 public keys are much shorter (32 bytes vs. 256+ bytes).

I am worried about 'colliding' files where people are able to replace alice's targets file with one of their own by using a different key type, etc.

I think this is mitigated by a client-side check to ensure that a key has the correct format and a matching signing scheme. For example, let's say that a repo has a key K1 of type RSA with scheme RSASSA-PSS-SHA256 and hash / key ID "123...". Eve generates an Ed25519 key K2 with hash / key ID "123...". Eve uses this key to generate a bad signature. The client would see that, per the trusted metadata, the key must be parsed as an RSA key, which would fail. Thus, the whole verification would fail. If we're at the point where Eve can generate keys whose public components have colliding hashes, we're in a very bad place and TUF probably can't help us.
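
A rough sketch of that client-side check, with hypothetical parser helpers: the trusted (signed) keytype decides how the key bytes are parsed, so a colliding key of the wrong type fails before any signature math:

def load_trusted_public_key(trusted_entry, parsers):
    # `parsers` maps keytype -> parse function (e.g. {'rsa': parse_rsa_pem, ...});
    # parsing enforces the trusted keytype, so Eve's Ed25519 bytes fail to
    # parse as an RSA key and verification never begins.
    parse = parsers[trusted_entry['keytype']]        # KeyError: unknown keytype
    return parse(trusted_entry['keyval']['public'])  # raises on wrong key format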

Okay, so you are saying that we should treat a (keytype + signing scheme + hash algorithm) combination as one unique key. A developer can have access to a single rsa_key.pem file and generate multiple, different signatures with that one key to satisfy a threshold.

This is a good illustration of why a key ID should be calculated from only the public component of a key. I mentioned it for convenience, but it actually is a vulnerability given the current spec. We could add to the spec that a client needs to check that all key-id-signing-scheme identifiers used to sign a piece of metadata point to a unique set of keys (e.g., if KIDSS "123" and KIDSS "456" both point to key ID "abc", then the client would reject it). This seems unnecessarily complicated for this problem. (See further down.)

If the keyid is listed in a signature entry, we can simply lookup that trusted keyid's (key + signing method + algo) combination and validate the signature against that. I do not think we'd need to iterate the approved_schemes.

Correct, there should be a single lookup that returns Some(key, scheme) or None. In order to know the scheme used for any given signature, each key can be used for exactly one signing scheme, because we can't rely on the signatures section telling us anything about the scheme (since it is in attacker-controlled space).
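
In sketch form (hypothetical names; verify is the low-level primitive for one scheme):

def verify_signature(trusted_keys, sig_entry, canonical_metadata, verify):
    # trusted_keys maps keyid -> (public_key, scheme), taken from the *signed*
    # delegation, so the attacker-controlled signature entry cannot pick the scheme.
    entry = trusted_keys.get(sig_entry['keyid'])
    if entry is None:
        return False  # unknown keyid: reject rather than iterate over schemes
    public_key, scheme = entry
    return verify(scheme, public_key, sig_entry['sig'], canonical_metadata)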

The main problem this complexity is meant to solve (allowing multiple signatures from a given key while swapping between two signing schemes) seems like a non-problem. If a user loses their key or changes to a new key type (RSA -> Ed25519), then the delegator needs to sign new metadata to update this. If we treat the case of switching from, say, PKCS#1 to RSASSA-PSS for some RSA key exactly the same as switching from RSA key 1 to RSA key 2 (or RSA key 1 to Ed25519 key 2), then we have solved the problem.

I think we're overthinking it, because while it's technically possible to allow any key to use any supported scheme, it seems like a feature that isn't needed. By restricting each key to exactly one scheme and removing that choice from attacker-controlled space, we have actually reduced the complexity and the number of ways an attacker can influence how we process data. In some sense we lose the feature of allowing signers to use the scheme du jour, but I don't think allowing that actually gets us any benefits.

vladimir-v-diaz commented 7 years ago

Are additional signatures supported / ignored in TUF, or would this need to be added?

Allowing a key to sign a piece of metadata twice seems contradictory to moving the method field from the signature object into the role definition object. -heartsucker

The reason why we allow multiple signature entries is to support multi-signature trust and thresholds. Multiple developers or maintainers can independently sign a single piece of metadata to meet some threshold, say 2 out of 3 signatures must be present for that particular metadata to be valid.

Maybe I misunderstood Justin's question.

heartsucker commented 7 years ago

The reason why we allow multiple signature entries is to support multi-signature trust and thresholds. Multiple developers or maintainers can independently sign a single piece of metadata to meet some threshold, say 2 out of 3 signatures must be present for that particular metadata to be valid.

I think he meant: is it possible for one key to use two schemes to sign a piece of metadata twice when changing from scheme 1 to scheme 2? I was saying that this sounds like unnecessary complexity, and that we shouldn't treat changing signing schemes any differently than rotating keys (if we use the proposal of moving method into the role definition).

JustinCappos commented 7 years ago

So, going back to the actual proposal, are you still in favor of your originally proposed change (see below)? Any tweaks?

We should probably get a short TAP together on this and let others comment. I'm convinced there is a problem and I would be happy to have others with a TUF deployment to weigh in before we discuss too much further.

Thanks, Justin

P.S. If you want someone to help with the TAP, let us know. Someone from our side can pitch in.

{
  "signatures": [
    { "keyid": "123",
      "method": "rsassa-pss-sha256",
      "sig": "abc"
    },
  ],
  "signed": {
    ...
    "keys": {
      "123": {
        "keytype": "rsa",
        "keyval": {
          "public": "----- BEGIN RSA PUBLIC KEY ---- <etc>"
        }
      }
    }
  }
}

And it would be changed to:

{
  "signatures": [
    { "keyid": "123",
      "sig": "abc"
    },
  ],
  "signed": {
    ...
    "keys": {
      "123": {
        "keytype": "rsa",
        "scheme": "rsassa-pss-sha256",
        "keyval": {
          "public": "----- BEGIN RSA PUBLIC KEY ---- <etc>"
        }
      }
    }
  }
}

heartsucker commented 7 years ago

@JustinCappos Yes, I am still behind that original proposal. I can write a TAP and submit it as a PR to that repo, and from there y'all and the community can review and modify it.

Edit: for the curious. https://github.com/heartsucker/taps/tree/tap9

JustinCappos commented 7 years ago

Okay, sounds good! We'll be looking forward to TAP 9!

Justin

vladimir-v-diaz commented 7 years ago

So we eventually settled on TAP 9. It has been accepted and implemented, so I'm closing this issue!