notaryproject / notary

Notary is a project that allows anyone to have trust over arbitrary collections of data

Threshold signing validation #1058

Open ruimarinho opened 7 years ago

ruimarinho commented 7 years ago

I would like to have a better understanding of how threshold signing works. In principle:

Questions:

  1. Is the use of roles correct here? E.g. would it make more sense to rename targets/<username> to targets/deploy?
  2. How does one add a new signature to an image? Using docker push?
  3. How do I prevent Docker from running an image without the required signatures on each step? Right now, docker pull appears to skip it (DEBU[0000] skipping targets/qa because there is no checksum for it).
ecordell commented 7 years ago

Hey @ruimarinho, I think I can help with a couple of these.

Let me start by saying that currently, threshold signing is not supported in notary. A lot of the groundwork exists (and you'll see references to thresholds throughout the code), but there's currently no way to do threshold signing as defined in TUF. https://github.com/docker/notary/issues/841 is a discussion around some of these issues (as an aside, I have a local branch that can do some threshold signing, but that is pending some other merges before I focus on cleaning it up).

With that in mind, the scenario you're describing is actually not currently supported by notary or the TUF spec. Please see TAP3 for a (currently accepted, but pending some final changes) proposal for multiple-role threshold signing support.

The main problem with what you describe, given how TUF currently works, is that everything revolves around finding a "target" file (in this case an image reference). Once a TUF client finds the target file in metadata, and decides that it trusts the signatures for it, it's done. That means that in your scenario, a TUF client would find the target in targets/ci, trust the signatures since they're chained to the root, and call it a day. It doesn't care that the target might also be found in other metadata.
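
For example, here's a hypothetical, trimmed-down sketch (only the relevant fields are shown) of two delegations that both sign off on the same tag. A client walking delegations in priority order resolves the target from targets/ci and never consults targets/qa:

targets/ci.json (found first, so resolution stops here):

{
    "signed": {
        "targets": {
            "1.0.0": {
                "hashes": { "sha256": "abc" },
                "length": 6837
            }
        }
    }
}

targets/qa.json (identical target entry, never consulted for this target):

{
    "signed": {
        "targets": {
            "1.0.0": {
                "hashes": { "sha256": "abc" },
                "length": 6837
            }
        }
    }
}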

"Threshold" support as discussed in #841 means verifying a threshold of signatures within a single role. This would mean that you could have a targets/production role that has a ci key, a staging key, a sandbox key, and a user key, and with a 4/4 threshold that would be roughly analogous to the scenario you described. In that case, a production box would only see valid metadata if there were valid signatures corresponding to those keys. Unfortunately, this breaks down if:

Thresholds defined in metadata are required to be met in order for the metadata to be considered valid, and for the listed target to be trusted and downloaded. That means the TAP I mentioned above is probably not what you want; more than likely, you want something like the staged verification described above. You can do this today manually by creating multiple repositories that store the same target, and verifying the target exists in each of the repositories. Don't worry, there's a TAP4 for that too.
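
To make the single-role threshold idea concrete, here is a rough sketch of what the delegations section of targets metadata could look like for that targets/production role (key IDs are hypothetical and public keys are elided):

{
    "delegations": {
        "keys": {
            "ci_kid":      { "keytype": "ecdsa", "keyval": { "public": "..." } },
            "staging_kid": { "keytype": "ecdsa", "keyval": { "public": "..." } },
            "sandbox_kid": { "keytype": "ecdsa", "keyval": { "public": "..." } },
            "user_kid":    { "keytype": "ecdsa", "keyval": { "public": "..." } }
        },
        "roles": [{
            "name": "targets/production",
            "keyids": ["ci_kid", "staging_kid", "sandbox_kid", "user_kid"],
            "paths": [""],
            "threshold": 4
        }]
    }
}

With "threshold": 4, the targets/production metadata would only be considered valid once it carries signatures from all four keys.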

I think I addressed all of your questions but let me know if I should clarify something.

ruimarinho commented 7 years ago

Thank you for such a thorough response. My initial line of thinking was "staged verifications", although my understanding is that the thresholds feature would lay the groundwork for that.

Your suggestion is to create multiple repositories, such as example.com/app-ci:1.0.0 (deploy in staging), example.com/app-staging:1.0.0 (deploy in sandbox), and example.com/app-sandbox:1.0.0 (deploy in production)?

I have a couple of questions about this approach:

ecordell commented 7 years ago

My initial line of thinking was "staged verifications", although my understanding is that the thresholds feature would lay the groundwork for that.

Thresholds can be used for that as long as you don't also need to have multiple keys for each stage. E.g., if you need 2/3 QA keys to sign, but also want a CI key and a User key to sign, then a simple threshold of 4/5 won't represent the policy you're trying to implement.
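
To illustrate with a hypothetical delegation entry, a flat 4/5 threshold over three QA keys, a CI key, and a User key would look like this:

{
    "name": "targets/production",
    "keyids": ["qa_1", "qa_2", "qa_3", "ci", "user"],
    "paths": [""],
    "threshold": 4
}

Signatures from qa_1, qa_2, qa_3, and ci alone already meet the threshold, so the metadata validates without the user key ever signing. A flat threshold can't express "any 2 of the 3 QA keys, plus the CI key, plus the User key".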

I don't yet understand how I can force a docker pull instruction to verify that a given target exists. I guess for now docker would always check for targets/releases on each repository, which in this case means different things (being able to docker pull example.com/app-sandbox:1.0.0 would mean there is a signed image whose key has been delegated to the sandbox client).

Here's what targets metadata looks like for a docker image:

{
    "signed": {
        "_type": "Targets",
        "delegations": {
            "keys": {},
            "roles": []
        },
        "expires": "2019-12-07T16:03:44.426151186-05:00",
        "targets": {
            "latest": {
                "hashes": {
                    "sha256": "mysha"
                },
                "length": 6837
            }
        },
        "version": 2
    },
    "signatures": [{
        "keyid": "my_kid",
        "method": "ecdsa",
        "sig": "my_signature"
    }]
}

Pulling an image with the docker client with DCT enabled looks for the tag you're pulling in the targets metadata, then pulls the manifest by the hash specified (assuming the metadata validates). In the metadata above, that means pulling the manifest at sha256 "mysha" and checking the bytes received against that digest. After that everything else is content addressable, so verifying the top-level hash is all that's needed.

You would need to set these collections up in notary directly. If you're willing to do the work for that:

The reason you need multiple repositories is that there is no current way to tell notary to verify that a target exists in multiple delegated target metadata, and there's no way to tell the docker client to pay attention to any target delegation aside from targets/releases.

One way to clean this up might be to have an additional "production ready" repository, which uses normal DCT with the normal Docker client, but you only push to it once all of the other key checks are done (this would let your production boxes do a simple docker pull and leave the key management to others).

Maybe someone from the notary team will have an alternate solution for you, but I believe this is the only way to do what you want today.

ruimarinho commented 7 years ago

With the amount of image tags we produce, I think the process would be too error-prone for now. I might explore this concept purely as an academic exercise so I can get accustomed to notary. Thanks for sharing all this information.

endophage commented 7 years ago

@ecordell an interesting use case I'm seeing here, which also isn't solved with spec-compliant thresholding, is that currently thresholds are defined by the role. In @ruimarinho's case he wants thresholds defined by the environment. Even if we only have one key per signing category (ci, qa, etc...), staging might have a 2/4 threshold, while prod would want a 4/4 threshold. That conceptually can't be represented in TUF.

With that in mind, the signing policy implemented in Docker Data Center is an implementation (at least in spirit and goal) of TAP3 but with the definition of the threshold moved to the consumer, rather than signed into the repo.

The more I've thought about this, the more I find that these two different locations of thresholding (consumer vs. signed into the repo) solve different use cases, regardless of whether it's single or multi role/delegation. Thresholds signed into the repo define a minimum level of trust for a target and should always be observed. Thresholds defined by a consumer layer on top of those, defining process/organizational levels of trust and enabling things like CI, QA, staging, and production to set their own increasingly stringent trust levels.

As an example (and assume there is a single delegation for each step in the pipeline):

I don't think it would make sense to sign the CI, QA, staging, and production threshold requirements into the repo. I control those machines and can distribute that configuration to them as securely as I can the seeding public keys they should trust for the TUF repo.
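
Purely as an illustration of that idea (this is a hypothetical config format, not something notary or DDC reads as shown), the per-environment policy could be a small file distributed to each class of machine:

{
    "staging": {
        "require_signatures_from": ["devs", "ci", "qa", "staging"],
        "threshold": 2
    },
    "production": {
        "require_signatures_from": ["devs", "ci", "qa", "staging"],
        "threshold": 4
    }
}

Staging boxes would accept a target once any 2 of the 4 categories have signed; production boxes would insist on all 4.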

If the devs delegation is updated with new targets, nobody will trust them until the delegation has 2 signatures, and I think that remains an appropriate thresholding scope to maintain within the TUF repo as signed metadata. TAP3 may make sense, I'm still undecided, but I think it does a very poor job of solving @ruimarinho's particular use case here.

ecordell commented 7 years ago

@endophage You're correct, and the difference between client enforcement and publisher enforcement is the primary conceptual difference between TAP3 and TAP4. TAP4 addresses that use case as well as information hiding and trust pinning.

The idea would be to expose targets metadata under different repositories that contain the relevant thresholds and signatures. All the client needs to know is how to talk to the right repository, which would be convention-based and app-specific (e.g. ../trust/_tuf/CI, ../trust/_tuf/QA). The TAP also addresses ordering, fallbacks, and update behavior. It's kind of cool from an operational perspective: the dev team can have their own repos, the CI system can have its own, the QA team can have their own, and staging/prod can pick and choose which it cares about.

It looks like this is the only real mention of this behavior in the current TAP4:

This TAP also discusses how multiple repositories with separate roots of trust can be required to sign off on the same target, effectively creating an AND relation.

I definitely think this behavior should be explicitly called out as a use case (it was one of the main reasons we started working on it!)

Is the client part of DDC going to upstream into notary? It seems like that would be a great place to start with TAP4 support?

TAP3 is still useful IMO, but for the case where the repository owner wishes to enforce signature thresholds for all clients. TAP3 feels more like a feature for public images, TAP4 for private.

Aside: Choosing which signatures to care about could almost come for free when implementing full witnessing, depending on some of the implementation choices?

endophage commented 7 years ago

TAP4 is unnecessarily complicated for this use case. It makes some sense in an organizationally distributed scenario where, for example, you want the Django devs and PyPI to concur on a target, those groups being distinct with no requisite reporting channels between them. I also still don't think it makes sense as part of the TUF spec, as it defines a way to use a TUF repo or multiples thereof, not a structure of the repo itself. It falls under the same area as how to prioritize delegations: not defined in the spec, though possible options are mentioned; it should be a white paper/use case study.

In this case however, all actors sit within a single, non-distributed, organizational hierarchy and it's procedurally useful and desirable to allow them to all refer to the same repository name (and having some kind of manual/automatic repo naming scheme just opens you up for abuse via accidental or intentional name shadowing). It's also conceptually useful for user sanity to have a 1:1 mapping of artifact:tuf repo. In this case, the use of delegations within a single repo achieves the desired use case with much less complexity.

Keeping complexity down is critical to getting people to actually use security products. Why don't people widely use GPG? It's a pain in the ass, that's why. Quoting Ray Ozzie: "Complexity kills. It sucks the life out of developers, it makes products difficult to plan, build and test, it introduces security challenges, and it causes end-user and administrator frustration."

ruimarinho commented 7 years ago

I don't yet understand all the concepts behind TAP3/TAP4, but this line from @endophage sums up a possible workflow in my organization:

prod would require devs AND CI AND QA AND staging at 2/N, 1/N, 2/N, and 1/N respectively.

Except devs (or release managers) would sign off at the end of the cycle (CI -> QA -> staging). Keeping the process simple would allow release managers (not necessarily with a strong technical background) to sign off on a release to production.

ecordell commented 7 years ago

TAP4 is unnecessarily complicated for this use case.

I think this is a valid criticism. Here's my reasoning for why TAP4 is not actually so complicated:

I think what makes TAP4 feel complex is that it discusses multiple repos for something that seems like it could be handled with only one (for this use case). But multiple repos permit a single-repo abstraction on the server, and a single-repo abstraction on the client, while also actually allowing separate repos if desired.

I also still don't think it makes sense as part of the TUF spec as it defines a way to use a TUF repo or multiples thereof, not a structure of the repo itself.

On one hand, I agree that this can be implemented without modifying TUF directly. On the other hand, I think the above use cases are reasonable and common enough to be worth including in a spec: one (un?)stated goal of TUF is that you don't need a fancy client to transport metadata, and any compliant client can parse it. This is why I'm for codifying generic things like an AND relation, but against codifying things like how you represent a signing group as a repo.

In this case however, all actors sit within a single, non-distributed, organizational hierarchy and it's procedurally useful and desirable to allow them to all refer to the same repository name (and having some kind of manual/automatic repo naming scheme just opens you up for abuse via accidental or intentional name shadowing).

I wasn't suggesting users would interact with different repos directly or that the groups would be reflected in a GUN (no name shadowing issues); instead there would be different URLs to pull the different sets of metadata from. This would all be supported with simple tooling (just as notary wraps other types of interaction with TUF repositories). Just spitballing, I'm thinking along the lines of notary lookup library/ubuntu latest --groups=CI,QA or notary pin library/ubuntu.

It makes some sense in an organizationally distributed scenario where for example, you want the Django devs and PyPI to concur on a target, those groups being distinct with no requisite reporting channels between them.

I would note that large organizations can look very similar to this internally :)

I 100% agree with you about keeping complexity down - I happen to think TAP4 does a decent job of that. Am I convincing or do you think there's a better way?

I should be clear, though: I think that a simple way for the client to decide which signatures it cares about is definitely needed for witnessing anyway. But I like the idea of keeping witnessing cordoned off in a "you maybe shouldn't trust all of this just yet" area of notary, and using TAP4's map file for "here's the signatures you should care about to decide if the targets are valid".

endophage commented 7 years ago

  • It keeps the TUF mechanism the same; TAP4 is essentially a wrapper around existing TUF

This is not unique compared to the way we've done it with delegations. We've simply implemented a prioritized walking of delegations with a consensus mechanism. The TUF spec explicitly does not define how to find a target, only how to get the complete repo.

  • It supports this use case (multiple signing groups, client chooses validation) while simultaneously solving others:
    • splitting trust across repositories

Splitting trust is unnecessary complexity for this use case. Just because TAP4 is one solution doesn't mean it's the right one.

  • pinning existing keys (map file can replace trustPinConfig)

Just because it can replace something doesn't mean it should. It's also worth noting that you'd only want to pin keys once, not redefine the pinning redundantly if you used the same repo, say PyPI, in multiple consensus configurations.

  • hiding targets from unauthorized users

Contradicted by your next bullet:

  • For this use case, if desired, the root, snapshot, and targets files could be re-used between repos, so the server changes would be minimal (mainly picking url->group repo mappings)

Not in tandem with hiding targets. It's necessary for security that a client can download all delegations registered in the snapshot. Failure to download a delegation that's recorded in the snapshot, and still signed in as a valid delegation in the targets tree, may indicate an attack attempting to prevent you from a) seeing the highest priority version of the target, or b) seeing that there is a new version of the target.

  • The client gets a config file that explicitly states where signatures are coming from, which means debugging is easier and creating a duplicate client elsewhere is easy.

This is only necessary because you're doing multi-repo. You can't be self-referential in justifying a feature. TUF already has an expectation that you know where a signature is coming from through 2 implicit properties:

  1. There is necessarily a location that the repo is being read from; an API, local disk, etc...
  2. That location necessarily provides some kind of naming convention for the repo; a unique URL path, a file path, etc...

  • Because it's a wrapper around the client process, well-tested existing TUF code can remain unchanged.

This is a redundant restatement of your first bullet in a different form.

I also think it's really important to note that all your examples of how it's not so complex are based on consumers. It's horribly complex for the administrators and, depending on how much of your single-repo abstraction actually gets built, for us, the developers; that complexity is going to result in unintentional bugs and, worse, potential security holes. Administrators are really the problem though. PyPI (by way of example) doesn't have a problem that nobody is checking signatures; they have a problem that nobody is producing signatures (only ~5% of all packages are signed, and afaik none of the major ones are).

If you want to convince me, get users. We've released signing policy in a production version of DDC and people are using it. So far, nobody is asking for structural feature changes. We solved an actual use case.

At the moment TAP4 solves a theoretical use case. It's easy to come up with features for products. It's brutally difficult to identify which are necessary, and even then, it can take multiple iterations to get the implementation right. TAP4 is putting the cart before the horse. There should be some degree of experimentation before any attempt at formalization is made. The core TUF spec has benefitted from years of experimentation across the myriad of package management and signing systems that already exist, and has explicitly built upon the work of others (thandy).

I'd suggest that before even contemplating TAP4 in its entirety, it could be broken into smaller chunks of functionality which we do know would ease pain points.

Why don't we start with those, which form the basis of the functionality of TAP4? With them in place, it becomes relatively easy to include an experiment (if people are actually asking for it) on consensus, allowing people to configure multiple location names for a (wildcarded) target.

p.s. if you've ever worked at a company where functions along the same development pipeline have no accountability to each other, that's a very dysfunctional company. The QA team shouldn't get chewed out for not testing something if dev never delivered it.

ecordell commented 7 years ago

I'd like to attempt to summarize your concerns (please correct me if I'm off-base) and address them in a particular order.

Concern 1

TAP4 should not include checks against multiple repositories and should be broken down into smaller pieces.

I'm not against this, but it's worth mentioning that the original ideas for TAP4 were already broken down into TAP4 and TAP5. TAP5 requires metadata format changes, which is why I left it out of the above discussion of what TAP4 "can" do.

Concern 2

TAP4 is operationally complex for admins (both of individual repositories and of TUF servers).

Earlier I restricted the discussion to TAP4. I'd like to paint a nicer picture of what is possible for this scenario when using TAP4 in conjunction with TAP5:

{
  "repositories": {
    "library/ubuntu::ci": ["https://docker.io/"],
    "library/ubuntu::qa": ["https://docker.io/"],
    "DockerHub":   ["https://docker.io/"]
  },
  "mapping": [
    {
      "paths":        ["library/ubuntu*"],
      "repositories": ["library/ubuntu::ci", "library/ubuntu::qa"],
      "terminating":  true,
    },
    {
      "paths":        ["*"],
      "repositories": ["DockerHub"]
    }
  ]
}

{
  "signed": {
    "roles": {
      "root": { .. },
      "timestamp": {...},
      "snapshot": {...},
      "targets": {
         "URLs": ["https://docker.io/library/ubuntu/_trust/tuf/targets/ci.json"],
         "keyids": [...],
        ...
      },
      ...
    },
    ...
  },
  ...
}

A client with this configuration will pull metadata from docker hub, but verify signatures against one delegation at a time (only those specifically requested by the user). This specific example instructs a client to verify the CI and QA roles, held in separate delegations, for library/ubuntu.

No modifications to the notary server are required whatsoever for this to work; the only changes are those of TAP4 and TAP5 in the client, which are straightforward to implement.

Concern 3

No one needs this and/or it can be solved without modification of TUF.

Docker and Notary are not the only users of TUF. In particular, the multiple-repository checks are needed by the Uptane project.

Given that the TAPs could simplify some of what I've been working on, they support Uptane requirements, they could consolidate two other features in Notary into one spec-ed one, they offer a simple solution to @ruimarinho's problems, and they could be applied successfully to things like the maximum security model described for PyPI, I'm personally convinced that they are of general utility to TUF users. I'd emphasize that TAP4 fills the gap of the "AND" relation in TUF, and that the "OR" relation is already in the spec (delegation walking; it is also my understanding that the currently unspecified aspects of that were never intended to remain that way).

...

I apologize if it comes off like I'm trying to be obstinate here (I'm not!). I didn't expect there to be objections on grounds of complexity (backwards compatibility, absolutely, but I think both can reasonably be made backwards compatible). I remember thinking TUF seemed overly complex on first reading, but when it comes down to it, it's "just" some keys signing off on some other keys, with some decision-making around which ones to care about, which is also what these TAPs concern themselves with.

Again, please point out if I've misrepresented your opinions, I wanted to focus on what it seemed like the core of your concern was. And if I'm just missing it entirely I'm happy to jump into a realtime discussion to hopefully understand better.

endophage commented 7 years ago

I disagree with the fundamental assumption that TAP4 is a proven useful feature, feel that there should be some actual code and in-the-wild use to hone a concept before even thinking about formalizing it, and finally, still believe it does not belong in the core TUF spec (I consider it appropriate to a white paper or use case study), as it addresses client business logic on how to use the data TUF has securely delivered to you.

I think it is easy to academically justify most proposed features in any software, and to present very reasonable scenarios in which they might be used. Everything you say regarding TAP4 and TAP5 is logical and factually correct within the scenarios you present. I'm making a value statement. I know, from my own experience, that our familiarity with TUF means we less easily see the complexity in it. TAP4 is a very complex feature, and while potential users may say "sure, that sounds interesting", I strongly believe they will find it difficult to use and will eventually give up on it, because it's "cool", not "critical".

I'd be interested to know why exactly the Uptane project needs TAP4. I don't immediately see a meaningful use case. If the Uptane use case only requires that target names can be mapped to single locations/repos, that is substantially less than the full proposal described in TAP4, and represents a relatively trivial piece of client configuration, not significant enough, in my opinion, for a TAP.