git-wotr / spec

Securely review git commits

Configuration Format #5

Open lrvick opened 5 years ago

lrvick commented 5 years ago

I don't have a proposal for this other than what I did for https://github.com/hashbang/git-signatures, which is a .gitsigners file that declares all the trusted keys for the repo so that a tool can validate against them.

While we are making standards, it might be a good idea to decide this too.

The current solution I am using is a ".gitsigners" file. Only the .gitsigners file currently in master is trusted (so adding a new trusted signer must go through the usual code review and signing process).

Example: https://github.com/hashbang/git-signatures/blob/master/.gitsigners

This is a stupidly simple setup: just one whitelisted signing key per line. It has no concept of a recommended number of signatures expected for trust on this repo, or of different signing roles such as maintainer, ci, etc. We have support in git-signatures today for validating against different trustdbs, but at a minimum it would be helpful to have an indication of how to build those trustdbs on the fly from groupings of public keys in a configuration file in the root of the repo.
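For illustration, such a file is nothing more than a flat list, one fingerprint per line (the fingerprints below are placeholders, not real keys):

```
0123456789ABCDEF0123456789ABCDEF01234567
89ABCDEF0123456789ABCDEF0123456789ABCDEF
```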

Suddenly this becomes a JSON-vs-something-else debate again, since we may eventually want to include more data about signers, or default verification modes expected for a given repo. We could also take hints from the .gitmodules file and format it like that (though it is harder to parse).
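As a rough sketch of the .gitmodules-style option (every name, role, fingerprint, and key name below is made up for illustration, not a proposal):

```ini
[signer "alice"]
	key = 0123456789ABCDEF0123456789ABCDEF01234567
	role = maintainer
[signer "ci-1"]
	key = 89ABCDEF0123456789ABCDEF0123456789ABCDEF
	role = ci
[verification]
	minSignatures = 2
```

One upside of that syntax is that git can already read it, e.g. `git config --file <path> --get-regexp '^signer\.'`, so "harder to parse" may mostly apply to non-git tooling.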

Would love others' thoughts on this in general!

Ekleog commented 5 years ago

In my (current) opinion the list of keys to be trusted should not necessarily go into the repository, but be a per-(user,repo) configuration, so that users can decide to trust reviewers not approved by upstream and to distrust reviewers approved by upstream.
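For example, such a per-(user,repo) trust override could piggyback on the clone-local git config (the `wotr.*` key names below are purely hypothetical; only `git config --local` itself is a real mechanism):

```sh
# trust an extra reviewer that upstream does not list
git config --local --add wotr.trustedKey "0123456789ABCDEF0123456789ABCDEF01234567"
# distrust a reviewer that upstream does list
git config --local --add wotr.distrustedKey "89ABCDEF0123456789ABCDEF0123456789ABCDEF"
```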

Now, we do agree there is a need for a .gitsigners-equivalent format: we do want upstreams to be able to say who the trusted reviewers are supposed to be.

It has no concept of a recommended number of signatures expected for trust on this repo, or of different signing roles such as maintainer, ci, etc

I don't think we should split signing roles between maintainers and CI, but that's also because I think CI should not sign anyway, as they do not perform any code review (and the point of review signatures is… well… to review)

In my opinion, the things we need to be able to configure are:

I think before we can agree on this one we should first decide on https://github.com/git-wotr/spec/issues/9, though, because it'll dictate which data go into the configuration file(s) :)

lrvick commented 5 years ago

I don't think we should split signing roles between maintainers and CI, but that's also because I think CI should not sign anyway, as they do not perform any code review (and the point of review signatures is… well… to review)

Most orgs I am aware of use CI systems as part of the review. Normally they are the -first- reviewers in effect. If the bots don't yet approve the code, then humans don't bother looking at it until they do. They run extended end-to-end test suites, monkey testing, deterministic build testing, checks for known CVEs, etc. For those of us that have to follow strict guidelines like SOC2, being able to show auditors evidence that a given battery of tests was run against a given set of code before every release is pretty valuable.

My particular use case would be production systems configured to refuse to run code unless they can validate signatures indicating the code has been through all of the usual checks and balances: e.g. signed by an engineer, 2 CI servers, 1 reviewer, and 1 member of the release engineering team, OR signed by 2 members of the production engineering team (emergency hotfix).
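Purely to illustrate the shape of such a policy (every field name and fingerprint below is made up, and this is not a format proposal), it could be a set of roles plus alternative rules, where satisfying any one rule is enough:

```json
{
  "roles": {
    "engineer":   ["<fingerprint>", "<fingerprint>"],
    "ci":         ["<fingerprint>", "<fingerprint>"],
    "reviewer":   ["<fingerprint>"],
    "release":    ["<fingerprint>"],
    "production": ["<fingerprint>", "<fingerprint>"]
  },
  "rules": [
    { "engineer": 1, "ci": 2, "reviewer": 1, "release": 1 },
    { "production": 2 }
  ]
}
```

The second rule is the emergency-hotfix path.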

Production systems could always obey the verification rules in the previously cached version of a given repo's configuration file, and refuse to accept new changes to it unless they are signed according to the previous rules.
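A minimal sketch of that flow on the production side, assuming some hypothetical `verify_against_policy` helper and an equally hypothetical `.signers` file name (paths are placeholders too):

```sh
set -euo pipefail
cd /srv/app/repo

CACHED_POLICY=/etc/wotr/policy.cached   # last policy that already passed verification

git fetch origin master
NEW=$(git rev-parse FETCH_HEAD)

# check the incoming commit against the *cached* rules,
# not the rules shipped inside the incoming tree
verify_against_policy "$CACHED_POLICY" "$NEW"

# only once that succeeds, accept the new tree and refresh the cached policy
git merge --ff-only FETCH_HEAD
git show "$NEW:.signers" > "$CACHED_POLICY"
```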

Ekleog commented 5 years ago

Most orgs I am aware of use CI systems as part of the review. Normally they are the -first- reviewers in effect.

Well, all big projects I currently have in mind and for which I know the procedure don't:

being able to show auditors evidence that a given battery of tests was run against a given set of code before every release is pretty valuable.

Well, just having CI run the tests during build and refusing to sign the artifact if tests don't pass would handle this case correctly.
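For instance (a sketch only; the key ID, build commands, and artifact name are placeholders), the CI job could boil down to:

```sh
set -euo pipefail

make test    # any failing test aborts here, so no signature is ever produced
make dist    # builds artifact.tar.gz (placeholder name)

# detached, ASCII-armored signature over the artifact
gpg --local-user "$CI_SIGNING_KEY" --armor --detach-sign artifact.tar.gz
```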

Now, you're right, my previous message was a bit strong. There are advantages to having CI servers sign commits: for instance, knowing at which commits the test suite was run, even when it's not on a release. This could be done using a state commit signature (as the CI would be testing the whole state, not only a diff).

Also, now that you've added your example (quoted just below), I understand what you meant by splitting roles between maintainers and CIs, as it also includes other kinds of people, and I no longer disagree with it :)

signed by an engineer, 2 CI servers, 1 reviewer, and 1 member of the release engineering team, OR signed by 2 members of the production engineering team (emergency hotfix)

Can you share thoughts on how to do that on #9? It sounds like the current scheme defined there wouldn't work for your use case.

I suggest most of this discussion continues on #9, and this issue becomes only for the actual on-disk format of configuration files (which is thus currently blocked on #9). Would you agree with that?