From @philips on October 3, 2018 21:56
Also related @ajeddeloh has been working on a YubiKey HSM backed signing server: https://github.com/coreos/fero
From @ajeddeloh on October 3, 2018 22:07
Credit where it's due: @csssuf did the implementation, I just got it set up and deployed. It requires a pair of servers with a yubiHSM loaded with the secrets. If there's interest in using it, I can help with setup. It allows for setting thresholds such that you can specify "You need 100 points of signatures" where different users can have different weights for different secrets. I.e. you can set it up so you need 3 people to sign a release.
From @philips on October 4, 2018 23:07
@dims what is signing the apt package releases? https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
@philips
The Google Cloud Packages Automatic Signing Key ID is BA07F4FB and belongs to gc-team@google.com, so I think it's set up in the Anago/GCB release harness that we cannot see (part of the stuff we need to bring out from behind the screen when we move to CNCF).
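For what it's worth, anyone can inspect that key locally; a hedged sketch, assuming the key file is still published at the URL from the install docs:

```
# Fetch the published apt signing key and display it without importing it
# (gpg --show-keys requires GnuPG >= 2.1.23)
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --show-keys
```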
Thanks, Dims
/sig release
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle rotten
We have to start doing this stuff this year.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/remove-lifecycle stale
/assign @justaugustus
/area release-eng /priority important-soon /milestone v1.16
I have been thinking a lot about this problem over the last few months and would like to propose an alternative starting point to trying to do public key signing.
Instead, I think we should add cryptographic digests for the files released in Kubernetes. Commonly called SHA256SUMS files, these can be generated easily with the sha256sum tool available on most systems:

```
sha256sum * > SHA256SUMS
```
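Consumers can then verify their downloads against the published file, e.g.:

```
# Verify downloaded artifacts against the published checksums;
# --ignore-missing skips entries for files that weren't downloaded
# (requires GNU coreutils >= 8.25)
sha256sum --ignore-missing --check SHA256SUMS
```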
Alternatively, some release automation tools can generate these files for you.
Besides being a useful practice for download verification, I would also like to use the SHA256SUMS files as a way to ensure the releases aren't tampered with and to track when they are modified. There is a tool called rget that I have been building that can do this if you provide SHA256SUMS for your releases.
The rget tool also has a subcommand that makes it easy to create SHA256SUMS for existing releases; just run:

```
rget github publish-release-sums https://github.com/etcd-io/etcd/releases/tag/v3.0.0
```
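Roughly, end users would then fetch and verify an artifact in one step by pointing rget at the asset URL (the URL below is illustrative):

```
# Download a release asset and verify it against the SHA256SUMS
# recorded in the Certificate Transparency log (illustrative URL)
rget https://github.com/etcd-io/etcd/releases/download/v3.0.0/etcd-v3.0.0-linux-amd64.tar.gz
```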
I would be happy to discuss this in SIG Release or any other forum to see what people think. But simply adding SHA256SUMS files is super low risk.
I have started a similar discussion in etcd as well: https://github.com/etcd-io/maintainers/issues/16
(Moving the rget discussion to https://github.com/kubernetes/release/issues/850.)
We're going to continue investigating this in 1.17. Some additional convo here: https://kubernetes.slack.com/archives/CCK68P2Q2/p1566403246105200
Mentioned there was:

> GCP-based HSM seems like a good path to consider as we start to stand up the community-based release prod infra.
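If we go that route, a rough sketch of signing a checksums file with a Cloud KMS asymmetric key (all resource names below are hypothetical placeholders):

```
# Sign the SHA256SUMS file with a Cloud KMS-held asymmetric key
# (keyring/key names are hypothetical)
gcloud kms asymmetric-sign \
  --location=global --keyring=release-keyring --key=release-key \
  --version=1 --digest-algorithm=sha256 \
  --input-file=SHA256SUMS --signature-file=SHA256SUMS.sig
```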
/milestone v1.17 /kind feature
Bug triage for 1.17 here. This issue has been open for a significant amount of time, and since it is tagged for the 1.17 milestone, we want to let you know that the 1.17 code freeze is coming in less than one month, on Nov. 14th. Will this issue be resolved before then?
Migrated to k/release. @josiahbjorgaard -- You can drop this from your tracking sheet.
/sig release /area release-eng /milestone v1.17 /priority important-soon /kind feature
Also, linking:
As mentioned in a comment above, a modern alternative to PGP infrastructure is signify (and the related minisign), used by OpenBSD.
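A minimal sketch of that flow with signify (key names are hypothetical):

```
# One-time: generate a keypair
signify -G -p release.pub -s release.sec

# Sign the checksum file (writes SHA256SUMS.sig)
signify -S -s release.sec -m SHA256SUMS

# Users verify with the published public key
signify -V -p release.pub -m SHA256SUMS
```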
Hi everyone,
Out of curiosity, how are k8s release artifacts signed now?
Also, an open-source alternative is The Update Framework (TUF) together with in-toto, both CNCF projects. I'm involved in both and am happy to discuss how to integrate them with k8s. The Datadog Agent integrations use both to make sure that attacks anywhere between developers and end-users can be detected.
Cc @SantiagoTorres
big +1, @trishankatdatadog. I think at least signing the release tags on the kubernetes/kubernetes repo would be low-hanging fruit to bump the security stance of the releng process:
```
santiago at .../kubernetes ✗ git tag --verify v1.17.2
object 59603c6e503c87169aea6106f57b9f242f64df89
type commit
tag v1.17.2
tagger Anago GCB <nobody@k8s.io> 1579389703 +0000

Kubernetes official release v1.17.2
error: no signature found
```
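For comparison, a signed-tag flow would look roughly like this, assuming the tagger has a signing key configured:

```
# Create a GPG-signed tag (hypothetical invocation by the release tooling)
git tag -s v1.17.2 -m "Kubernetes official release v1.17.2"

# Anyone can then verify it after importing the release public key
git tag --verify v1.17.2
```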
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
So, I take it this is not a concern?
As mentioned earlier, @SantiagoTorres and I are happy to consult on how to use the TUF and in-toto CNCF sibling projects to secure the building and distribution of k8s binaries...
@justaugustus Is there any update on this issue? Can we reassign it? I believe that we should move it away from the 1.18 milestone, but I don't have permission to do so.
/remove-lifecycle stale
We should be able to make some incremental progress in 1.19. @SantiagoTorres and @trishankatdatadog -- will follow up with you later in the cycle.
> So, I take it this is not a concern?
Santiago, to answer your question, this is absolutely a concern to us, but there are only so many hours in the day. Our primary focus for the past few cycles has been refactoring the release tooling (https://github.com/kubernetes/release/issues/918, https://github.com/kubernetes/release/issues/852) (to enable further improvements) and migrating to community-owned infrastructure (https://github.com/kubernetes/release/issues/911, https://github.com/kubernetes/release/issues/270).
/milestone v1.19
> @SantiagoTorres and @trishankatdatadog -- will follow up with you later in the cycle.
Understood: these things take time. We are happy to help, just let us know when you are ready.
Also, we are working with @joshuagl at VMware to sign all Python packages on PyPI with TUF 🙂
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
This still seems relevant. /remove-lifecycle stale
Yes please, fetching all hashes for new releases is a pain. Also, who tells me that they are correct?
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close
@fejta-bot: Closing this issue.
/reopen
/remove-lifecycle rotten
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
We're probably gonna go sigstore, no? @dlorenc @justaugustus
> We're probably gonna go sigstore, no? @dlorenc @justaugustus
Quite possibly! Would love if you and Dan had some time to pop by a RelEng meeting to discuss :)
cc: @kubernetes/release-engineering
> Quite possibly! Would love if you and Dan had some time to pop by a RelEng meeting to discuss :)
Happy to join anytime, although Dan and @lukehinds can advise better here :)
I am more than happy to jump on; is the topic marked for discussion on any particular date?
Opened a separate issue to discuss signing artifacts via cosign: https://github.com/kubernetes/release/issues/2227
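For reference, a rough sketch of what blob signing with cosign could look like (key file names are hypothetical):

```
# Sign the checksums file with a cosign key pair
cosign sign-blob --key cosign.key SHA256SUMS > SHA256SUMS.sig

# Verify against the published public key
cosign verify-blob --key cosign.pub --signature SHA256SUMS.sig SHA256SUMS
```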
This was also asked for in https://github.com/kubernetes/website/issues/30149
The initial draft of the signing KEP is proposed in https://github.com/kubernetes/enhancements/pull/3061
Closing in favor of https://github.com/kubernetes/enhancements/issues/3031 and https://github.com/kubernetes/release/issues/2227. /close
@justaugustus: Closing this issue.
From @dims on September 21, 2018 20:58
I'd like us to sign release artifacts using GPG keys. Here's how other foundations do it:
Ideally we would build a web of trust including the patch/branch managers and use the keys when building the artifacts, at the same time as we generate the md5 and sha1/512 manifests. We could have a signing party at KubeCon to kick off the web of trust too!
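As an illustration (not the actual release tooling), a detached GPG signature over a checksum manifest would look like:

```
# Produce a detached, ASCII-armored signature (writes SHA256SUMS.asc)
gpg --armor --detach-sign SHA256SUMS

# Users verify after importing the release public key
gpg --verify SHA256SUMS.asc SHA256SUMS
```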
Thanks, Dims
Copied from original issue: kubernetes/release#636