Closed: justaugustus closed this issue 2 years ago
@justaugustus: The label(s) area/release-eng, area/artifacts
cannot be applied, because the repository doesn't have them
IMO having a dedicated repo for this is a win, even if just for better notifications
> even if just for better notifications
My kingdom for more focused notifications!
Approval request sent to SIG Arch ML: https://groups.google.com/g/kubernetes-sig-architecture/c/3HsF1zF0aP4
+1 in principle, will look deeper early next week
To confirm my understanding - is the idea that we will move the image promoter manifests that are currently in kubernetes/k8s.io into this new repo? (e.g. https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images ) Or will this be reserved only for "core" artifacts, with "kubernetes-sigs" artifacts managed elsewhere?
@justinsb -- so the repo would be "core" in the sense that it's in the kubernetes
GH org, but it would contain the image/file configs for all staging projects.
Everything under:
I agree this would be better for notifications. However, I am concerned the split of artifact project ACL and provisioning in k8s.io may lead to things falling stale. Can you articulate more of the intended cross-repo workflow and validation?
> I agree this would be better for notifications. However, I am concerned the split of artifact project ACL and provisioning in k8s.io may lead to things falling stale. Can you articulate more of the intended cross-repo workflow and validation?
Hey @spiffxp, sorry for missing your comment a while back. My understanding of the process for creating staging projects as it exists today (having gone through it a few times myself, with custom use cases for RelEng) is:
While those happen in the same PR today, we could disambiguate concerns...
Example workflow:
You mentioned staleness, which suggests validation, which suggests tooling. However, that doesn't exist today and I don't necessarily think we need to block on it.
So what I'm thinking is:
Once a PR is submitted to k/artifacts to create a new promotion directory, a presubmit checks whether the IAM group exists (store a listing of groups in a yaml file? we already do this). If the group doesn't exist, the PR doesn't merge (and we can have the bot post a warning linking back to the docs to guide the user).
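To make the proposal concrete, here's a minimal sketch of what that presubmit check could look like. All names here are illustrative assumptions, not existing tooling: the check takes the promotion directories a PR adds (each declaring an owning group) plus the set of groups already provisioned, and fails the PR with actionable messages when a group is missing.

```python
# Hypothetical presubmit check: block a promotion-directory PR unless the
# IAM group that owns the staging project already exists. The group listing
# is assumed to be pre-parsed from a yaml file (as discussed above); the
# function and file names are illustrative, not real k8s-infra tooling.

def check_promotion_pr(new_dirs, known_groups):
    """Return (ok, errors) for a PR adding promotion directories.

    new_dirs: mapping of staging-project name -> owning IAM group name,
              as declared in the PR's new promotion directories.
    known_groups: set of group names already provisioned.
    """
    errors = []
    for project, group in sorted(new_dirs.items()):
        if group not in known_groups:
            errors.append(
                f"{project}: IAM group {group!r} does not exist; "
                "please follow the staging-project docs first."
            )
    return (not errors, errors)


known = {"k8s-infra-staging-releng", "k8s-infra-staging-kops"}

# Group exists: presubmit passes, PR can merge.
ok, msgs = check_promotion_pr({"releng": "k8s-infra-staging-releng"}, known)
assert ok and msgs == []

# Group missing: presubmit fails and the bot surfaces the errors.
ok, msgs = check_promotion_pr({"foo": "k8s-infra-staging-foo"}, known)
assert not ok and len(msgs) == 1
```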
Thoughts?
Slack discussion thread: https://kubernetes.slack.com/archives/CJH2GBF7Y/p1612214966104300
Requested name for new repository https://github.com/justaugustus/artifacts
lol? (presumably just the "artifacts" bit? or is this a takeover?)
I think notifications will continue to be bad, FWIW (seeing as everyone involved in artifact management will still be in the one repo, and that's easily the most active thing), and every move like this kills link juice sooo ... 🤷‍♂️
/milestone v1.21
> Thoughts?
One nit on the proposed workflow: I would prefer treating the service as the source of truth for gitops-driven things, especially those that are manually actuated. So, k8s.gcr.io jobs should check whether projects/images actually exist, rather than trusting their presence in a .yaml file.
Workflow otherwise SGTM. Block manifest merge on "is manifest valid", which includes existence of source/dest artifacts and appropriate IAM.
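A small sketch of that "service as source of truth" idea, under stated assumptions: rather than trusting a checked-in listing, the validator asks the live registry whether each source image exists before the manifest can merge. The registry probe is injected so a real implementation could call the GCR/AR tags API; every name here is illustrative.

```python
# Hypothetical manifest validation that trusts the service, not a .yaml
# file: each (image ref, digest) pair is probed against the live registry
# via an injected callable. Names are illustrative, not real promoter code.

def validate_manifest(entries, image_exists):
    """Return the entries that do NOT exist in the registry.

    entries: iterable of (source_ref, digest) pairs from the manifest.
    image_exists: callable (ref, digest) -> bool probing the registry.
    An empty result means the manifest is valid and may merge.
    """
    return [
        (ref, digest)
        for ref, digest in entries
        if not image_exists(ref, digest)
    ]


# Fake probe standing in for a live registry call.
live = {("gcr.io/k8s-staging-releng/foo", "sha256:abc")}
missing = validate_manifest(
    [("gcr.io/k8s-staging-releng/foo", "sha256:abc"),
     ("gcr.io/k8s-staging-releng/bar", "sha256:def")],
    lambda ref, digest: (ref, digest) in live,
)
# Only the image absent from the registry blocks the merge.
assert missing == [("gcr.io/k8s-staging-releng/bar", "sha256:def")]
```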
> You mentioned staleness, which suggests validation, which suggests tooling. However, that doesn't exist today and I don't necessarily think we need to block on it.
I think it's fair to ensure we're not locking ourselves into today's no-tooling state forever. I haven't noticed a proposal or design doc, so I wanted some reassurance that this had been thought through. That's been met, though I'd still appreciate a pointer to a doc if I've missed it.
/milestone v1.22 I don't think we're going to land this in v1.21
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/milestone clear
/remove-lifecycle rotten ref: https://github.com/kubernetes/community/pull/5928
/priority important-longterm
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
New Repo, Staging Repo, or migrate existing
Migrate existing: https://github.com/justaugustus/artifacts
Requested name for new repository
https://github.com/kubernetes/artifacts
Which Organization should it reside
kubernetes
If not a staging repo, who should have admin access
ref: https://github.com/justaugustus/artifacts/blob/ba6bd79ce72ddcca7fb60c89e8e2612945761989/OWNERS_ALIASES#L5-L13
If not a staging repo, who should have write access
ref: https://github.com/justaugustus/artifacts/blob/ba6bd79ce72ddcca7fb60c89e8e2612945761989/OWNERS_ALIASES#L14-L22
If not a staging repo, who should be listed as approvers in OWNERS
Already configured: https://github.com/justaugustus/artifacts/pull/1
If not a staging repo, who should be listed in SECURITY_CONTACTS
Already configured: https://github.com/justaugustus/artifacts/pull/1
What should the repo description be
Already set:
Kubernetes artifact promotion configurations
What SIG and subproject does this fall under in sigs.yaml
This is part of the @kubernetes/release-engineering subproject of @kubernetes/sig-release.
Approvals
This is a core repository which will require approval from @kubernetes/sig-architecture-leads.
I'm opening this to start the approvals process and will follow up with a note to SIG Arch's ML.
Additional context for request
Initially suggested by @thockin in https://kubernetes.slack.com/archives/CCK68P2Q2/p1603987072094400.
tl;dr of that conversation: as we wrap up the "first phase" of the image promotion process (https://github.com/kubernetes/k8s.io/issues/157), we can consider "the keys" transferred once senior Release Managers can manage artifact promotion and handle incidents.
As part of that, we should shift the artifact promotion configurations over to Release Engineering and establish a new repo for that.
For SIG Arch: /assign @dims @johnbelamaric @derekwaynecarr
For GH Administration: /assign @nikhita @mrbobbytables
cc: @kubernetes/sig-release-leads
/sig release architecture
/area release-eng artifacts
/wg k8s-infra