Closed. dims closed this issue 2 years ago.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/remove-lifecycle stale
I guess one last thing I'd like to ensure before we call this done is make sure the promoter jobs are running in community-owned infra (re: https://github.com/kubernetes/k8s.io/issues/157#issuecomment-465755277)
Then I think this will be of interest to you.
At the moment, the Image Promoter jobs for the Kubernetes E2E test images use a few Windows build nodes to build the Windows images. Those nodes are in Azure on the CNCF subscription, the same one used to run the sig-windows test jobs.
There was a suggestion to stop relying on them and instead build all the images on the Image Builder node. I have managed to do that with docker buildx and no Windows build nodes, and all the Conformance tests pass with the images built with docker buildx. You can see the proposed PR for this here: [1]. However, a few compromises had to be made for that to be possible; please read the PR description for more information. Approval or feedback on it would be greatly appreciated. In any case, that PR depends on this PR [2], which changes the Windows base image from servercore to nanoserver, which is 10x smaller. The base image switch was a success, and the nanoserver-based images are currently used by all the sig-windows test jobs, which are passing: [3]
[1] https://github.com/kubernetes/kubernetes/pull/93889
[2] https://github.com/kubernetes/kubernetes/pull/89425
[3] https://testgrid.k8s.io/sig-windows-sac#aks-engine-azure-windows-1909-master
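For anyone unfamiliar with the buildx approach above, here is a minimal sketch of what the cross-build invocation looks like. All names (registry, image, tag) are illustrative placeholders, not the project's actual values, and the command is printed rather than executed so the sketch stays self-contained. Note that cross-building Windows images from a Linux node generally only works for Dockerfiles whose Windows stages avoid RUN steps (COPY-only), which is one of the compromises mentioned above.

```shell
#!/bin/sh
# Hypothetical names; the real staging registry and image names differ.
REGISTRY="gcr.io/k8s-staging-example"
IMAGE="example-test-image"
TAG="v1.0"

# A single buildx invocation produces one manifest list covering both the
# Linux and the (nanoserver-based) Windows variants, so no Windows build
# nodes are needed. --push is required because multi-platform manifest
# lists cannot be loaded into the local docker image store.
BUILD_CMD="docker buildx build --platform linux/amd64,windows/amd64 \
--tag ${REGISTRY}/${IMAGE}:${TAG} --push ."

# Printed, not executed, to keep this sketch runnable anywhere.
echo "$BUILD_CMD"
```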
From discussion during the most recent meeting, these are the items preventing us from closing this out:
/assign @justaugustus @thockin Seems like there was discussion in #wg-k8s-infra about other things to finalize this (from https://kubernetes.slack.com/archives/CCK68P2Q2/p1603988393101900 onward)
Please update this issue accordingly
/sig release /area release-eng
@justaugustus Ball is in your court, IMO. I'd love to do a test as CIP exists, make sure the runbook and whatever docs are up to snuff.
I've been poking around and I just wanted to come back and checklist-ify what @spiffxp mentioned above:
Everything below I think is additive and not strictly required to close this issue out.
As I learn more, I'll add a SIG Release tracking issue with additional things we may need to think about like:
From @thockin in #wg-k8s-infra:
Hi all! Now that promoter is humming, do we think it would be worthwhile to consider moving the k8s.gcr.io directory to its own repo? Would make it easier to segment email and stuff
cc: @kubernetes/release-engineering
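For context on what a "k8s.gcr.io directory" entry contains: the promoter consumes per-project manifests in the container image promoter (cip/kpromo) format, roughly like the sketch below. Registry names, the service account, the digest, and the tag are all illustrative placeholders, not real values from the repo.

```yaml
# Illustrative promoter manifest (all values are placeholders).
registries:
- name: gcr.io/k8s-staging-example   # hypothetical source staging registry
  src: true
- name: k8s.gcr.io/example           # hypothetical promotion destination
  service-account: example-promoter@example-project.iam.gserviceaccount.com
images:
- name: example-image
  dmap:
    # digest -> list of tags to promote under that digest
    "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v1.0.0"]
```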
Another task that fell out of review in https://github.com/kubernetes/k8s.io/pull/1392:
Added to the checklist above.
I'm assuming this remains unfinished, given that the issues linked in https://github.com/kubernetes/k8s.io/issues/157#issuecomment-721397190 remain open.
/milestone v1.21
/milestone v1.22
/milestone v1.23 I'm moving to Blocked as I'm not sure what the status of this is anymore
/milestone clear Clearing from milestone, I'm not sure what remains to be done
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
I think we are done with this. Image promotion is now part of the release process and is used by the different SIGs and subprojects.
Thank you everyone for the work done! /close
@ameukam: Closing this issue.
Split from https://github.com/kubernetes/k8s.io/issues/153 (see that for some context)
cc @javier-b-perez @mkumatag @listx