sdake opened this issue 7 years ago
What does gating mean?
There were several issues with how the packaging was handled in 1.6.0. RPM treats release 0 as less than release 0.alpha, so final releases should use release 1. Rather than fixing that, the solution used was to delete the alpha RPM. That doesn't work for several reasons:
1.5.x was stable, 1.6.0 was just released, and as is common, the x.0 release was buggy. So operators need a way to pin back to 1.5.x until 1.6.1 is released.
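(To illustrate both points, here is a sketch assuming rpmdevtools and the yum versionlock plugin are available; the versions below are examples, not the exact published artifacts.)
# RPM compares release "0" as *older* than "0.alpha", so a final
# build tagged -0 loses to its own alpha:
$ rpmdev-vercmp 1.6.0-0 1.6.0-0.alpha.1
# reports 1.6.0-0.alpha.1 as the newer EVR
# Pinning back to the last good 1.5.x until 1.6.1 ships
# (example NVRs; adjust to what the repo actually carries):
$ yum downgrade kubelet-1.5.4-1
$ yum install yum-plugin-versionlock
$ yum versionlock add kubelet-1.5.4-1.*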
@mikedanese Some kind of automated job that blocks the release until the artifacts are tested for functionality.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
I think this was addressed in the last few releases, wasn't it?
$ repoquery --disablerepo='*' --enablerepo='kubernetes' --show-duplicates -q 'kubelet*'
kubelet-0:1.5.4-0.x86_64
kubelet-0:1.5.4-1.x86_64
kubelet-0:1.6.0-0.x86_64
kubelet-0:1.6.0-1.x86_64
kubelet-0:1.6.1-0.x86_64
kubelet-0:1.6.1-1.x86_64
kubelet-0:1.6.2-0.x86_64
kubelet-0:1.6.2-1.x86_64
kubelet-0:1.6.3-0.x86_64
kubelet-0:1.6.3-1.x86_64
kubelet-0:1.6.4-0.x86_64
kubelet-0:1.6.4-1.x86_64
kubelet-0:1.6.5-0.x86_64
kubelet-0:1.6.5-1.x86_64
kubelet-0:1.6.6-0.x86_64
kubelet-0:1.6.6-1.x86_64
kubelet-0:1.6.7-0.x86_64
kubelet-0:1.6.7-1.x86_64
kubelet-0:1.6.8-0.x86_64
kubelet-0:1.6.8-1.x86_64
kubelet-0:1.6.9-0.x86_64
kubelet-0:1.6.9-1.x86_64
kubelet-0:1.6.10-0.x86_64
kubelet-0:1.6.10-1.x86_64
kubelet-0:1.6.11-0.x86_64
kubelet-0:1.6.11-1.x86_64
kubelet-0:1.6.12-0.x86_64
kubelet-0:1.6.12-1.x86_64
kubelet-0:1.6.13-0.x86_64
kubelet-0:1.6.13-1.x86_64
kubelet-0:1.7.0-0.x86_64
kubelet-0:1.7.0-1.x86_64
kubelet-0:1.7.1-0.x86_64
kubelet-0:1.7.1-1.x86_64
kubelet-0:1.7.2-0.x86_64
kubelet-0:1.7.2-1.x86_64
kubelet-0:1.7.3-1.x86_64
kubelet-0:1.7.3-2.x86_64
kubelet-0:1.7.4-0.x86_64
kubelet-0:1.7.4-1.x86_64
kubelet-0:1.7.5-0.x86_64
kubelet-0:1.7.5-1.x86_64
kubelet-0:1.7.6-1.x86_64
kubelet-0:1.7.6-2.x86_64
kubelet-0:1.7.7-1.x86_64
kubelet-0:1.7.7-2.x86_64
kubelet-0:1.7.8-1.x86_64
kubelet-0:1.7.8-2.x86_64
kubelet-0:1.7.9-0.x86_64
kubelet-0:1.7.9-1.x86_64
kubelet-0:1.7.10-0.x86_64
kubelet-0:1.7.10-1.x86_64
kubelet-0:1.7.11-0.x86_64
kubelet-0:1.7.11-1.x86_64
kubelet-0:1.8.0-0.x86_64
kubelet-0:1.8.0-1.x86_64
kubelet-0:1.8.1-0.x86_64
kubelet-0:1.8.1-1.x86_64
kubelet-0:1.8.2-0.x86_64
kubelet-0:1.8.2-1.x86_64
kubelet-0:1.8.3-0.x86_64
kubelet-0:1.8.3-1.x86_64
kubelet-0:1.8.4-0.x86_64
kubelet-0:1.8.4-1.x86_64
kubelet-0:1.8.5-0.x86_64
kubelet-0:1.8.5-1.x86_64
kubelet-0:1.8.6-0.x86_64
kubelet-0:1.9.0-0.x86_64
@gtirloni I can't say for sure. The basic issue I filed was that kubelet and friends are not run inside any type of CI against the built RPMs. The fact that the RPMs are present (i.e. history is not lost) is a bit orthogonal to the actual issue. However, some folks did pile on and indicate that the lack of historical RPMs was a real problem for the general Kubernetes-consuming community.
That said, every time I see an RPM with -0 in the release tag, I cry a little inside. Packages should never ship with a release field of 0 (x.y.z-0). This is a historic holdover from general problems with RPM. @kfox1111 could add more detail.
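(For contrast, the conventional Fedora-style scheme numbers pre-releases 0.N.tag and starts final releases at 1, which sorts correctly; a quick check assuming rpmdevtools is installed, with example versions:)
$ rpmdev-vercmp 1.6.0-0.1.alpha.1 1.6.0-1
# reports 1.6.0-1 as newer: the final release supersedes the alpha
$ rpmdev-vercmp 1.6.0-1 1.6.0-2
# reports 1.6.0-2 as newer: the first fix supersedes the final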
Cheers -steve
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
/lifecycle frozen
/help
/milestone next
/priority important-longterm
/area release-eng
@justaugustus: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
@mikedanese Hello. I see that you are assigned to this issue. I'm wondering whether there is any update on it: would it be possible to do something about it, or should we unassign/reassign the issue?
We're discussing tagging/release policies in #857. If this issue is still relevant, please feel free to reopen with updated status.
/close
@justaugustus: Closing this issue.
/reopen
@justaugustus as far as I am aware, no CI testing is done against generated RPMs. The proposal in https://github.com/kubernetes/release/issues/857 does not address the unmet requirement. As such, I am re-opening this issue.
@kfox1111 in particular has more interest in this topic than I do.
@sdake: Reopened this issue.
/unassign
Hello @sdake and @kfox1111: Almost a year has passed since the last comment. What is your current view on this issue? If it's still needed, do you have bandwidth to help make a contribution that would push it along? If you need information about how to do that, please reach out here and we can offer some guidance.
Sites that are big enough now mirror the repo and test on a test cluster before deploying to production.
Smaller sites may still point at the upstream repos directly in prod and run into this issue. It's still an issue for them, I think.
I've got enough clusters now that I'm in the former category. I don't currently have enough spare cycles to fix an issue I don't personally have. Sorry.
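(For reference, a minimal sketch of that mirror-and-test flow, assuming yum-utils and createrepo are installed; the repo id and paths are examples:)
# Snapshot the upstream repo locally:
$ reposync --repoid=kubernetes --download_path=/srv/mirror
$ createrepo /srv/mirror/kubernetes
# Point a test cluster at the local mirror, run smoke tests, and
# only then promote the same snapshot to the production repo server.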
This repository builds the kubeadm RPMs, but nothing is gated, as evidenced by the chaos caused during the Kubernetes 1.6 release. Continuously gating the built RPMs would validate their correctness.
In upstream kolla-kubernetes (http://github.com/openstack/kolla-kubernetes) we do gate the generated Kubernetes RPMs; perhaps there is something useful to be learned from our gating tools.
As kubeadm beta was deleted, I attempted to build from this repository what I thought were the correct RPMs. Since nothing was tagged in this repository, it is possible this was done incorrectly. Anything without "multi" in the name indicates that only kubeadm init was used; a "multi" gate job also uses kubeadm join (which blocks on a certificate failure).
We have a couple of work-in-progress patches which might also help correct some issues found in the existing packaging:
https://review.openstack.org/#/c/451556/
Kubernetes 1.6.0: https://review.openstack.org/#/c/451391/
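(To make the ask concrete, here is a minimal sketch of what a per-build gate job could run before publishing; the ./output directory and the exact package set are hypothetical:)
# Install the freshly built RPMs rather than the published ones:
$ yum install -y ./output/kubelet-*.rpm ./output/kubeadm-*.rpm ./output/kubectl-*.rpm ./output/kubernetes-cni-*.rpm
$ systemctl enable kubelet
$ systemctl start kubelet
# "init only" gate: bring up a single-node control plane...
$ kubeadm init
# ...and block publication unless the node actually registers:
$ kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes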