tpepper opened this issue 5 years ago
As a kubeadm maintainer, this would make my job of validating releases much easier.
@liztio you can use the bazel tooling to generate debs and rpms locally today. What are the gaps between the spec files there and the ones here?
cc: @timothysc
I can, but it's time-consuming and slow. I have to use a separate machine because Bazel doesn't support cross-compilation from macOS yet. Plus, we've had discrepancies between the bazel- and shell-built release debs before.
The specs in kubernetes/release/debian and kubernetes/kubernetes/build/debs have drifted a bit, and someone should probably unify them again. I'm not sure exactly how they differ.
Also, the "official" debs (using kubernetes/release/debian) use the outputs from the non-bazel builds, which may be different. (One notable issue is that the bazel-built debs only support linux/amd64.)
~~I just built packages for kubeadm and kubelet on the v1.12.0-rc.1 branch, but the deb files and binaries came out labelled as v1.13.0-alpha.0.1342+cdadc117e1ea8d. Maybe there's some way to override this, but it's definitely not as simple as `bazel build //build/debs:kubeadm`.~~

This was my mistake, ignore me!
@liztio did you build at HEAD rather than at the v1.12.0-rc.1 tag? Bazel (and make) basically use `git describe --tags` to produce the version string, so it'll only use v1.12.0-rc.1 if you have checked out that tag explicitly.
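The `git describe --tags` behavior can be demonstrated in a throwaway repo (the repo, commits, and tag below are purely illustrative):

```shell
# Demo: how `git describe --tags` derives the version string.
# At the tag itself you get the bare tag; one commit past it you get
# "<tag>-<commits-since>-g<abbreviated-sha>".
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "release commit"
git tag v1.12.0-rc.1
git describe --tags            # exactly at the tag: prints "v1.12.0-rc.1"
git commit -q --allow-empty -m "post-release work"
git describe --tags            # past the tag: prints "v1.12.0-rc.1-1-g<sha>"
```

This is why a build at HEAD comes out labelled with the next development version rather than the release tag.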
@ixdy I get the same results when building from that tag, but I use make, not bazel.
Also, I consider it onerous to force everyone to rebuild from the tag. Ideally we'd like to get real beta testers in the wild.
@ixdy ah my bad. I didn't re-checkout when I moved from my local machine to a remote linux box. Ignore me!
@timothysc to be clear, I do agree with you. we really do need to integrate rpm/deb building with the release process.
Also -beta and -rc debs/rpms ought to go into a different repo than the official release ones.
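A minimal sketch of that routing rule, as a shell function (the repo channel names here are hypothetical, not the real package repo names):

```shell
# Hypothetical routing: prerelease debs/rpms land in a separate repo
# channel from official releases. Channel names are illustrative only.
repo_for_version() {
  case "$1" in
    *-alpha*|*-beta*|*-rc*) echo "kubernetes-prerelease" ;;
    *)                      echo "kubernetes-stable" ;;
  esac
}

repo_for_version v1.12.0-rc.1   # -> kubernetes-prerelease
repo_for_version v1.12.0        # -> kubernetes-stable
```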
Note from @calebamiles:

`./jenkins.sh --kube-version 1.12.0 --distros xenial` (debs)
`./docker-build.sh` (rpms)
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
/remove-lifecycle stale
@tpepper still valid?
/remove-lifecycle stale
/help
/milestone v1.15
/priority important-soon
/area release-eng
This is in my queue.
/assign @tpepper
/remove-help
/remove-lifecycle stale
/milestone v1.17
@listx: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact the team maintainers and have them propose you as an additional delegate for this responsibility.
Stale issues rot after 30d of inactivity. Mark the issue as fresh with `/remove-lifecycle rotten`. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close
@fejta-bot: Closing this issue.
/remove-lifecycle rotten
/lifecycle frozen
/milestone v1.22
/assign
/unassign @tpepper
/reopen
Part of https://github.com/kubernetes/sig-release/issues/1372.
@justaugustus: Reopened this issue.
Today we have a release step of pinging @mbohlool or @jpbetz on Slack to indicate that RPMs/debs need to be built for a particular 1.X.y-{beta|rc}. This needs to be automated. Similarly, once packages are built, folks have discovered build issues only by hand, since the artifacts get no pre-publication automated validation. All of this needs automation to tighten up and improve the build process. Ideally we'd turn the full release crank for all alpha/beta/rc and final releases in order to always know the process is healthy.
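One piece of that pre-publication validation could be as simple as running the packaged binary and comparing its reported version against the tag being cut. A sketch, assuming the binary has been extracted from the candidate package (`kubeadm version -o short` is a real kubeadm flag; the surrounding harness is hypothetical):

```shell
# Hypothetical pre-publication gate: fail the release pipeline if the
# packaged binary reports a version other than the tag being released.
check_version() {
  binary="$1"
  expected="$2"
  actual="$("$binary" version -o short)" || return 1
  if [ "$actual" != "$expected" ]; then
    echo "FAIL: $binary reports $actual, expected $expected" >&2
    return 1
  fi
  echo "OK: $binary reports $expected"
}

# Example usage (against the binary unpacked from the candidate .deb):
#   check_version ./kubeadm v1.12.0-rc.1
```

Running this automatically for every alpha/beta/rc build would have caught the mislabelled v1.13.0-alpha.0 packages discussed above before publication.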