kubernetes / release

Release infrastructure for Kubernetes and related components
Apache License 2.0

Discuss k/release repo tagging and branching policies #857

Closed: tpepper closed this issue 3 years ago

tpepper commented 5 years ago

It is unclear what the tags mean and when they're applied, e.g.:

$ git tag
v0.1.0
v0.1.1
v0.1.2
v0.1.3

This lifecycle management should be documented as part of our process. Since they're v0 tags, I suppose it's acceptable at this point that there are no acceptance test criteria or compatibility expectations, but it would be beneficial to aspire to fewer unexpected build-breaking changes in the repo.

justaugustus commented 5 years ago

Thanks for capturing this, Tim! So far, my policy has been, "We're about to make a change that could do weird things, let's tag so we have a sane place to go if things go wrong", but that's not really much of a policy at all.

Let's brainstorm.

/assign @justaugustus @calebamiles @tpepper
/priority important-soon
/kind documentation
/milestone v1.16

tpepper commented 5 years ago

In my humble opinion, the release code will be most maintainable, and the end-user experience (for those consuming our generated artifacts) most consistent, if the release code is branched in conjunction with k/k.

justaugustus commented 5 years ago

I was thinking the same w.r.t. staying in lockstep with k/k. .0 only, or all patches?

When would we cut a branch here: before or after a Kubernetes release? Maybe integrate this into our current tooling so that we automatically cut a release in k/r when we release k/k?

neolit123 commented 5 years ago

i think what should be done first is to manually create all the branches in the current support skew, based on the current master HEAD, and adapt the test-infra jobs accordingly. IMO it's a good start for managing changes that target different releases.
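As a rough sketch of what that manual step could look like, assuming a 1.14 to 1.16 support skew at the time (the branch names here are illustrative, not prescribed by this issue):

# Illustrative only: create release branches for the assumed support skew,
# all starting from the current master HEAD of a kubernetes/release clone.
git fetch origin
for minor in 1.14 1.15 1.16; do
  git branch "release-${minor}" origin/master
  git push origin "release-${minor}"
done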

which repo creates the branch first is a good question.

one problem with matching the k/k and k/release branch creation time is that k/release will also need FF (fast-forwards), but i think this is the better option. possibly tooling can one day do that automatically for both repos.
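A hedged illustration of that FF step, assuming the in-development branch is release-1.17 and that k/release would follow the same branch fast-forward model as k/k:

# Sketch only: fast-forward the in-development release branch to master HEAD.
# --ff-only refuses to create a merge commit, so the branch must be strictly
# behind master for this to succeed.
git fetch origin
git checkout release-1.17
git merge --ff-only origin/master
git push origin release-1.17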

the alternative of creating a branch in k/release only after Kubernetes releases - e.g. creating the 1.17 branch only after 1.16 releases - means that changes targeting the current k/release master will affect both 1.17 and 1.16 even after code thaw, which i think is not a good option.

tags in k/release branches, on the other hand, are more difficult to solve. while k/k can see a lot of tags and changes, k/release can be more stale.

i think that k/release should not have tags as a start, because a k/release change should not break a previous patch release. this means that if a change in k/release is made for 1.14.4, it should not affect e.g. the artifacts for 1.14.3 - those artifacts are already out in GCS buckets - but the branches still leave the option to modify the future of the 1.14.x stream.
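For illustration, a minimal sketch of that point, assuming a release-1.14 branch exists and with <sha-of-fix> as a placeholder for a commit on master:

# Land a k/release fix intended for the 1.14.x stream on the 1.14 branch only.
# Artifacts already published for 1.14.3 in GCS are untouched; the change is
# only picked up by future builds from this branch (1.14.4 and later).
git checkout release-1.14
git cherry-pick <sha-of-fix>
git push origin release-1.14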

2c

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

hoegaarden commented 4 years ago

TL;DR: My personal opinion is not to branch k/release, if we can get away with it -- which I think we can.

I think our tools should not really be dependent on the version of k/k. And as far as I can remember from the parts I have seen, this should be relatively easy without ending up with tons of nested ifs and switch/cases.

One of the places where this currently seems not to be true is the package specs. For this case I am of the opinion that those should definitely live in k/k anyway, and I imagine the following workflow for building packages:

If we wanted, we could also document which version of k/release we used to cut a version of k/k by (automatically) adding a tag like kubernetes-v1.17.0-alpha.3 to the revision of k/release we used. We could also document / log that somewhere else.
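A sketch of that idea, tagging the exact k/release revision used for the cut; the tag name format comes from the comment above, while the tag message is made up here:

# Hypothetical: mark the k/release revision used to cut a given k/k version.
git tag -a kubernetes-v1.17.0-alpha.3 -m "k/release revision used to cut Kubernetes v1.17.0-alpha.3"
git push origin kubernetes-v1.17.0-alpha.3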

I also think it is OK for k/release to only support the same versions that k/k supports at any point in time. E.g. I do not expect k/release to do the right thing and work correctly today when I run its tools for / against k/k v1.3.0. So if we need some feature flags or branches in k/release, we could currently remove them after ~9 months.

If we want to introduce automatic branching, I guess we'd want something similar to what we do right now for k/k: when we cut the first beta, anago creates a new branch for k/k, branched off of master. We / anago / krel / GCB could do the same for other repos. However, I guess we can only do so "blindly": anago would not have any idea which revision of k-sigs/my-external-thing is compatible with which version of k/k. I think this branch would only be a signal to the k-sigs/my-external-thing maintainers that something happened upstream which they might want to care about. And that might be fine and useful for people, e.g. for the k8s.io/perf-test use case?

For everything else (e.g. tagging other alphas, non-first betas, rcs, officials, ...), neither k/k nor k/release would have much of an idea which branch/revision of an external repo is compatible with k/k and what should be tagged. This is IMHO the responsibility of the CI system / the release pipeline of the external repo.
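A rough sketch of such a "blind" branch cut for an external repo, reusing the hypothetical k-sigs/my-external-thing example from above; the helper name and branch naming are assumptions, not existing anago/krel functionality:

# Hypothetical helper: when the first k/k beta is cut, push a release branch
# to an external repo as a signal to its maintainers. We cannot know which
# revision is actually compatible, so we branch "blindly" from its default branch.
cut_external_branch() {
  local repo="$1"      # e.g. kubernetes-sigs/my-external-thing
  local branch="$2"    # e.g. release-1.17
  git clone --depth 1 "https://github.com/${repo}.git" "/tmp/${repo##*/}"
  git -C "/tmp/${repo##*/}" push origin "HEAD:refs/heads/${branch}"
}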

The current v0.1.* tags were IIRC just a quick way to say: OK, we are planning bigger refactors, and because we don't have tests and cannot guarantee much, we mark the point where we know the thing worked. E.g. the tag v0.2.0 has the message "Release tooling snapshot for Kubernetes v1.17.0". This is not that easy to discover, but better than nothing. If we tag, we should try to continue to capture intent in the tag's message (or make the tag itself self-explanatory, by using something like mentioned above, e.g. kubernetes-v1.17.0-alpha.3).
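On discoverability: the intent that is already captured can be listed straight from the tags; the sample output line below reflects the v0.2.0 message quoted above.

# List tags together with the first line of their annotation message.
git tag --list -n1
# ...
# v0.2.0  Release tooling snapshot for Kubernetes v1.17.0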

⓵ I know, not everything will or can be compiled on the spot. There is still stuff that we might need or want to download from some bucket or GitHub or what have you. However, the information about which version to download or where to get it should live in k/k and be bumped there, not in k/release.

justaugustus commented 4 years ago

Thanks for the detailed write-up, @hoegaarden! :heart:

@kubernetes/release-engineering -- Soliciting feedback here.

ref: https://groups.google.com/a/kubernetes.io/d/topic/release-managers/b55uFmJOUME/discussion
cc: @mm4tt @nikhita @sttts @dims @liggitt

saschagrunert commented 4 years ago

I’m wondering if it would be good to tag the k/release repo simultaneously with k/k to have a direct link between them.

For example, this way I could easily identify that my fix in v1.17.0 broke the release notes in v1.18.0. :)

tpepper commented 4 years ago

I agree with Hannes' ideal of k/release not being dependent on k/k. But there are sooo many implicit connections and assumptions that we continue to discover and puzzle through. It would be great if we were able to fully and correctly manage any variances, and to do it with minimal conditional code in k/release, using feature flags and a deprecation cycle that tracks k/k's. I see that as aspirational, though, and fear what the reality might become.

In the short term, I think it would be easier to sufficiently manage the unknown by peeling a release-X.YY branch off of k/release master in conjunction with k/k's branching. In that case, deprecation is simply that an old branch in its entirety goes out of use when the corresponding branch of k/k goes out of support. Development would happen on k/release master. Bug fixes might occasionally need to be cherry-picked from k/release master to k/release release-X.YY. We could tag each branch with a k/k version string ahead of checking out that tag's k/release content for use in building that k/k version.
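A hedged sketch of that flow; the branch and tag names are illustrative and <sha-of-bugfix> is a placeholder for a commit on master:

# Peel a release branch off k/release master alongside k/k's branching.
git checkout -b release-1.17 origin/master
git push origin release-1.17

# Occasionally cherry-pick a fix from master onto the release branch.
git cherry-pick <sha-of-bugfix>

# Tag the branch with a k/k version string before that content is used to
# build the corresponding k/k release (the tag format here is hypothetical).
git tag kubernetes-v1.17.2
git push origin release-1.17 kubernetes-v1.17.2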

We could also defer that branch creation to the point where we encounter an incompatible change that cannot be handled by adding a feature flag to conditionally run the change only for newer k/k builds. But in that case, with development happening on k/release master and insufficient ability to test/validate, we need some way of pinning the tooling used for a particular k/k build. Tagging stable points on k/release master seems like a way to do that. But the tools then need to consume k/release at a configurable tag instead of pulling and running master HEAD.
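As a sketch of "consume k/release at a configurable tag", assuming a hypothetical K_RELEASE_REF knob that defaults to master to preserve today's behaviour:

# Hypothetical: tooling clones k/release at a configurable ref (tag or branch)
# instead of always running master HEAD; --branch also accepts tag names.
K_RELEASE_REF="${K_RELEASE_REF:-master}"
git clone --branch "${K_RELEASE_REF}" --depth 1 https://github.com/kubernetes/release.git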

justaugustus commented 4 years ago

SIG Docs is interested in a general solution for branching as well!
cc: @zacharysarah
xref: https://github.com/kubernetes/test-infra/issues/15779

justaugustus commented 4 years ago

Branching/tagging also raised on the kubeadm out-of-tree KEP: https://github.com/kubernetes/enhancements/pull/1425

mm4tt commented 4 years ago

Summarizing our ask from the email discussion.

SIG Scalability would also be interested in automatic branch cutting. Our use case is quite simple: we'd like to have a k/perf-tests branch (based off k/perf-tests master HEAD) cut for every minor release branch in the k/k repo. Currently we need to do it manually, and only @wojtek-t has permissions to do so. Having some kind of automation around it would be really helpful.

justaugustus commented 4 years ago

:wave: @JamesLaverack @evillgenius75 (w.r.t. the relnotes tool versioning)

justaugustus commented 4 years ago

ref on merge-blocking issues:

> Sounds good to me. The other way around, we could stick to certain releases in the next cycle, so we would not have a need for a merge freeze. WDYT?

@saschagrunert -- That's what I had in mind as well, but it's a process change in a few places, so we need to think it through a little bit. E.g., Release Managers would need to check out k/release@<tag>, and images would need to be versioned against the tag, which means they'd have to be rebuilt at the tag cut and then promoted.
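A rough sketch of that process change, with a made-up image name and build command; the actual k/release build targets and staging registries may differ, and a Dockerfile at the repo root is assumed:

# Hypothetical flow: check out k/release at the tag, build images versioned
# against that tag, then promote them from staging afterwards.
TAG=v0.2.0
git clone --branch "${TAG}" --depth 1 https://github.com/kubernetes/release.git
cd release
docker build -t "gcr.io/k8s-staging-example/release-tools:${TAG}" .
# ...promotion from the staging registry to production happens after the build.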

Let's discuss some of this here --> #857

saschagrunert commented 4 years ago

I'm totally in favor of using tagged releases for k/release. We still have one drawback: our tooling changes rapidly, and we would not have a chance to fix issues as fast as we can now. :thinking:

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

jimangel commented 4 years ago

/remove-lifecycle rotten

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

saschagrunert commented 3 years ago

Right now we're cutting releases whenever we think we've got a fair amount of features in. I think we should not change that until we decide to cut a v1.0.0. Let's close this for now and decide how to move forward at a later point.

/close

k8s-ci-robot commented 3 years ago

@saschagrunert: Closing this issue.

In response to [this](https://github.com/kubernetes/release/issues/857#issuecomment-743103722):

> Right now we're cutting releases whenever we think we've got a fair amount of features in. I think we should not change that until we decide to cut a v1.0.0. Let's close this for now and decide how to move forward at a later point.
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.