chrislovecnm opened this issue 3 years ago
@udnay @johnrk let's use this issue to determine our support matrix and branching policy.
Documentation to help with this:
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-release/cherry-picks.md
Another component figuring out a support matrix https://github.com/kubernetes-sigs/descheduler/issues/273
This is way more than needed, but we can probably use snippets of this https://github.com/kubernetes/sig-release/tree/master/release-engineering/role-handbooks
Kops versioning
Here is how k8s does tagging https://github.com/kubernetes/sig-release/blob/master/release-engineering/versioning.md
Let's punt on this until our hand is forced. We currently use the client libraries for Kubernetes 1.20, which officially allows backwards compatibility with 1.18 and 1.19. Once OpenShift upgrades and we need to move to 1.21 or 1.22, let's discuss this again. The main issue is the added cost and complexity of adjusting our CI/CD pipelines before we need to, plus the extra burden of backports.
Thoughts @keith-mcclellan @piyush-singh ?
We will need to update to controller-runtime 0.9.x on a new branch and move the current master onto a new branch as well. We will need to maintain Kubernetes 1.20 and 1.21 versions on separate branches, with master at Kubernetes 1.21. Kubernetes 1.19 will only run on the 1.20 branch.
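The branch-to-version plan above could be captured as a small, explicit support matrix. Here is a sketch in Go; the branch names and exact version pairings are illustrative assumptions, not a settled policy:

```go
package main

import "fmt"

// supportMatrix maps a hypothetical operator release branch to the Kubernetes
// minor versions it is expected to run against, per the plan in this thread:
// 1.19 runs only on the 1.20 branch, and master tracks Kubernetes 1.21.
// Branch names are illustrative assumptions.
var supportMatrix = map[string][]string{
	"release-1.20": {"1.19", "1.20"}, // current master, branched off
	"release-1.21": {"1.20", "1.21"}, // new branch on controller-runtime 0.9.x
	"master":       {"1.20", "1.21"}, // tracks Kubernetes 1.21
}

// supportedOn reports whether the given operator branch is expected to work
// on a cluster at the given Kubernetes minor version.
func supportedOn(branch, k8sMinor string) bool {
	for _, v := range supportMatrix[branch] {
		if v == k8sMinor {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(supportedOn("release-1.20", "1.19")) // true: 1.19 lives on the 1.20 branch
	fmt.Println(supportedOn("release-1.21", "1.19")) // false: 1.19 only on the 1.20 branch
}
```

Keeping the matrix as data like this would also make it easy to generate the documented support table from one source of truth.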
@pseudomuto what did I miss?
This is the question.
Issue
The operator uses the Kubernetes API, so we have to follow the versioning practices that Kubernetes maintains.
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
So we cannot run an operator built against the Kubernetes 1.22 API on a 1.20 cluster.
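The constraint amounts to a version-skew check: the cluster cannot be newer than the client libraries the operator is built against, and can only be so many minors older. A minimal sketch in Go, where the `maxSkew` window is an assumption to be replaced by whatever policy we document:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorOf extracts the minor number from a "major.minor" Kubernetes version
// string, e.g. "1.21" -> 21.
func minorOf(version string) (int, error) {
	parts := strings.Split(version, ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", version)
	}
	return strconv.Atoi(parts[1])
}

// compatible reports whether an operator built against clientVersion of the
// Kubernetes client libraries is expected to run on clusterVersion, assuming
// the cluster may be at most maxSkew minors older than the libraries and
// never newer. The exact window is an assumption, not settled policy.
func compatible(clientVersion, clusterVersion string, maxSkew int) (bool, error) {
	client, err := minorOf(clientVersion)
	if err != nil {
		return false, err
	}
	cluster, err := minorOf(clusterVersion)
	if err != nil {
		return false, err
	}
	skew := client - cluster
	return skew >= 0 && skew <= maxSkew, nil
}

func main() {
	ok, _ := compatible("1.20", "1.18", 2) // 1.20 libraries on a 1.18 cluster
	fmt.Println(ok)                        // true
	ok, _ = compatible("1.20", "1.21", 2) // cluster newer than the libraries
	fmt.Println(ok)                        // false
}
```

A check like this could run at operator startup (the cluster version is available from the discovery client) and fail fast with a clear error instead of hitting API mismatches later.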
Open questions
So what do we need to support? How many versions of Kubernetes does the operator need to support? At this point we are upgrading master in lockstep with the API version.
Solutions
There are two ways to handle this in my mind; I would love more ideas.
We do need a documented support matrix.
I also do not know how supporting multiple different released versions works with OpenShift.