danports opened this issue 1 month ago
Some options discussed in office hours:

- A `kops update cluster --phase` concept to conditionally apply tasks for just the control plane vs. nodes, perhaps with an `--instance-group-role` flag to match terminology in `kops rolling-update cluster`.
- Combining `kops update cluster --yes` and `kops rolling-update cluster --yes` together, allowing the sequence of task applies to be handled internally. This could be a new flag in `kops rolling-update cluster` or `kops upgrade cluster`.

We'll likely start with the first option and see how the ergonomics of the second feel, given that it depends on the first.

In either case we'll add upgrade instructions to the release notes for this new behavior.
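For illustration, here is roughly how the first option's UX might look end to end. This is a sketch of the proposal, not shipped behavior: the `--instance-group-role` flag on `kops update cluster` and its role values are assumptions drawn from the discussion above, while `--instance-group-roles` on `kops rolling-update cluster` is an existing flag.

```sh
# Hypothetical phased upgrade using the proposed flags (not shipped behavior).

# 1. Apply cloud/config tasks for the control plane only (proposed flag).
kops update cluster --yes --instance-group-role=control-plane

# 2. Roll the control plane onto the new version (existing flag; the role is
#    spelled "master" on older kops releases).
kops rolling-update cluster --instance-group-roles=control-plane --yes

# 3. Only now push new userdata/launch templates for the worker IGs (proposed flag).
kops update cluster --yes --instance-group-role=node

# 4. Roll the remaining instance groups.
kops rolling-update cluster --yes
```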
/kind blocks-next
@rifelpet: The label(s) `kind/blocks-next` cannot be applied, because the repository doesn't have them.
Another option, and possibly a more correct one, would be to enforce the version skew policy: https://kubernetes.io/releases/version-skew-policy/#kubelet

As such, the userdata for instance groups shouldn't be updated until the control plane has already been rolled out to the newer version, thus ensuring that we never have nodes coming up with a kubelet version that is more recent than any control-plane node.
E.g., `kops update cluster` would:

In this situation:

Optionally, similar to the suggestion above, a flag for going through the whole procedure, like `--sync` or `--wait`, could:
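Whatever shape the kops automation takes, the invariant itself can already be checked by hand today before node userdata is rolled out. A minimal sketch, assuming `kubectl`, `jq`, and GNU `sort -V` are available:

```sh
#!/usr/bin/env bash
# Minimal sketch of checking the kubelet skew invariant by hand:
# no node's kubelet may be newer than the control plane.
set -euo pipefail

server=$(kubectl version -o json | jq -r '.serverVersion.gitVersion')   # e.g. v1.30.5

kubectl get nodes -o json | jq -r '.items[].status.nodeInfo.kubeletVersion' |
while read -r kubelet; do
  # sort -V puts the highest version last; if that is not the server version,
  # some kubelet is ahead of the control plane.
  highest=$(printf '%s\n%s\n' "$server" "$kubelet" | sort -V | tail -n1)
  if [[ "$highest" != "$server" ]]; then
    echo "ERROR: kubelet $kubelet is newer than control plane $server" >&2
    exit 1
  fi
done
echo "All kubelets are at or below the control plane version ($server)."
```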
/kind bug
1. What `kops` version are you running? The command `kops version` will display this information.

   1.31.0-alpha.1

2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

   Upgrading from 1.30.5 to 1.31.1.

3. What cloud provider are you using?

   AWS
4. What commands did you run? What is the simplest way to reproduce this issue?

   Update the cluster `kubernetesVersion` and then run:

   ```sh
   kops update cluster
   kops rolling-update cluster
   ```
5. What happened after the commands executed?

   The rolling update got stuck in a validation loop and eventually timed out, because pods on the new worker nodes created by Karpenter after `kops update cluster` failed to start, as described in https://github.com/kubernetes/kubernetes/issues/127316.

6. What did you expect to happen?

   Would have been great if the rolling update completed without errors.
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

   The only relevant part here is having Karpenter enabled and then upgrading the Kubernetes version to 1.31.1.

8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.

   The rolling-update validation loop outputs things like this over and over:

   Upon describing one of those pods:
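The verbose output and pod description were omitted above. For anyone reproducing this, the diagnostics can be captured along these lines. This is a sketch: `<stuck-pod>` and `<namespace>` are placeholders, and the `karpenter.sh/nodepool` label key is an assumption (older Karpenter releases label nodes with `karpenter.sh/provisioner-name` instead).

```sh
# Capture the validation loop with maximum verbosity.
kops rolling-update cluster --yes -v 10 2>&1 | tee rolling-update.log

# While the update is stuck, the apiserver still reports the old version...
kubectl version

# ...but nodes Karpenter launched from the refreshed launch template already
# run the new kubelet (label key is an assumption; adjust for your setup).
kubectl get nodes -l karpenter.sh/nodepool \
  -o custom-columns='NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion'

# Inspect one of the pods that fails to start on such a node.
kubectl describe pod <stuck-pod> -n <namespace>
```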
9. Anything else we need to know?

   It should be possible to work around this issue by pausing autoscaling before `kops update cluster` until after `kops rolling-update cluster` has replaced all of the control plane nodes, or with judicious use of `kops rolling-update cluster --cloudonly`. A sketch of the pause-based approach follows below.
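A concrete sequence for the pause-based workaround might look like the following. This is a sketch, not a tested procedure: the Karpenter deployment name and namespace are assumptions (adjust for where Karpenter runs in your cluster), and older kops releases spell the role `master` rather than `control-plane`.

```sh
# 1. Pause Karpenter so it cannot launch nodes from the new launch template
#    before the control plane has been upgraded (name/namespace assumed).
kubectl -n kube-system scale deployment karpenter --replicas=0

# 2. Apply the new cluster configuration and roll only the control plane.
kops update cluster --yes
kops rolling-update cluster --instance-group-roles=control-plane --yes

# 3. Resume Karpenter once every control-plane node runs the new version,
#    then roll the remaining instance groups.
kubectl -n kube-system scale deployment karpenter --replicas=1
kops rolling-update cluster --yes
```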