After the upgrade there appears to be something like a duplicate instance of Kubernetes running. A simple command to view the nodes shows the output below, followed by a refresh a few seconds later. The upgrade was done by running k0sctl apply --config k0sctl.yaml.
kubectl get deployment -o wide shows the deployments ready, and then a refresh of the same command shows 0/1 ready. The deployments are not actually changing state; there seems to be a duplicate deployment running.
All servers are bare metal running Ubuntu.
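For context, the k0sctl.yaml driving the upgrade would be along these lines. This is a sketch, not the exact file: the worker addresses are taken from the node list below, while the controller address, SSH user, key path, and k0s version are assumptions.

```yaml
# Sketch of a k0sctl.yaml for a cluster like this one.
# Controller address, SSH user/key, and exact k0s version are assumed.
apiVersion: k0sctl.k0s-project.io/v1beta1
kind: Cluster
metadata:
  name: snow
spec:
  k0s:
    version: v1.27.4+k0s.0
  hosts:
    - role: controller
      ssh:
        address: 192.168.10.100   # assumed controller address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 192.168.10.200   # snow-node-200 (from the node list)
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 192.168.10.201   # snow-node-201 (from the node list)
        user: root
        keyPath: ~/.ssh/id_rsa
```

With a file like this, k0sctl apply --config k0sctl.yaml upgrades the controllers first and then cordons and upgrades the workers, which would explain a worker briefly showing NotReady,SchedulingDisabled during the run.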
$ kubectl get node -o wide
NAME            STATUS     ROLES    AGE   VERSION       INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
snow-node-200   Ready      <none>   16d   v1.27.4+k0s   192.168.10.200   <none>        Ubuntu 22.04.2 LTS   5.15.0-78-generic   containerd://1.7.2
snow-node-201   NotReady   <none>   16d   v1.27.4+k0s   192.168.10.201   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.2

$ kubectl get node -o wide
NAME            STATUS                        ROLES    AGE   VERSION       INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
snow-node-200   NotReady,SchedulingDisabled   <none>   21h   v1.27.4+k0s   192.168.10.200   <none>        Ubuntu 22.04.2 LTS   5.15.0-78-generic   containerd://1.7.2

$ kubectl get node -o wide
NAME            STATUS     ROLES    AGE   VERSION       INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
snow-node-200   Ready      <none>   16d   v1.27.4+k0s   192.168.10.200   <none>        Ubuntu 22.04.2 LTS   5.15.0-78-generic   containerd://1.7.2
snow-node-201   NotReady   <none>   16d   v1.27.4+k0s   192.168.10.201   <none>        Ubuntu 22.04.2 LTS   5.15.0-76-generic   containerd://1.7.2