This project contains Terraform scripts to provision cloud infrastructure resources on vSphere, and Ansible playbooks to apply the elements of a Kubernetes cluster that are required to deploy SAS Viya platform product offerings.
Changes

- Updated the default `kubernetes_version`/`cluster_version` in the example files and doc to 1.28.7
- Updated the default `kubectl` version in the Dockerfile to 1.28.7 (currently the latest)
Notes:
There was an issue discovered with kube-vip and K8s 1.29+. In short, kube-vip requires `super-admin.conf` permissions with Kubernetes 1.29, and without it we run into issues setting up a new cluster with `kubeadm init`.

`super-admin.conf` was introduced in Kubernetes 1.29, and the user within that file is bound to the `system:masters` RBAC group. In previous Kubernetes versions the `admin.conf` user was bound to this RBAC group, but in 1.29 this user is bound to a new group called `kubeadm:cluster-admins` that has `cluster-admin` ClusterRole access.
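You can see which RBAC group a kubeconfig credential is bound to by inspecting its client certificate, since kubeadm encodes group membership in the certificate subject's O (organization) field. A minimal sketch, where the generated throwaway certificate stands in for the one embedded in `admin.conf` (on a real 1.29 control-plane node you would base64-decode the `client-certificate-data` from `/etc/kubernetes/admin.conf` instead):

```shell
# Generate a throwaway cert with the 1.29-style admin.conf subject as a
# stand-in for the real embedded client certificate (demo assumption).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/O=kubeadm:cluster-admins/CN=kubernetes-admin" \
  -out admin.crt 2>/dev/null

# Print the subject: the O= entry is the RBAC group the user belongs to.
openssl x509 -in admin.crt -noout -subject
```

On 1.29+ the admin.conf subject shows `O=kubeadm:cluster-admins`, whereas super-admin.conf (and pre-1.29 admin.conf) shows `O=system:masters`.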
If you take a look at the 1.29 Urgent Upgrade Notes from the kubernetes repo, this change is described in more detail:

> kubeadm: a separate "super-admin.conf" file is now deployed. The User in admin.conf is now bound to a new RBAC Group kubeadm:cluster-admins that has cluster-admin ClusterRole access. The User in super-admin.conf is now bound to the system:masters built-in super-powers / break-glass Group that can bypass RBAC. Before this change, the default admin.conf was bound to system:masters Group, which was undesired. Executing kubeadm init phase kubeconfig all or just kubeadm init will now generate the new super-admin.conf file. The cluster admin can then decide to keep the file present on a node host or move it to a safe location. kubeadm certs renew will renew the certificate in super-admin.conf to one year if the file exists; if it does not exist a "MISSING" note will be printed. kubeadm upgrade apply for this release will migrate this particular node to the two file setup. Subsequent kubeadm releases will continue to optionally renew the certificate in super-admin.conf if the file exists on disk and if renew on upgrade is not disabled. kubeadm join --control-plane will now generate only an admin.conf file that has the less privileged User.
At this point in time, kube-vip (even the latest versions) requires `super-admin.conf` with Kubernetes 1.29 during the initial `kubeadm init` phase and will fail without it, as described in this GitHub issue: https://github.com/kube-vip/kube-vip/issues/684. Our PR uses a workaround recommended in that issue: we temporarily replace the mounted kubeconfig file in the `kube-vip.yaml` manifest with `super-admin.conf` before running `kubeadm init`, then immediately replace it with `admin.conf` after the command completes.
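As a rough illustration, the swap amounts to two `sed` edits around `kubeadm init`. The snippet below demonstrates it on a sample manifest fragment; the real target file is `/etc/kubernetes/manifests/kube-vip.yaml` on the control-plane node, and the exact hostPath layout shown is an assumption:

```shell
# Demo of the issue-684 workaround on a sample manifest fragment; on a real
# control plane the target would be /etc/kubernetes/manifests/kube-vip.yaml.
MANIFEST=./kube-vip.yaml
cat > "$MANIFEST" <<'EOF'
    volumes:
    - hostPath:
        path: /etc/kubernetes/admin.conf
      name: kubeconfig
EOF

# 1. Before kubeadm init: point the kube-vip mount at super-admin.conf.
sed -i 's#/etc/kubernetes/admin.conf#/etc/kubernetes/super-admin.conf#' "$MANIFEST"
grep 'path:' "$MANIFEST"

# 2. (Run `kubeadm init` here.)

# 3. Immediately afterwards: restore the less-privileged admin.conf.
sed -i 's#/etc/kubernetes/super-admin.conf#/etc/kubernetes/admin.conf#' "$MANIFEST"
grep 'path:' "$MANIFEST"
```

The same edits could equally be expressed as Ansible `replace` tasks before and after the `kubeadm init` step; the shell form above is just the shortest way to show the mechanics.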
We will have to keep using the workaround for 1.29+ until a version of kube-vip is released that resolves this issue. Once that fix is available, we can remove the workaround and point users to a kube-vip version that includes it for K8s 1.29+ installs.
Tests