alta3 / kubernetes-the-alta3-way

The greatest k8s installer on the planet!

Upgrade to kubernetes v1.29 #103

Closed JoeSpizz closed 6 months ago

JoeSpizz commented 8 months ago

Kubernetes v1.29: Mandala

Supporting component releases

```
k8s_version: "1.29.2"        # https://kubernetes.io/releases/#release-v1-29
etcd_version: "3.5.12"       # https://github.com/etcd-io/etcd/releases
cni_version: "1.4.1"         # https://github.com/containernetworking/plugins/releases
containerd_version: "1.7.14" # https://github.com/containerd/containerd/releases
cri_tools_version: "1.28.0"  # https://github.com/kubernetes-sigs/cri-tools/releases
cfssl_version: "1.6.5"       # https://github.com/cloudflare/cfssl/releases
runc_version: "1.1.9"        # https://github.com/opencontainers/runc/releases
coredns_version: "1.11.1"    # https://github.com/coredns/coredns/releases
calico_version: "3.27.2"     # https://github.com/projectcalico/calico/releases
helm_version: "3.14.3"       # https://github.com/helm/helm/releases
gvisor_version: "latest"     # https://github.com/google/gvisor/releases
```
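With pins like these, it is worth confirming after the upgrade that what actually landed on the nodes matches the vars. A minimal sketch of such a check (the `normalize`/`check` helpers are illustrative, not part of this repo, and the version-extraction commands will differ per component):

```shell
#!/usr/bin/env bash
# Sketch: compare installed component versions against the pins above.
set -euo pipefail

pinned_etcd="3.5.12"
pinned_runc="1.1.9"

# Strip a leading "v" so "v3.5.12" and "3.5.12" compare equal.
normalize() { echo "${1#v}"; }

check() {  # check <name> <pinned> <installed>
  local name=$1 pinned=$2 installed
  installed=$(normalize "$3")
  if [ "$installed" = "$pinned" ]; then
    echo "OK    $name $installed"
  else
    echo "DRIFT $name installed=$installed pinned=$pinned"
  fi
}

# On a live node you would feed real output, e.g.:
#   check etcd "$pinned_etcd" "$(etcd --version | awk 'NR==1{print $3}')"
check etcd "$pinned_etcd" "v3.5.12"
check runc "$pinned_runc" "1.1.8"
```

Anything reported as drift is a node the playbooks did not fully converge.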
JoeSpizz commented 7 months ago

The upgrade must happen after April 17th and before May 3rd.

bryfry commented 6 months ago
JoeSpizz commented 6 months ago

Smoke tested:

```
tl resources-and-scheduling
tl setting-an-applications-resource-requirements
tl cluster-access-with-kubernetes-context
tl hostnames-fqdn-lab
tl understanding-labels-and-selectors
tl autoscaling-challenge
tl expose-service-via-ingress
tl revert-coredns-to-kubedns
tl multi-container-pod-design
tl storage-internal-oot-csi
tl strategies
tl create-and-consume-secrets
tl isolating-resources-with-kubernetes-namespaces
tl deploy-a-networkpolicy
tl writing-a-deployment-manifest
tl exposing-a-service
tl host_networking
tl create-and-configure-a-replicaset
tl listing-resources-with-kubectl-get
tl kubectl-top
tl rolling-updates-and-rollbacks
tl advanced-logging
tl examining-resources-with-kubectl-describe
tl patching
tl install-coredns-lab
tl admission-controller
tl livenessprobes-and-readinessprobes
tl autoscaling
tl taints-and-tolerations
tl create-and-configure-basic-pods
tl horizontal-scaling-with-kubectl-scale
tl init-containers
tl fluentd
tl persistent-configuration-with-configmaps
tl configure-coredns-lab
tl RBAC-authentication-authorization-lab
```
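Every lab runs through the same cycle: clean the cluster, prepare the lab, run its test. A driver loop for that cycle could be sketched as follows (`tl` is the per-lab entry point used above; `clean_cluster`, `prepare_lab`, and `run_lab` are assumed helper names, not the repo's actual functions):

```shell
#!/usr/bin/env bash
# Sketch of the clean -> prepare -> test cycle seen in the smoke-test log.
set -euo pipefail

# A few labs from the list above; the real run covers all of them.
labs=(resources-and-scheduling hostnames-fqdn-lab kubectl-top)

clean_cluster() { echo "[+] Cleaning cluster"; }
prepare_lab()   { echo "[+] Preparing $1"; echo "[+] Setup complete"; }
run_lab()       { echo "[+] Testing $1"; echo "[+] Test complete"; }

for lab in "${labs[@]}"; do
  clean_cluster
  prepare_lab "$lab"
  run_lab "$lab"     # in the real harness: tl "$lab"
done
```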

Test Log
```
[+] Cleaning cluster
[+] Preparing resources-and-scheduling
[+] Setup complete
[+] Testing resources-and-scheduling
+ kubectl apply -f ../yaml/dev-rq.yaml --namespace=dev
resourcequota/dev-resourcequota created
+ kubectl apply -f ../yaml/resourced_deploy.yml
deployment.apps/resourced-deploy created
+ kubectl wait --for condition=Available --timeout 60s --namespace=dev deployments.apps/resourced-deploy
deployment.apps/resourced-deploy condition met
+ kubectl get -f ../yaml/resourced_deploy.yml
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
resourced-deploy   4/5     4            4           7s
[+] Test complete
[+] Cleaning cluster
[+] Preparing setting-an-applications-resource-requirements
[+] Setup complete
[+] Testing setting-an-applications-resource-requirements
+ kubectl apply -f ../yaml/linux-pod-r.yaml
pod/linux-pod-r created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-r
pod/linux-pod-r condition met
+ kubectl apply -f ../yaml/linux-pod-rl.yaml
pod/linux-pod-rl created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-rl
pod/linux-pod-rl condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing cluster-access-with-kubernetes-context
[+] Setup complete
[+] Testing cluster-access-with-kubernetes-context
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
+ kubectl config set-context dev-context --namespace=dev
Context "dev-context" created.
+ kubectl config use-context dev-context
Switched to context "dev-context".
+ kubectl config set-context dev-context --namespace=dev --user=admin --cluster=kubernetes-the-alta3-way
Context "dev-context" modified.
+ kubectl config get-contexts
CURRENT   NAME                       CLUSTER                    AUTHINFO   NAMESPACE
*         dev-context                kubernetes-the-alta3-way   admin      dev
          kubernetes-the-alta3-way   kubernetes-the-alta3-way   admin
+ kubectl config set-context kubernetes-the-alta3-way --namespace=default
Context "kubernetes-the-alta3-way" modified.
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
[+] Test complete
[+] Cleaning cluster
[+] Preparing hostnames-fqdn-lab
[+] Setup complete
[+] Testing hostnames-fqdn-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing understanding-labels-and-selectors
[+] Setup complete
[+] Testing understanding-labels-and-selectors
+ kubectl apply -f ../yaml/nginx-pod.yaml
pod/nginx created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx
pod/nginx condition met
+ kubectl apply -f ../yaml/nginx-obj.yaml
deployment.apps/nginx-obj-create created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-obj-create
deployment.apps/nginx-obj-create condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling-challenge
[+] Setup complete
[+] Testing autoscaling-challenge
[+] Test complete
[+] Cleaning cluster
[+] Preparing expose-service-via-ingress
[+] Setup complete
[+] Testing expose-service-via-ingress
[+] Test complete
[+] Cleaning cluster
[+] Preparing revert-coredns-to-kubedns
[+] Setup complete
[+] Testing revert-coredns-to-kubedns
[+] Test complete
[+] Cleaning cluster
[+] Preparing multi-container-pod-design
[+] Setup complete
[+] Testing multi-container-pod-design
+ kubectl apply -f ../yaml/netgrabber.yaml
pod/netgrab created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/netgrabber.yaml
pod/netgrab condition met
+ kubectl exec netgrab -c busybox -- sh -c 'ping 8.8.8.8 -c 1'
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=117 time=5.991 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 5.991/5.991/5.991 ms
+ kubectl apply -f ../yaml/nginx-conf.yaml
configmap/nginx-conf created
+ kubectl apply -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing storage-internal-oot-csi
[+] Setup complete
[+] Testing storage-internal-oot-csi
[+] Test complete
[+] Cleaning cluster
[+] Preparing strategies
[+] Setup complete
[+] Testing strategies
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-consume-secrets
[+] Setup complete
[+] Testing create-and-consume-secrets
+ kubectl apply -f ../yaml/mysql-secret.yaml
secret/mysql-secret created
+ kubectl apply -f ../yaml/mysql-locked.yaml
pod/mysql-locked created
+ kubectl wait --for condition=Ready --timeout 30s pod/mysql-locked
pod/mysql-locked condition met
+ kubectl get pod mysql-locked
NAME           READY   STATUS    RESTARTS   AGE
mysql-locked   1/1     Running   0          12s
+ kubectl get secrets mysql-secret
NAME           TYPE                       DATA   AGE
mysql-secret   kubernetes.io/basic-auth   1      12s
[+] Test complete
[+] Cleaning cluster
[+] Preparing isolating-resources-with-kubernetes-namespaces
[+] Setup complete
[+] Testing isolating-resources-with-kubernetes-namespaces
+ kubectl apply -f ../yaml/test-ns.yaml
namespace/test created
+ kubectl apply -f ../yaml/dev-ns.yaml
namespace/dev created
+ kubectl apply -f ../yaml/prod-ns.yaml
namespace/prod created
+ kubectl apply -f ../yaml/test-rq.yaml --namespace=test
resourcequota/test-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota unchanged
+ kubectl get namespaces dev prod test
NAME   STATUS   AGE
dev    Active   0s
prod   Active   0s
test   Active   1s
[+] Test complete
[+] Cleaning cluster
[+] Preparing deploy-a-networkpolicy
[+] Setup complete
[+] Testing deploy-a-networkpolicy
[+] Test complete
[+] Cleaning cluster
[+] Preparing writing-a-deployment-manifest
[+] Setup complete
[+] Testing writing-a-deployment-manifest
[+] Test complete
[+] Cleaning cluster
[+] Preparing exposing-a-service
[+] Setup complete
[+] Testing exposing-a-service
[+] Test complete
[+] Cleaning cluster
[+] Preparing host_networking
[+] Setup complete
[+] Testing host_networking
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-configure-a-replicaset
[+] Setup complete
[+] Testing create-and-configure-a-replicaset
[+] Test complete
[+] Cleaning cluster
[+] Preparing listing-resources-with-kubectl-get
[+] Setup complete
[+] Testing listing-resources-with-kubectl-get
+ kubectl get services -A
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   172.16.3.1    <none>        443/TCP         11m
kube-system   kube-dns     ClusterIP   172.16.3.10   <none>        53/UDP,53/TCP   9m18s
+ kubectl get deployments.apps -A
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           9m34s
kube-system   kube-dns                  1/1     1            1           9m18s
+ kubectl get secrets
No resources found in default namespace.
[+] Test complete
[+] Cleaning cluster
[+] Preparing kubectl-top
[+] Setup complete
[+] Testing kubectl-top
+ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+ kubectl -n kube-system wait --for condition=Available --timeout 100s deployment.apps/metrics-server
deployment.apps/metrics-server condition met
+ kubectl -n kube-system wait --for condition=Available --timeout 30s apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io condition met
+ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-1   184m         9%     974Mi           25%
node-2   241m         12%    1032Mi          27%
+ kubectl top pods --all-namespaces
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-5549cfb998-bvl2c   4m           15Mi
kube-system   calico-node-qxb7t                          41m          115Mi
kube-system   metrics-server-6d94bc8694-qvgtn            4m           12Mi
[+] Test complete
[+] Cleaning cluster
[+] Preparing rolling-updates-and-rollbacks
[+] Setup complete
[+] Testing rolling-updates-and-rollbacks
[+] Test complete
[+] Cleaning cluster
[+] Preparing advanced-logging
[+] Setup complete
[+] Testing advanced-logging
+ kubectl apply -f ../yaml/counter-pod.yaml
pod/counter created
+ kubectl wait --for condition=Ready --timeout 30s pod/counter
pod/counter condition met
+ kubectl apply -f ../yaml/two-pack.yaml
deployment.apps/two-pack created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/two-pack
deployment.apps/two-pack condition met
+ kubectl apply -f ../yaml/nginx-rsyslog-pod.yaml
pod/nginx-rsyslog-pod created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx-rsyslog-pod
pod/nginx-rsyslog-pod condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
pod/webweb condition met
+ kubectl delete pod webweb --now
pod "webweb" deleted
+ kubectl apply -f ../yaml/webweb-deploy.yaml
deployment.apps/webweb created
+ kubectl wait --for condition=Available --timeout 60s deployment.apps/webweb
deployment.apps/webweb condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing patching
[+] Setup complete
[+] Testing patching
[+] Test complete
[+] Cleaning cluster
[+] Preparing install-coredns-lab
[+] Setup complete
[+] Testing install-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing admission-controller
[+] Setup complete
[+] Testing admission-controller
+ kubectl run no-lr --image=nginx:1.19.6
pod/no-lr created
+ kubectl wait --for condition=Ready --timeout 30s pod/no-lr
pod/no-lr condition met
+ kubectl apply -f ../yaml/lim-ran.yml
limitrange/mem-limit-range created
+ kubectl get limitrange
NAME              CREATED AT
mem-limit-range   2024-05-09T20:17:44Z
[+] Test complete
[+] Cleaning cluster
[+] Preparing livenessprobes-and-readinessprobes
[+] Setup complete
[+] Testing livenessprobes-and-readinessprobes
+ kubectl apply -f ../yaml/badpod.yaml
pod/badpod created
+ kubectl wait --for condition=Ready --timeout 30s pod/badpod
pod/badpod condition met
+ kubectl apply -f ../yaml/sise-lp.yaml
pod/sise-lp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-lp
pod/sise-lp condition met
+ kubectl apply -f ../yaml/sise-rp.yaml
pod/sise-rp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-rp
pod/sise-rp condition met
+ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
badpod    1/1     Running   0          44s
sise-lp   1/1     Running   0          30s
sise-rp   1/1     Running   0          13s
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling
[+] Setup complete
[+] Testing autoscaling
[+] Test complete
[+] Cleaning cluster
[+] Preparing taints-and-tolerations
[+] Setup complete
[+] Testing taints-and-tolerations
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl delete -f ../yaml/tnt01.yaml
deployment.apps "nginx" deleted
+ kubectl taint nodes node-1 trying_taints=yessir:NoSchedule
node/node-1 tainted
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl apply -f ../yaml/tnt02.yaml
deployment.apps/nginx-tolerated created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-tolerated
deployment.apps/nginx-tolerated condition met
+ kubectl get pods -o wide
NAME                              READY   STATUS        RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
badpod                            1/1     Terminating   0          57s   192.168.247.15   node-2   <none>           <none>
nginx-575b786c99-2vnw9            1/1     Running       0          6s    192.168.247.23   node-2   <none>           <none>
nginx-575b786c99-49kgt            1/1     Running       0          6s    192.168.247.21   node-2   <none>           <none>
nginx-575b786c99-5kjcs            1/1     Running       0          6s    192.168.247.24   node-2   <none>           <none>
nginx-575b786c99-6jxrc            1/1     Running       0          6s    192.168.247.29   node-2   <none>           <none>
nginx-575b786c99-7d7gg            1/1     Running       0          6s    192.168.247.27   node-2   <none>           <none>
nginx-575b786c99-f8nmd            1/1     Running       0          6s    192.168.247.26   node-2   <none>           <none>
nginx-575b786c99-g94mk            1/1     Running       0          6s    192.168.247.22   node-2   <none>           <none>
nginx-575b786c99-lxlc7            1/1     Running       0          6s    192.168.247.30   node-2   <none>           <none>
nginx-575b786c99-pmck7            1/1     Running       0          6s    192.168.247.28   node-2   <none>           <none>
nginx-575b786c99-qkdtt            1/1     Running       0          6s    192.168.247.25   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-2ccbz   1/1     Running       0          2s    192.168.84.153   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-6cvf7   1/1     Running       0          2s    192.168.247.34   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-88cpr   1/1     Running       0          2s    192.168.247.31   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-cnwmb   1/1     Running       0          2s    192.168.247.32   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-m6vkx   1/1     Running       0          2s    192.168.84.152   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-pp8j4   1/1     Running       0          2s    192.168.84.154   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-q2q42   1/1     Running       0          2s    192.168.84.155   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-q48xw   1/1     Running       0          2s    192.168.247.33   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-q8h25   1/1     Running       0          2s    192.168.84.157   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-zxbhs   1/1     Running       0          2s    192.168.84.156   node-1   <none>           <none>
sise-lp                           1/1     Terminating   0          43s   192.168.84.145   node-1   <none>           <none>
sise-rp                           1/1     Terminating   0          26s   192.168.84.146   node-1   <none>           <none>
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-configure-basic-pods
[+] Setup complete
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
deployment.apps/webservice condition met
deployment.apps/webservice scaled
deployment.apps/webservice condition met
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   1/1     1            1           3s
[+] Test complete
[+] Cleaning cluster
[+] Preparing init-containers
[+] Setup complete
[+] Testing init-containers
+ kubectl apply -f ../yaml/init-cont-pod.yaml
pod/myapp-pod created
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
pod/myapp-pod condition met
+ kubectl get pods
NAME            READY   STATUS        RESTARTS      AGE
badpod          1/1     Terminating   1 (24s ago)   85s
myapp-pod       1/1     Running       0             15s
simpleservice   1/1     Terminating   0             25s
[+] Test complete
[+] Cleaning cluster
[+] Preparing fluentd
[+] Setup complete
[+] Testing fluentd
+ kubectl apply -f ../yaml/fluentd-conf.yaml
configmap/fluentd-config created
+ kubectl apply -f ../yaml/fluentd-pod.yaml
pod/logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/logger
pod/logger condition met
+ kubectl apply -f -
++ hostname -f
+ BCHD_IP=bchd.43a38423-fd55-4616-9193-12f8fafe7fce
+ j2 ../yaml/http_fluentd_config.yaml
configmap/fluentd-config configured
+ kubectl apply -f ../yaml/http_fluentd.yaml
pod/http-logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/http-logger
pod/http-logger condition met
+ kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
http-logger   2/2     Running       0          5s
logger        2/2     Running       0          20s
myapp-pod     1/1     Terminating   0          37s
[+] Test complete
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
configmap/nginx-base-conf created
configmap/index-html-zork created
configmap/nineteen-eighty-four created
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing configure-coredns-lab
[+] Setup complete
[+] Testing configure-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing RBAC-authentication-authorization-lab
[+] Setup complete
[+] Testing RBAC-authentication-authorization-lab
+ kubectl apply -f ../yaml/t3-support.yaml
role.rbac.authorization.k8s.io/t3-support created
+ kubectl apply -f ../yaml/alice-csr.yaml
certificatesigningrequest.certificates.k8s.io/alice created
+ kubectl certificate approve alice
certificatesigningrequest.certificates.k8s.io/alice approved
+ kubectl apply -f ../yaml/t3-support-binding.yaml
rolebinding.rbac.authorization.k8s.io/t3-support created
[+] Test complete
```
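Nearly every lab above validates a manifest with the same apply-then-wait idiom: `kubectl apply -f <manifest>` followed by `kubectl wait --for condition=<Ready|Available>` against the same file. A minimal wrapper for that idiom might look like this (`apply_and_wait` is an illustrative name, not something the repo provides):

```shell
# Sketch of the apply-and-wait idiom repeated throughout the test log.
# apply_and_wait is a hypothetical helper, not part of this repo.
apply_and_wait() {  # apply_and_wait <manifest> <condition> <timeout>
  kubectl apply -f "$1"
  kubectl wait --for "condition=$2" --timeout "$3" -f "$1"
}

# Usage (mirrors the log above):
#   apply_and_wait ../yaml/nginx-pod.yaml Ready 30s
#   apply_and_wait ../yaml/resourced_deploy.yml Available 60s
```

Waiting on `-f <manifest>` rather than a named resource keeps the check in sync with whatever the manifest actually created.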
JoeSpizz commented 6 months ago

No errors!

JoeSpizz commented 6 months ago

COMPLETE