alta3 / kubernetes-the-alta3-way

The greatest k8s installer on the planet!

Upgrade to kubernetes v1.25 #26

Closed bryfry closed 1 year ago

bryfry commented 2 years ago

Kubernetes v1.25: Combiner

supporting component releases

bryfry commented 1 year ago
student@bchd:~/git/kubernetes-the-alta3-way$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    <none>   56s   v1.25.5
node-2   Ready    <none>   61s   v1.25.5
bryfry commented 1 year ago

multi-container-pod-design

Fixed, needed to merge in changes from main 🤦

Not really an error:

```
[+] Testing multi-container-pod-design
+ kubectl apply -f ../yaml/netgrabber.yaml
pod/netgrab created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/netgrabber.yaml
pod/netgrab condition met
+ kubectl exec netgrab -c busybox -- sh -c 'ping 8.8.8.8 -c 1'
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=118 time=7.309 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 7.309/7.309/7.309 ms
+ kubectl apply -f ../yaml/nginx-conf.yaml
configmap/nginx-conf created
+ kubectl apply -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-nginx-combo.yaml
error: timed out waiting for the condition on pods/webby-nginx-combo
```
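For timeouts like the one above, a quick way to see why the pod never went Ready (a sketch; pod name taken from the log, run against the lab cluster):

```shell
# Events section at the bottom shows image pull errors, failed mounts, probe failures, etc.
kubectl describe pod webby-nginx-combo | tail -n 20

# Logs from every container in the pod, prefixed with the container name
kubectl logs webby-nginx-combo --all-containers --prefix
```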
bryfry commented 1 year ago

persistent-configuration-with-configmaps

student@bchd:~/git/kubernetes-the-alta3-way$ tl persistent-configuration-with-configmaps
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
error: error reading mycode/config/nginx-base.conf: no such file or directory
error: error reading mycode/config/index-html-zork.html: no such file or directory
error: error reading mycode/config/nineteen-eighty-four.txt: no such file or directory
pod/nginx-configured created
[-] Test failed

Fixed in https://github.com/alta3/kubernetes-the-alta3-way/commit/2c586c291b3c2a0d72993c4500e86a7b20bee2f0

student@bchd:~/git/kubernetes-the-alta3-way$ tl persistent-configuration-with-configmaps
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
configmap/nginx-base-conf created
configmap/index-html-zork created
configmap/nineteen-eighty-four created
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          1s
[+] Test complete

Additional issue - teardown isn't cleaning up configmaps


student@bchd:~/git/kubernetes-the-alta3-way$ tl persistent-configuration-with-configmaps
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
error: failed to create configmap: configmaps "nginx-base-conf" already exists
error: failed to create configmap: configmaps "index-html-zork" already exists
error: failed to create configmap: configmaps "nineteen-eighty-four" already exists
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          1s
[+] Test complete

Fixed in https://github.com/alta3/kubernetes-the-alta3-way/commit/61256a4eb0abccc2b1e3a0f1dad6552e22d2d96f
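The "already exists" errors can also be avoided by making the configmap step idempotent rather than relying on teardown. A common pattern (a sketch, not necessarily what the commit does; configmap name and file path taken from the log above):

```shell
# create --dry-run=client renders the object locally without touching the cluster;
# piping it to apply makes re-runs update the configmap instead of erroring out
kubectl create configmap nginx-base-conf \
  --from-file=mycode/config/nginx-base.conf \
  --dry-run=client -o yaml | kubectl apply -f -
```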
bryfry commented 1 year ago

⚠️ horizontal-scaling-with-kubectl-scale

student@bchd:~/git/kubernetes-the-alta3-way$ tl horizontal-scaling-with-kubectl-scale
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
Error from server (NotFound): deployments.apps "sise-deploy" not found
Error from server (NotFound): deployments.apps "sise-deploy" not found
deployment.apps/webservice unchanged
error: the path "deployments/webby-deploy" does not exist
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   0/3     3            0           1s
Error from server (NotFound): deployments.apps "sise-deploy" not found
[-] Test failed

This test.sh doesn't match the supplied manifest labs/yaml/webby-deploy-filled-out.yaml (it expects deployments named sise-deploy and webby-deploy, but the manifest creates webservice). Creating a new issue.

bryfry commented 1 year ago

init-containers

student@bchd:~/git/kubernetes-the-alta3-way$ tl init-containers                                                                                                                                                              
[+] Cleaning cluster                                                                                                                                                                                                        
[+] Preparing init-containers                                                                                                                                                                                               
[+] Setup complete                                                                                                                                                                                                          
[+] Testing init-containers                                                                                                                                                                                                 
+ kubectl apply -f ../yaml/init-cont-pod.yaml                                                                                                                                                                               
pod/myapp-pod created                                                                                                                                                                                                       
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
error: timed out waiting for the condition on pods/myapp-pod
[-] Test failed

Exec'd into the init container with `kubectl exec -it myapp-pod -c init-myservice -- /bin/sh`:

/ # nslookup myservice                                                                                                                                                                                                      
Server:         172.16.3.10                                                                                                                                                                                                 
Address:        172.16.3.10:53                                                                                                                                                                                              

Non-authoritative answer:                                                                                                                                                                                                   
Name:   myservice.default.svc.cluster.local                                                                                                                                                                                 
Address: 172.16.3.81                                                                                                                                                                                                                                                                                                                                                                                                                      
** server can't find myservice.svc.cluster.local: NXDOMAIN                                                                                                                                                                                                                                                                                                                                                                                 
** server can't find myservice.cluster.local: NXDOMAIN                                                                                                                                                                                                                                                                                                                                                                                          
** server can't find myservice.svc.cluster.local: NXDOMAIN                                                                                                                                                                                                                                                                                                                                                                                     
** server can't find myservice.cluster.local: NXDOMAIN                                                                                                                                                                                                                                                                                                                                                                                 
/ # echo $?                                                                                                                                                                                                                 
1     

Expected:

/ # nslookup myservice
Server:         172.16.3.10
Address:        172.16.3.10:53

Non-authoritative answer:
Name:   myservice.default.svc.cluster.local
Address: 172.16.3.166

*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer
*** Can't find myservice.default.svc.cluster.local: No answer
*** Can't find myservice.svc.cluster.local: No answer
*** Can't find myservice.cluster.local: No answer

/ # echo $?
0

Fixed in https://github.com/alta3/kubernetes-the-alta3-way/commit/f866c8714723d3ddca0bce2577a5c03cc20bab31 - set busybox version to 1.34.0
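For context: on newer busybox images `nslookup` exits non-zero when any of the search-domain lookups returns NXDOMAIN, even though the service name itself resolves, so the init container's until-loop never succeeds; busybox 1.34.0 exits 0 in that case. A minimal sketch of the relevant pod spec (field values assumed, not copied from the repo's init-cont-pod.yaml):

```yaml
spec:
  initContainers:
  - name: init-myservice
    image: busybox:1.34.0   # pinned: later busybox nslookup exits 1 on search-domain NXDOMAIN
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done']
```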

student@bchd:~/git/kubernetes-the-alta3-way$ tl init-containers                                                                                                                                                              
[+] Cleaning cluster
[+] Preparing init-containers
[+] Setup complete
[+] Testing init-containers
+ kubectl apply -f ../yaml/init-cont-pod.yaml
pod/myapp-pod created
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
pod/myapp-pod condition met
+ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          20s
[+] Test complete
bryfry commented 1 year ago

examining-resources-with-kubectl-describe

student@bchd:~/git/kubernetes-the-alta3-way$ tl examining-resources-with-kubectl-describe
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3research/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
error: timed out waiting for the condition on pods/webweb
[-] Test failed

Fixed in https://github.com/alta3/kubernetes-the-alta3-way/commit/3eaa456c7a147589d6cd338192bcfd739b71072c - "webby to pull from correct url"

student@bchd:~/git/kubernetes-the-alta3-way$ tl examining-resources-with-kubectl-describe
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
pod/webweb condition met
+ kubectl delete pod webweb --now
pod "webweb" deleted
+ kubectl apply -f ../yaml/webweb-deploy.yaml
deployment.apps/webweb created
+ kubectl wait --for condition=Available --timeout 60s deployment.apps/webweb
deployment.apps/webweb condition met
[+] Test complete
bryfry commented 1 year ago

create-and-configure-basic-pods

student@bchd:~/git/kubernetes-the-alta3-way$ tl create-and-configure-basic-pods
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
error: timed out waiting for the condition on pods/webservice01
[-] Test failed

Fixed in https://github.com/alta3/kubernetes-the-alta3-way/commit/a9852c60e1ee83a211dfe28ed135dc801077f4c0 "webby to pull from correct url"
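When `kubectl wait` times out on a bad image reference like this, the pull failure shows up in the pod's events; a sketch for confirming it (pod name from the log above):

```shell
# STATUS column reads ErrImagePull / ImagePullBackOff for an unpullable image
kubectl get pod webservice01

# Filter cluster events down to just this pod to see the exact registry error
kubectl get events --field-selector involvedObject.name=webservice01
```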

student@bchd:~/git/kubernetes-the-alta3-way$ tl create-and-configure-basic-pods
[+] Cleaning cluster
[+] Preparing create-and-configure-basic-pods
[+] Setup complete
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
bryfry commented 1 year ago

Testing complete

Ready for review @BicycleWalrus @sfeeser

Notes

Testing commands example:

{
  tl kubectl-top
  tl multi-container-pod-design
  tl admission-controller
  tl taints-and-tolerations
  tl setting-an-applications-resource-requirements
  tl fluentd
  tl livenessprobes-and-readinessprobes
  tl listing-resources-with-kubectl-get
  tl isolating-resources-with-kubernetes-namespaces
  tl horizontal-scaling-with-kubectl-scale
  tl cluster-access-with-kubernetes-context
  tl create-and-configure-basic-pods
  tl persistent-configuration-with-configmaps
  tl create-and-consume-secrets
  tl init-containers
  tl examining-resources-with-kubectl-describe
  tl understanding-labels-and-selectors
} | tee $(date -I)_tl.log
Full test log - 2022-12-13_tl.log

```
[+] Cleaning cluster
[+] Preparing kubectl-top
[+] Setup complete
[+] Testing kubectl-top
+ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+ kubectl -n kube-system wait --for condition=Available --timeout 100s deployment.apps/metrics-server
deployment.apps/metrics-server condition met
+ kubectl -n kube-system wait --for condition=Available --timeout 30s apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io condition met
+ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-1   270m         13%    1427Mi          37%
node-2   91m          4%     1436Mi          37%
+ kubectl top pods --all-namespaces
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)
kube-system   calico-kube-controllers-64665b4556-qn2cj   5m           17Mi
kube-system   calico-node-4gf96                          43m          79Mi
kube-system   calico-node-fxj99                          36m          80Mi
kube-system   kube-dns-8c4b68cc7-8sj25                   3m           19Mi
kube-system   metrics-server-8ff8f88c6-mnbzs             86m          12Mi
[+] Test complete
[+] Cleaning cluster
[+] Preparing multi-container-pod-design
[+] Setup complete
[+] Testing multi-container-pod-design
+ kubectl apply -f ../yaml/netgrabber.yaml
pod/netgrab created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/netgrabber.yaml
pod/netgrab condition met
+ kubectl exec netgrab -c busybox -- sh -c 'ping 8.8.8.8 -c 1'
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=118 time=6.318 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 6.318/6.318/6.318 ms
+ kubectl apply -f ../yaml/nginx-conf.yaml
configmap/nginx-conf created
+ kubectl apply -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing admission-controller
[+] Setup complete
[+] Testing admission-controller
+ kubectl run no-lr --image=nginx:1.19.6
pod/no-lr created
+ kubectl wait --for condition=Ready --timeout 30s pod/no-lr
pod/no-lr condition met
+ kubectl apply -f ../yaml/lim-ran.yml
limitrange/mem-limit-range created
+ kubectl get limitrange
NAME              CREATED AT
mem-limit-range   2022-12-13T05:01:29Z
[+] Test complete
[+] Cleaning cluster
[+] Preparing taints-and-tolerations
[+] Setup complete
[+] Testing taints-and-tolerations
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl delete -f ../yaml/tnt01.yaml
deployment.apps "nginx" deleted
+ kubectl taint nodes node-1 trying_taints=yessir:NoSchedule
node/node-1 tainted
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl apply -f ../yaml/tnt02.yaml
deployment.apps/nginx-tolerated created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-tolerated
deployment.apps/nginx-tolerated condition met
+ kubectl get pods -o wide
NAME                               READY   STATUS              RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
netgrab                            2/2     Terminating         0          34s   192.168.84.181   node-1
nginx-677bf5c469-49zww             1/1     Running             0          18s   192.168.247.32   node-2
nginx-677bf5c469-4sn42             1/1     Running             0          18s   192.168.247.48   node-2
nginx-677bf5c469-5x5gp             1/1     Running             0          18s   192.168.247.47   node-2
nginx-677bf5c469-8pzb6             1/1     Running             0          18s   192.168.247.33   node-2
nginx-677bf5c469-dq9lg             1/1     Running             0          18s   192.168.247.34   node-2
nginx-677bf5c469-dxs2l             1/1     Running             0          18s   192.168.247.28   node-2
nginx-677bf5c469-jzghh             1/1     Running             0          18s   192.168.247.27   node-2
nginx-677bf5c469-rh5wg             1/1     Running             0          18s   192.168.247.31   node-2
nginx-677bf5c469-vwds6             1/1     Running             0          18s   192.168.247.30   node-2
nginx-677bf5c469-xfdxf             1/1     Running             0          18s   192.168.247.29   node-2
nginx-tolerated-76f4469cbd-4gxzl   1/1     Running             0          4s    192.168.84.131   node-1
nginx-tolerated-76f4469cbd-c98js   1/1     Running             0          4s    192.168.84.189   node-1
nginx-tolerated-76f4469cbd-cpqth   1/1     Running             0          4s    192.168.247.38   node-2
nginx-tolerated-76f4469cbd-jwl9h   1/1     Running             0          4s    192.168.84.188   node-1
nginx-tolerated-76f4469cbd-kkprd   1/1     Running             0          4s    192.168.247.46   node-2
nginx-tolerated-76f4469cbd-kvk4x   1/1     Running             0          4s    192.168.84.129   node-1
nginx-tolerated-76f4469cbd-qmrxp   0/1     ContainerCreating   0          4s    <none>           node-2
nginx-tolerated-76f4469cbd-whj7z   1/1     Running             0          4s    192.168.84.190   node-1
nginx-tolerated-76f4469cbd-xx4xg   1/1     Running             0          4s    192.168.84.191   node-1
nginx-tolerated-76f4469cbd-zwzb8   1/1     Running             0          4s    192.168.247.42   node-2
[+] Test complete
[+] Cleaning cluster
[+] Preparing setting-an-applications-resource-requirements
[+] Setup complete
[+] Testing setting-an-applications-resource-requirements
+ kubectl apply -f ../yaml/linux-pod-r.yaml
pod/linux-pod-r created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-r
pod/linux-pod-r condition met
+ kubectl apply -f ../yaml/linux-pod-rl.yaml
pod/linux-pod-rl created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-rl
pod/linux-pod-rl condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing fluentd
[+] Setup complete
[+] Testing fluentd
+ kubectl apply -f ../yaml/fluentd-conf.yaml
configmap/fluentd-config created
+ kubectl apply -f ../yaml/fluentd-pod.yaml
pod/logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/logger
pod/logger condition met
++ hostname -f
+ kubectl apply -f -
+ BCHD_IP=bchd.76a25a07-24e7-4f6a-a449-01fb957d90e3
+ j2 ../yaml/http_fluentd_config.yaml
configmap/fluentd-config configured
+ kubectl apply -f ../yaml/http_fluentd.yaml
pod/http-logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/http-logger
pod/http-logger condition met
+ kubectl get pods
NAME           READY   STATUS        RESTARTS   AGE
http-logger    2/2     Running       0          3s
linux-pod-r    1/1     Terminating   0          14s
linux-pod-rl   1/1     Terminating   0          10s
logger         2/2     Running       0          5s
[+] Test complete
[+] Cleaning cluster
[+] Preparing livenessprobes-and-readinessprobes
[+] Setup complete
[+] Testing livenessprobes-and-readinessprobes
+ kubectl apply -f ../yaml/badpod.yaml
pod/badpod created
+ kubectl wait --for condition=Ready --timeout 30s pod/badpod
pod/badpod condition met
+ kubectl apply -f ../yaml/sise-lp.yaml
pod/sise-lp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-lp
pod/sise-lp condition met
+ kubectl apply -f ../yaml/sise-rp.yaml
pod/sise-rp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-rp
pod/sise-rp condition met
+ kubectl get pods
NAME           READY   STATUS        RESTARTS   AGE
badpod         1/1     Running       0          16s
http-logger    2/2     Terminating   0          21s
linux-pod-r    1/1     Terminating   0          32s
linux-pod-rl   1/1     Terminating   0          28s
logger         2/2     Terminating   0          23s
sise-lp        1/1     Running       0          14s
sise-rp        1/1     Running       0          12s
[+] Test complete
[+] Cleaning cluster
[+] Preparing listing-resources-with-kubectl-get
[+] Setup complete
[+] Testing listing-resources-with-kubectl-get
+ kubectl get services -A
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   172.16.3.1    <none>        443/TCP         125m
kube-system   kube-dns     ClusterIP   172.16.3.10   <none>        53/UDP,53/TCP   123m
+ kubectl get deployments.apps -A
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           123m
kube-system   kube-dns                  1/1     1            1           123m
+ kubectl get secrets
No resources found in default namespace.
[+] Test complete
[+] Cleaning cluster
[+] Preparing isolating-resources-with-kubernetes-namespaces
[+] Setup complete
[+] Testing isolating-resources-with-kubernetes-namespaces
+ kubectl apply -f ../yaml/test-ns.yaml
namespace/test created
+ kubectl apply -f ../yaml/dev-ns.yaml
namespace/dev created
+ kubectl apply -f ../yaml/prod-ns.yaml
namespace/prod created
+ kubectl apply -f ../yaml/test-rq.yaml --namespace=test
resourcequota/test-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota unchanged
+ kubectl get namespaces dev prod test
NAME   STATUS   AGE
dev    Active   1s
prod   Active   1s
test   Active   2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
deployment.apps/webservice condition met
deployment.apps/webservice scaled
deployment.apps/webservice unchanged
error: the path "deployments/webby-deploy" does not exist
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   3/3     3            3           3s
[+] Test complete
[+] Cleaning cluster
[+] Preparing cluster-access-with-kubernetes-context
[+] Setup complete
[+] Testing cluster-access-with-kubernetes-context
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
+ kubectl config set-context dev-context --namespace=dev
Context "dev-context" created.
+ kubectl config use-context dev-context
Switched to context "dev-context".
+ kubectl config set-context dev-context --namespace=dev --user=admin --cluster=kubernetes-the-alta3-way
Context "dev-context" modified.
+ kubectl config get-contexts
CURRENT   NAME                       CLUSTER                    AUTHINFO   NAMESPACE
*         dev-context                kubernetes-the-alta3-way   admin      dev
          kubernetes-the-alta3-way   kubernetes-the-alta3-way   admin      default
+ kubectl config set-context kubernetes-the-alta3-way --namespace=default
Context "kubernetes-the-alta3-way" modified.
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-configure-basic-pods
[+] Setup complete
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
configmap/nginx-base-conf created
configmap/index-html-zork created
configmap/nineteen-eighty-four created
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-consume-secrets
[+] Setup complete
[+] Testing create-and-consume-secrets
+ kubectl apply -f ../yaml/mysql-secret.yaml
secret/mysql-secret created
+ kubectl apply -f ../yaml/mysql-locked.yaml
pod/mysql-locked created
+ kubectl wait --for condition=Ready --timeout 30s pod/mysql-locked
pod/mysql-locked condition met
+ kubectl get pod mysql-locked
NAME           READY   STATUS    RESTARTS   AGE
mysql-locked   1/1     Running   0          2s
+ kubectl get secrets mysql-secret
NAME           TYPE                       DATA   AGE
mysql-secret   kubernetes.io/basic-auth   1      2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing init-containers
[+] Setup complete
[+] Testing init-containers
+ kubectl apply -f ../yaml/init-cont-pod.yaml
pod/myapp-pod created
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
pod/myapp-pod condition met
+ kubectl get pods
NAME            READY   STATUS        RESTARTS      AGE
badpod          1/1     Terminating   1 (16s ago)   62s
myapp-pod       1/1     Running       0             14s
mysql-locked    1/1     Terminating   0             18s
simpleservice   1/1     Terminating   0             29s
[+] Test complete
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
pod/webweb condition met
+ kubectl delete pod webweb --now
pod "webweb" deleted
+ kubectl apply -f ../yaml/webweb-deploy.yaml
deployment.apps/webweb created
+ kubectl wait --for condition=Available --timeout 60s deployment.apps/webweb
deployment.apps/webweb condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing understanding-labels-and-selectors
[+] Setup complete
[+] Testing understanding-labels-and-selectors
+ kubectl apply -f ../yaml/nginx-pod.yaml
pod/nginx created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx
pod/nginx condition met
+ kubectl apply -f ../yaml/nginx-obj.yaml
deployment.apps/nginx-obj-create created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-obj-create
deployment.apps/nginx-obj-create condition met
[+] Test complete
```