alta3 / kubernetes-the-alta3-way

The greatest k8s installer on the planet!

Upgrade to kubernetes v1.27 #72

Closed: bryfry closed this issue 1 year ago

bryfry commented 1 year ago

Kubernetes v1.27: Chill Vibes

Supporting component releases

cni_version: "1.3.0"         # https://github.com/containernetworking/plugins/releases              
containerd_version: "1.7.6"  # https://github.com/containerd/containerd/releases                    
cri_tools_version: "1.27.1"  # https://github.com/kubernetes-sigs/cri-tools/releases                
cfssl_version: "1.6.4"       # https://github.com/cloudflare/cfssl/releases                         
runc_version: "1.1.9"        # https://github.com/opencontainers/runc/releases                      
coredns_version: "1.11.1"    # https://github.com/coredns/coredns/releases                          
calico_version: "3.26.3"     # https://github.com/projectcalico/calico/releases                     
helm_version: "3.13.0"       # https://github.com/helm/helm/releases                                
gvisor_version: "latest"     # https://github.com/google/gvisor/releases                             
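
As a quick post-install sanity check, each pinned component can be compared against what the installed binary reports; a minimal sketch using standard version flags (run on a cluster node):

containerd --version     # expect containerd 1.7.6
runc --version           # expect runc 1.1.9
crictl --version         # expect crictl v1.27.1
helm version --short     # expect v3.13.0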
bryfry commented 1 year ago

Via 1.27 branch

student@bchd:~/git/kubernetes-the-alta3-way$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
node-1   Ready    <none>   3m5s   v1.27.1
node-2   Ready    <none>   3m5s   v1.27.1
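
For anyone reproducing this, a minimal sketch of switching to the branch under test (the origin remote name is an assumption):

git fetch origin
git checkout 1.27    # branch under test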
bryfry commented 1 year ago

Testing script failures:

@BicycleWalrus for awareness

horizontal-scaling-with-kubectl-scale

[+] Preparing horizontal-scaling-with-kubectl-scale    
[+] Setup complete                                 
[+] Testing horizontal-scaling-with-kubectl-scale                                                     
deployment.apps/webservice created                                                                    
Error from server (NotFound): deployments.apps "sise-deploy" not found               
error: no objects passed to scale                                                                     
deployment.apps/webservice unchanged                                                                  
error: the path "deployments/webby-deploy" does not exist                            
NAME         READY   UP-TO-DATE   AVAILABLE   AGE                                                     
webservice   0/3     3            0           0s   
Error from server (NotFound): deployments.apps "sise-deploy" not found                                
[-] Test failed

This is a continued occurrence of the known issue: https://github.com/alta3/kubernetes-the-alta3-way/issues/38
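
The log suggests the test script still scales old object names (sise-deploy, webby-deploy) while the lab now creates a deployment named webservice. A hypothetical sketch of the renamed scale check (the webservice name is taken from the log above; the exact test steps are an assumption):

kubectl scale deployment webservice --replicas=3   # was: sise-deploy
kubectl get deployment webservice                  # confirm the scale-out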

BicycleWalrus commented 1 year ago

The lab has changed significantly since that test script was written. Is there an environment running 1.27 I can jump on to test this? I suspect it'll work just fine, but I'll take a look at it.

bryfry commented 1 year ago

@BicycleWalrus the 1.27 branch is checked out, available, and ready for that fix. Please update https://github.com/alta3/kubernetes-the-alta3-way/issues/38 when fixed, as this is highlighted as a failing test there.

bryfry commented 1 year ago

Testing complete

Ready for review @BicycleWalrus @sfeeser

Notes

Testing commands:

{
  tl create-and-configure-a-replicaset
  tl writing-a-deployment-manifest
  tl persistent-configuration-with-configmaps
  tl storage-internal-oot-csi
  tl fluentd
  tl hostnames-fqdn-lab
  tl examining-resources-with-kubectl-describe
  tl kubectl-top
  tl patching
  tl horizontal-scaling-with-kubectl-scale
  tl create-and-configure-basic-pods
  tl strategies
  tl multi-container-pod-design
  tl resources-and-scheduling
  tl install-coredns-lab
  tl livenessprobes-and-readinessprobes
  tl deploy-a-networkpolicy
  tl revert-coredns-to-kubedns
  tl RBAC-authentication-authorization-lab
  tl taints-and-tolerations
  tl isolating-resources-with-kubernetes-namespaces
  tl autoscaling-challenge
  tl listing-resources-with-kubectl-get
  tl cluster-access-with-kubernetes-context
  tl host_networking
  tl setting-an-applications-resource-requirements
  tl init-containers
  tl understanding-labels-and-selectors
  tl admission-controller
  tl rolling-updates-and-rollbacks
  tl expose-service-via-ingress
  tl configure-coredns-lab
  tl advanced-logging
  tl create-and-consume-secrets
  tl autoscaling
  tl exposing-a-service 
} | tee $(date -I)_tl.log
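
After a full run, failures can be pulled straight out of the dated log; a small convenience (assumes grep runs the same day the log was written):

grep -nF '[-] Test failed' $(date -I)_tl.log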
Log output

[+] Cleaning cluster
[+] Preparing create-and-configure-a-replicaset
[+] Setup complete
[+] Testing create-and-configure-a-replicaset
[+] Test complete
[+] Cleaning cluster
[+] Preparing writing-a-deployment-manifest
[+] Setup complete
[+] Testing writing-a-deployment-manifest
[+] Test complete
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
configmap/nginx-base-conf created
configmap/index-html-zork created
configmap/nineteen-eighty-four created
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          1s
[+] Test complete
[+] Cleaning cluster
[+] Preparing storage-internal-oot-csi
[+] Setup complete
[+] Testing storage-internal-oot-csi
[+] Test complete
[+] Cleaning cluster
[+] Preparing fluentd
[+] Setup complete
[+] Testing fluentd
+ kubectl apply -f ../yaml/fluentd-conf.yaml
configmap/fluentd-config created
+ kubectl apply -f ../yaml/fluentd-pod.yaml
pod/logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/logger
pod/logger condition met
+ kubectl apply -f -
++ hostname -f
+ BCHD_IP=bchd.7d496200-d853-4b04-8207-556d672af64c
+ j2 ../yaml/http_fluentd_config.yaml
configmap/fluentd-config configured
+ kubectl apply -f ../yaml/http_fluentd.yaml
pod/http-logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/http-logger
pod/http-logger condition met
+ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
http-logger   2/2     Running   0          1s
logger        2/2     Running   0          3s
[+] Test complete
[+] Cleaning cluster
[+] Preparing hostnames-fqdn-lab
[+] Setup complete
[+] Testing hostnames-fqdn-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
pod/webweb condition met
+ kubectl delete pod webweb --now
pod "webweb" deleted
+ kubectl apply -f ../yaml/webweb-deploy.yaml
deployment.apps/webweb created
+ kubectl wait --for condition=Available --timeout 60s deployment.apps/webweb
deployment.apps/webweb condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing kubectl-top
[+] Setup complete
[+] Testing kubectl-top
+ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+ kubectl -n kube-system wait --for condition=Available --timeout 100s deployment.apps/metrics-server
deployment.apps/metrics-server condition met
+ kubectl -n kube-system wait --for condition=Available --timeout 30s apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io condition met
+ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
node-1   373m         18%    1335Mi          35%       
node-2   118m         5%     1229Mi          32%       
+ kubectl top pods --all-namespaces
NAMESPACE     NAME                                      CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-98b6bd5fb-dzbc8   6m           15Mi            
kube-system   calico-node-9j859                         52m          74Mi            
kube-system   calico-node-hvzt7                         49m          74Mi            
kube-system   kube-dns-54f8b57758-lttz7                 2m           18Mi            
kube-system   metrics-server-5d875656f5-nrxtp           4m           11Mi            
[+] Test complete
[+] Cleaning cluster
[+] Preparing patching
[+] Setup complete
[+] Testing patching
[+] Test complete
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
Error from server (NotFound): deployments.apps "sise-deploy" not found
error: no objects passed to scale
deployment.apps/webservice unchanged
error: the path "deployments/webby-deploy" does not exist
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   0/3     3            0           0s
Error from server (NotFound): deployments.apps "sise-deploy" not found
[-] Test failed
[+] Cleaning cluster
[+] Preparing create-and-configure-basic-pods
[+] Setup complete
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing strategies
[+] Setup complete
[+] Testing strategies
[+] Test complete
[+] Cleaning cluster
[+] Preparing multi-container-pod-design
[+] Setup complete
[+] Testing multi-container-pod-design
+ kubectl apply -f ../yaml/netgrabber.yaml
pod/netgrab created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/netgrabber.yaml
pod/netgrab condition met
+ kubectl exec netgrab -c busybox -- sh -c 'ping 8.8.8.8 -c 1'
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=59 time=8.623 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 8.623/8.623/8.623 ms
+ kubectl apply -f ../yaml/nginx-conf.yaml
configmap/nginx-conf created
+ kubectl apply -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing resources-and-scheduling
[+] Setup complete
[+] Testing resources-and-scheduling
+ kubectl apply -f ../yaml/dev-rq.yaml --namespace=dev
resourcequota/dev-resourcequota created
+ kubectl apply -f ../yaml/resourced_deploy.yml
deployment.apps/resourced-deploy created
+ kubectl wait --for condition=Available --timeout 60s --namespace=dev deployments.apps/resourced-deploy
deployment.apps/resourced-deploy condition met
+ kubectl get -f ../yaml/resourced_deploy.yml
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
resourced-deploy   4/5     4            4           11s
[+] Test complete
[+] Cleaning cluster
[+] Preparing install-coredns-lab
[+] Setup complete
[+] Testing install-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing livenessprobes-and-readinessprobes
[+] Setup complete
[+] Testing livenessprobes-and-readinessprobes
+ kubectl apply -f ../yaml/badpod.yaml
pod/badpod created
+ kubectl wait --for condition=Ready --timeout 30s pod/badpod
pod/badpod condition met
+ kubectl apply -f ../yaml/sise-lp.yaml
pod/sise-lp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-lp
pod/sise-lp condition met
+ kubectl apply -f ../yaml/sise-rp.yaml
pod/sise-rp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-rp
pod/sise-rp condition met
+ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
badpod    1/1     Running   0          33s
sise-lp   1/1     Running   0          15s
sise-rp   1/1     Running   0          13s
[+] Test complete
[+] Cleaning cluster
[+] Preparing deploy-a-networkpolicy
[+] Setup complete
[+] Testing deploy-a-networkpolicy
[+] Test complete
[+] Cleaning cluster
[+] Preparing revert-coredns-to-kubedns
[+] Setup complete
[+] Testing revert-coredns-to-kubedns
[+] Test complete
[+] Cleaning cluster
[+] Preparing RBAC-authentication-authorization-lab
[+] Setup complete
[+] Testing RBAC-authentication-authorization-lab
+ kubectl apply -f ../yaml/t3-support.yaml
role.rbac.authorization.k8s.io/t3-support created
+ kubectl apply -f ../yaml/alice-csr.yaml
certificatesigningrequest.certificates.k8s.io/alice created
+ kubectl certificate approve alice
certificatesigningrequest.certificates.k8s.io/alice approved
+ kubectl apply -f ../yaml/t3-support-binding.yaml
rolebinding.rbac.authorization.k8s.io/t3-support created
[+] Test complete
[+] Cleaning cluster
[+] Preparing taints-and-tolerations
[+] Setup complete
[+] Testing taints-and-tolerations
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl delete -f ../yaml/tnt01.yaml
deployment.apps "nginx" deleted
+ kubectl taint nodes node-1 trying_taints=yessir:NoSchedule
node/node-1 tainted
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl apply -f ../yaml/tnt02.yaml
deployment.apps/nginx-tolerated created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-tolerated
deployment.apps/nginx-tolerated condition met
+ kubectl get pods -o wide
NAME                               READY   STATUS              RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
badpod                             1/1     Terminating         0          60s   192.168.247.12   node-2   <none>           <none>
nginx-6f6c489777-29q7w             1/1     Running             0          8s    192.168.247.23   node-2   <none>           <none>
nginx-6f6c489777-6b6kb             1/1     Running             0          8s    192.168.247.18   node-2   <none>           <none>
nginx-6f6c489777-94nlp             1/1     Running             0          8s    192.168.247.22   node-2   <none>           <none>
nginx-6f6c489777-9rz6j             1/1     Running             0          8s    192.168.247.21   node-2   <none>           <none>
nginx-6f6c489777-hcfv8             1/1     Running             0          8s    192.168.247.25   node-2   <none>           <none>
nginx-6f6c489777-khhsk             1/1     Running             0          8s    192.168.247.19   node-2   <none>           <none>
nginx-6f6c489777-pnbll             1/1     Running             0          8s    192.168.247.27   node-2   <none>           <none>
nginx-6f6c489777-stvtz             1/1     Running             0          8s    192.168.247.24   node-2   <none>           <none>
nginx-6f6c489777-tdjfb             1/1     Running             0          8s    192.168.247.20   node-2   <none>           <none>
nginx-6f6c489777-zgcc2             1/1     Running             0          8s    192.168.247.26   node-2   <none>           <none>
nginx-tolerated-64698bb964-22t22   1/1     Running             0          2s    192.168.84.159   node-1   <none>           <none>
nginx-tolerated-64698bb964-7smgr   1/1     Running             0          2s    192.168.247.31   node-2   <none>           <none>
nginx-tolerated-64698bb964-8k2mq   1/1     Running             0          2s    192.168.247.28   node-2   <none>           <none>
nginx-tolerated-64698bb964-bsmr6   1/1     Running             0          2s    192.168.84.157   node-1   <none>           <none>
nginx-tolerated-64698bb964-cgpxj   1/1     Running             0          2s    192.168.84.158   node-1   <none>           <none>
nginx-tolerated-64698bb964-dw5gz   1/1     Running             0          2s    192.168.84.155   node-1   <none>           <none>
nginx-tolerated-64698bb964-gbbzt   1/1     Running             0          2s    192.168.84.156   node-1   <none>           <none>
nginx-tolerated-64698bb964-gdrlh   0/1     ContainerCreating   0          2s    <none>           node-2   <none>           <none>
nginx-tolerated-64698bb964-ktnwr   0/1     ContainerCreating   0          2s    <none>           node-2   <none>           <none>
nginx-tolerated-64698bb964-mflrh   1/1     Running             0          2s    192.168.84.154   node-1   <none>           <none>
sise-lp                            1/1     Terminating         0          42s   192.168.84.147   node-1   <none>           <none>
sise-rp                            1/1     Terminating         0          40s   192.168.84.148   node-1   <none>           <none>
[+] Test complete
[+] Cleaning cluster
[+] Preparing isolating-resources-with-kubernetes-namespaces
[+] Setup complete
[+] Testing isolating-resources-with-kubernetes-namespaces
+ kubectl apply -f ../yaml/test-ns.yaml
namespace/test created
+ kubectl apply -f ../yaml/dev-ns.yaml
namespace/dev created
+ kubectl apply -f ../yaml/prod-ns.yaml
namespace/prod created
+ kubectl apply -f ../yaml/test-rq.yaml --namespace=test
resourcequota/test-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota unchanged
+ kubectl get namespaces dev prod test
NAME   STATUS   AGE
dev    Active   2s
prod   Active   1s
test   Active   2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling-challenge
[+] Setup complete
[+] Testing autoscaling-challenge
[+] Test complete
[+] Cleaning cluster
[+] Preparing listing-resources-with-kubectl-get
[+] Setup complete
[+] Testing listing-resources-with-kubectl-get
+ kubectl get services -A
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   172.16.3.1    <none>        443/TCP         14m
kube-system   kube-dns     ClusterIP   172.16.3.10   <none>        53/UDP,53/TCP   11m
+ kubectl get deployments.apps -A
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           12m
kube-system   kube-dns                  1/1     1            1           11m
+ kubectl get secrets
No resources found in default namespace.
[+] Test complete
[+] Cleaning cluster
[+] Preparing cluster-access-with-kubernetes-context
[+] Setup complete
[+] Testing cluster-access-with-kubernetes-context
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
+ kubectl config set-context dev-context --namespace=dev
Context "dev-context" created.
+ kubectl config use-context dev-context
Switched to context "dev-context".
+ kubectl config set-context dev-context --namespace=dev --user=admin --cluster=kubernetes-the-alta3-way
Context "dev-context" modified.
+ kubectl config get-contexts
CURRENT   NAME                       CLUSTER                    AUTHINFO   NAMESPACE
*         dev-context                kubernetes-the-alta3-way   admin      dev
          kubernetes-the-alta3-way   kubernetes-the-alta3-way   admin      
+ kubectl config set-context kubernetes-the-alta3-way --namespace=default
Context "kubernetes-the-alta3-way" modified.
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
[+] Test complete
[+] Cleaning cluster
[+] Preparing host_networking
[+] Setup complete
[+] Testing host_networking
[+] Test complete
[+] Cleaning cluster
[+] Preparing setting-an-applications-resource-requirements
[+] Setup complete
[+] Testing setting-an-applications-resource-requirements
+ kubectl apply -f ../yaml/linux-pod-r.yaml
pod/linux-pod-r created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-r
pod/linux-pod-r condition met
+ kubectl apply -f ../yaml/linux-pod-rl.yaml
pod/linux-pod-rl created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-rl
pod/linux-pod-rl condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing init-containers
[+] Setup complete
[+] Testing init-containers
+ kubectl apply -f ../yaml/init-cont-pod.yaml
pod/myapp-pod created
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
pod/myapp-pod condition met
+ kubectl get pods
NAME           READY   STATUS        RESTARTS   AGE
linux-pod-r    1/1     Terminating   0          26s
linux-pod-rl   1/1     Terminating   0          21s
myapp-pod      1/1     Running       0          13s
[+] Test complete
[+] Cleaning cluster
[+] Preparing understanding-labels-and-selectors
[+] Setup complete
[+] Testing understanding-labels-and-selectors
+ kubectl apply -f ../yaml/nginx-pod.yaml
pod/nginx created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx
pod/nginx condition met
+ kubectl apply -f ../yaml/nginx-obj.yaml
deployment.apps/nginx-obj-create created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-obj-create
deployment.apps/nginx-obj-create condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing admission-controller
[+] Setup complete
[+] Testing admission-controller
+ kubectl run no-lr --image=nginx:1.19.6
pod/no-lr created
+ kubectl wait --for condition=Ready --timeout 30s pod/no-lr
pod/no-lr condition met
+ kubectl apply -f ../yaml/lim-ran.yml
limitrange/mem-limit-range created
+ kubectl get limitrange
NAME              CREATED AT
mem-limit-range   2023-10-09T16:40:55Z
[+] Test complete
[+] Cleaning cluster
[+] Preparing rolling-updates-and-rollbacks
[+] Setup complete
[+] Testing rolling-updates-and-rollbacks
[+] Test complete
[+] Cleaning cluster
[+] Preparing expose-service-via-ingress
[+] Setup complete
[+] Testing expose-service-via-ingress
[+] Test complete
[+] Cleaning cluster
[+] Preparing configure-coredns-lab
[+] Setup complete
[+] Testing configure-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing advanced-logging
[+] Setup complete
[+] Testing advanced-logging
+ kubectl apply -f ../yaml/counter-pod.yaml
pod/counter created
+ kubectl wait --for condition=Ready --timeout 30s pod/counter
pod/counter condition met
+ kubectl apply -f ../yaml/two-pack.yaml
deployment.apps/two-pack created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/two-pack
deployment.apps/two-pack condition met
+ kubectl apply -f ../yaml/nginx-rsyslog-pod.yaml
pod/nginx-rsyslog-pod created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx-rsyslog-pod
pod/nginx-rsyslog-pod condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-consume-secrets
[+] Setup complete
[+] Testing create-and-consume-secrets
+ kubectl apply -f ../yaml/mysql-secret.yaml
secret/mysql-secret created
+ kubectl apply -f ../yaml/mysql-locked.yaml
pod/mysql-locked created
+ kubectl wait --for condition=Ready --timeout 30s pod/mysql-locked
pod/mysql-locked condition met
+ kubectl get pod mysql-locked
NAME           READY   STATUS    RESTARTS   AGE
mysql-locked   1/1     Running   0          13s
+ kubectl get secrets mysql-secret
NAME           TYPE                       DATA   AGE
mysql-secret   kubernetes.io/basic-auth   1      14s
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling
[+] Setup complete
[+] Testing autoscaling
[+] Test complete
[+] Cleaning cluster
[+] Preparing exposing-a-service
[+] Setup complete
[+] Testing exposing-a-service
[+] Test complete