alta3 / kubernetes-the-alta3-way

The greatest k8s installer on the planet!

Upgrade to kubernetes v1.28 #83

Closed: bryfry closed this issue 8 months ago

bryfry commented 1 year ago

Kubernetes v1.28: Planternetes

Supporting component releases

k8s_version: "1.28.7"        # https://kubernetes.io/releases/#release-v1-28
etcd_version: "3.5.12"       # https://github.com/etcd-io/etcd/releases
cni_version: "1.4.1"         # https://github.com/containernetworking/plugins/releases 
containerd_version: "1.7.14" # https://github.com/containerd/containerd/releases
cri_tools_version: "1.28.0"  # https://github.com/kubernetes-sigs/cri-tools/releases
cfssl_version: "1.6.5"       # https://github.com/cloudflare/cfssl/releases
runc_version: "1.1.9"        # https://github.com/opencontainers/runc/releases
coredns_version: "1.11.12"   # https://github.com/coredns/coredns/releases
calico_version: "3.27.2"     # https://github.com/projectcalico/calico/releases
helm_version: "3.14.3"       # https://github.com/helm/helm/releases
gvisor_version: "latest"     # https://github.com/google/gvisor/releases
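
For reference, a quick way to spot-check the installed component versions after an upgrade is something like the following (a sketch; it assumes the binaries are on the PATH of the node being checked, and that CoreDNS is deployed as kube-dns as in this repo):

kubectl version
etcd --version
containerd --version
runc --version
crictl --version
helm version
cfssl version
# CoreDNS runs in-cluster; check its image tag:
kubectl -n kube-system get deployment kube-dns -o jsonpath='{.spec.template.spec.containers[0].image}'
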
bryfry commented 1 year ago

Observed error on controller kube-scheduler service:

Oct 09 17:33:54 controller kube-scheduler[4179]: I1009 17:33:54.739415    4179 serving.go:348] Generated self-signed cert in-memory
Oct 09 17:33:54 controller kube-scheduler[4179]: E1009 17:33:54.739868    4179 run.go:74] "command failed" err="no kind \"KubeSchedulerConfiguration\" is registered for version \"kubescheduler.config.k8s.io/v1beta2\" in scheme \"pkg/scheduler/apis/config/scheme/scheme.go:30\""
Oct 09 17:33:54 controller systemd[1]: kube-scheduler.service: Main process exited, code=exited, status=1/FAIL

Reason:

kube-scheduler component config (KubeSchedulerConfiguration) kubescheduler.config.k8s.io/v1beta2 is removed in v1.28. Migrate kube-scheduler configuration files to kubescheduler.config.k8s.io/v1. (https://github.com/kubernetes/kubernetes/pull/117649, @SataQiu)

From 1.28 Release Notes

Fix:

https://github.com/alta3/kubernetes-the-alta3-way/commit/447173e79eb581e4b36fe8bd00ca27a077ca8794
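
In practice the fix is just migrating the apiVersion of the scheduler's component config. A minimal KubeSchedulerConfiguration for v1.28 might look like this (a sketch; the kubeconfig path is illustrative and should match wherever this repo templates it):

apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
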

bryfry commented 1 year ago

Observed errors on node kubelet service:

Oct 09 17:48:07 node-1 kubelet[5887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 09 17:48:07 node-1 kubelet[5887]: Flag --register-node has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 09 17:48:07 node-1 kubelet[5887]: E1009 17:48:07.748526    5887 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/kubelet-config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/kubelet-config.yaml, error failed to decode: no kind \"KubeletConfiguration\" is registered for version \"kubelet.config.k8s.io/v1\" in scheme \"pkg/kubelet/apis/config/scheme/scheme.go:33\""

Fix:

https://github.com/alta3/kubernetes-the-alta3-way/commit/ab7762044813f2432809518e4cce06322d7321d3
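
The failing file declared apiVersion: kubelet.config.k8s.io/v1, but KubeletConfiguration is still served as kubelet.config.k8s.io/v1beta1 in v1.28. The top of /var/lib/kubelet/kubelet-config.yaml should therefore look roughly like this (a sketch; the values are illustrative, with containerRuntimeEndpoint moved into the config since the CLI flag is deprecated):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
registerNode: true
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
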

bryfry commented 1 year ago
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    <none>   57s   v1.28.2
node-2   Ready    <none>   57s   v1.28.2
JoeSpizz commented 8 months ago

@bryfry I spoke with Tim and Stu and I think we are in a good position. This update looks like it successfully pushes us up to 1.28.2, which is technically "good enough" for Tim. We've run the smoke tests as well.

My thought after looking through this was that I could do the work to upgrade us to 1.28.7 on my own... maybe? It seems like we are iterating the versions of the various components and then testing whether everything runs smoothly with those upgrades. What do you think?

JoeSpizz commented 8 months ago

Testing release with:


{
tl resources-and-scheduling
tl setting-an-applications-resource-requirements
tl cluster-access-with-kubernetes-context
tl hostnames-fqdn-lab
tl understanding-labels-and-selectors
tl autoscaling-challenge
tl expose-service-via-ingress
tl revert-coredns-to-kubedns
tl multi-container-pod-design
tl storage-internal-oot-csi
tl strategies
tl create-and-consume-secrets
tl isolating-resources-with-kubernetes-namespaces
tl deploy-a-networkpolicy
tl writing-a-deployment-manifest
tl exposing-a-service
tl host_networking
tl create-and-configure-a-replicaset
tl listing-resources-with-kubectl-get
tl kubectl-top
tl rolling-updates-and-rollbacks
tl advanced-logging
tl examining-resources-with-kubectl-describe
tl patching
tl install-coredns-lab
tl admission-controller
tl livenessprobes-and-readinessprobes
tl autoscaling
tl taints-and-tolerations
tl create-and-configure-basic-pods
tl horizontal-scaling-with-kubectl-scale
tl init-containers
tl fluentd
tl persistent-configuration-with-configmaps
tl configure-coredns-lab
tl RBAC-authentication-authorization-lab
} | tee $(date -I)_tl.log
JoeSpizz commented 8 months ago

@BicycleWalrus For your awareness, these manifests were added since the last release and were not included in any test.

/labs/yaml/awx-ansible.yaml
/labs/yaml/awx-pv.yaml
/labs/yaml/awx-pvc.yaml
/labs/yaml/awx-sc.yaml
/labs/yaml/awx-values.yaml
/labs/yaml/cm-pod.yaml
/labs/yaml/cm-vars.yaml
/labs/yaml/ingress-tls-secret.yaml
/labs/yaml/my-ssh-secret.yaml

These can be confirmed as new here.

JoeSpizz commented 8 months ago

Failed test during testing:

+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
Error from server (NotFound): deployments.apps "sise-deploy" not found
error: no objects passed to scale
deployment.apps/webservice unchanged
error: the path "deployments/webby-deploy" does not exist
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   0/3     3            0           0s
Error from server (NotFound): deployments.apps "sise-deploy" not found
[-] Test failed

@BicycleWalrus for awareness.

This is a known error that has been documented in previous upgrades:

Assigned Issue for Fix: https://github.com/alta3/kubernetes-the-alta3-way/issues/38
1.25: https://github.com/alta3/kubernetes-the-alta3-way/issues/26#issuecomment-1347688615
1.26: https://github.com/alta3/kubernetes-the-alta3-way/issues/37#issuecomment-1552368558
1.27: https://github.com/alta3/kubernetes-the-alta3-way/issues/72#issuecomment-1753333543

BicycleWalrus commented 8 months ago

Regarding the failed horizontal-scaling-with-kubectl-scale test reported above:

That lab is OBE and will certainly fail. You can safely remove that from your smoke test.

BicycleWalrus commented 8 months ago

Regarding the manifests added since the last release and not included in any test:

Those are new and were developed for a one-off lab Zach needed for Disney. They use the same resources found in other labs, other than the AWX CRDs, which we don't need to test.

JoeSpizz commented 8 months ago
calico_version: "3.27.2"     # https://github.com/projectcalico/calico/releases

Compared: Alta3's Calico Jinja template vs. Calico's official yaml (scroll to manifest/calico.yaml to see what changed).

Documentation for how we came up with these steps: Calico Upgrade.

Commits to the Calico manifest indicate that only container image version numbers have been updated. No expected changes to kubernetes-the-alta3-way. :tada:
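
For future Calico bumps, one way to spot-check this kind of drift is to diff the upstream manifest against the repo's Jinja template, roughly like this (a sketch; the template path is illustrative):

curl -s https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml \
  | diff - roles/calico/templates/calico.yaml.j2 | less
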

BicycleWalrus commented 8 months ago

Discussed testing solution with @sfeeser, and I really liked what we came up with.

Essentially, we can get some spizzware that would read a code block like this:

<!-- Testing Start
kubectl do something
kubectl do something
kubectl do something
-->

The parser will grab these commands and run them. If RC = 0, we have a successful test; otherwise, report the failure and the error message.

Then we'll tear it down and prepare for the next test. Feel free to refine this design, but essentially, that's what I'd like to be able to do.

We'd then only set up tests on certain labs which exercise pertinent resources. GTG otherwise!
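
A minimal sketch of that parser, assuming the markers are exactly "<!-- Testing Start" and "-->" and that each lab's commands live in its content.md (names are illustrative):

#!/usr/bin/env bash
# Run the commands found between the testing markers of a lab page,
# stopping at the first non-zero return code.
lab_md="$1"

sed -n '/<!-- Testing Start/,/-->/p' "$lab_md" \
  | grep -v -e '<!--' -e '-->' \
  | while IFS= read -r cmd; do
      [ -z "$cmd" ] && continue
      if ! bash -c "$cmd"; then
        echo "[-] Test failed in $lab_md: $cmd" >&2
        exit 1
      fi
    done && echo "[+] Test complete: $lab_md"
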

sfeeser commented 8 months ago

OK, I like what Tim is suggesting here; however, it would look like this:

Here is what I like about this idea:

  1. It keeps developers focused on the content.md page rather than digging into test scripts.
  2. Zero maintenance on the test.sh scripts in k8s-the-alta3-way; all tests would be inside smoketest.sh.
  3. It will be easier for Spizz to parse all the k8s courses for <!--- TESTING START ---> <!--- TESTING END ---> markers, so there would be no guessing about which labs contain tests.
  4. It will be easy to create a smoketest.sh script in k8s-the-alta3-way too.
  5. In fact, I can't think of a downside to this, so am I missing something?

@bryfry

bryfry commented 8 months ago

Considered OBE given the existing testing scripts.

JoeSpizz commented 8 months ago

Testing release with:

{
tl resources-and-scheduling
tl setting-an-applications-resource-requirements
tl cluster-access-with-kubernetes-context
tl hostnames-fqdn-lab
tl understanding-labels-and-selectors
tl autoscaling-challenge
tl expose-service-via-ingress
tl revert-coredns-to-kubedns
tl multi-container-pod-design
tl storage-internal-oot-csi
tl strategies
tl create-and-consume-secrets
tl isolating-resources-with-kubernetes-namespaces
tl deploy-a-networkpolicy
tl writing-a-deployment-manifest
tl exposing-a-service
tl host_networking
tl create-and-configure-a-replicaset
tl listing-resources-with-kubectl-get
tl kubectl-top
tl rolling-updates-and-rollbacks
tl advanced-logging
tl examining-resources-with-kubectl-describe
tl patching
tl install-coredns-lab
tl admission-controller
tl livenessprobes-and-readinessprobes
tl autoscaling
tl taints-and-tolerations
tl create-and-configure-basic-pods
tl horizontal-scaling-with-kubectl-scale
tl init-containers
tl fluentd
tl persistent-configuration-with-configmaps
tl configure-coredns-lab
tl RBAC-authentication-authorization-lab
} | tee $(date -I)_tl.log
Test Log
[+] Cleaning cluster
[+] Preparing resources-and-scheduling
[+] Setup complete
[+] Testing resources-and-scheduling
+ kubectl apply -f ../yaml/dev-rq.yaml --namespace=dev
resourcequota/dev-resourcequota created
+ kubectl apply -f ../yaml/resourced_deploy.yml
deployment.apps/resourced-deploy created
+ kubectl wait --for condition=Available --timeout 60s --namespace=dev deployments.apps/resourced-deploy
deployment.apps/resourced-deploy condition met
+ kubectl get -f ../yaml/resourced_deploy.yml
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
resourced-deploy   4/5     4            4           10s
[+] Test complete
[+] Cleaning cluster
[+] Preparing setting-an-applications-resource-requirements
[+] Setup complete
[+] Testing setting-an-applications-resource-requirements
+ kubectl apply -f ../yaml/linux-pod-r.yaml
pod/linux-pod-r created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-r
pod/linux-pod-r condition met
+ kubectl apply -f ../yaml/linux-pod-rl.yaml
pod/linux-pod-rl created
+ kubectl wait --for condition=Ready --timeout 30s pod/linux-pod-rl
pod/linux-pod-rl condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing cluster-access-with-kubernetes-context
[+] Setup complete
[+] Testing cluster-access-with-kubernetes-context
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
+ kubectl config set-context dev-context --namespace=dev
Context "dev-context" created.
+ kubectl config use-context dev-context
Switched to context "dev-context".
+ kubectl config set-context dev-context --namespace=dev --user=admin --cluster=kubernetes-the-alta3-way
Context "dev-context" modified.
+ kubectl config get-contexts
CURRENT   NAME                       CLUSTER                    AUTHINFO   NAMESPACE
*         dev-context                kubernetes-the-alta3-way   admin      dev
          kubernetes-the-alta3-way   kubernetes-the-alta3-way   admin      
+ kubectl config set-context kubernetes-the-alta3-way --namespace=default
Context "kubernetes-the-alta3-way" modified.
+ kubectl config use-context kubernetes-the-alta3-way
Switched to context "kubernetes-the-alta3-way".
[+] Test complete
[+] Cleaning cluster
[+] Preparing hostnames-fqdn-lab
[+] Setup complete
[+] Testing hostnames-fqdn-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing understanding-labels-and-selectors
[+] Setup complete
[+] Testing understanding-labels-and-selectors
+ kubectl apply -f ../yaml/nginx-pod.yaml
pod/nginx created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx
pod/nginx condition met
+ kubectl apply -f ../yaml/nginx-obj.yaml
deployment.apps/nginx-obj-create created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-obj-create
deployment.apps/nginx-obj-create condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling-challenge
[+] Setup complete
[+] Testing autoscaling-challenge
[+] Test complete
[+] Cleaning cluster
[+] Preparing expose-service-via-ingress
[+] Setup complete
[+] Testing expose-service-via-ingress
[+] Test complete
[+] Cleaning cluster
[+] Preparing revert-coredns-to-kubedns
[+] Setup complete
[+] Testing revert-coredns-to-kubedns
[+] Test complete
[+] Cleaning cluster
[+] Preparing multi-container-pod-design
[+] Setup complete
[+] Testing multi-container-pod-design
+ kubectl apply -f ../yaml/netgrabber.yaml
pod/netgrab created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/netgrabber.yaml
pod/netgrab condition met
+ kubectl exec netgrab -c busybox -- sh -c 'ping 8.8.8.8 -c 1'
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=59 time=6.864 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 6.864/6.864/6.864 ms
+ kubectl apply -f ../yaml/nginx-conf.yaml
configmap/nginx-conf created
+ kubectl apply -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-nginx-combo.yaml
pod/webby-nginx-combo condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing storage-internal-oot-csi
[+] Setup complete
[+] Testing storage-internal-oot-csi
[+] Test complete
[+] Cleaning cluster
[+] Preparing strategies
[+] Setup complete
[+] Testing strategies
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-consume-secrets
[+] Setup complete
[+] Testing create-and-consume-secrets
+ kubectl apply -f ../yaml/mysql-secret.yaml
secret/mysql-secret created
+ kubectl apply -f ../yaml/mysql-locked.yaml
pod/mysql-locked created
+ kubectl wait --for condition=Ready --timeout 30s pod/mysql-locked
pod/mysql-locked condition met
+ kubectl get pod mysql-locked
NAME           READY   STATUS    RESTARTS   AGE
mysql-locked   1/1     Running   0          13s
+ kubectl get secrets mysql-secret
NAME           TYPE                       DATA   AGE
mysql-secret   kubernetes.io/basic-auth   1      13s
[+] Test complete
[+] Cleaning cluster
[+] Preparing isolating-resources-with-kubernetes-namespaces
[+] Setup complete
[+] Testing isolating-resources-with-kubernetes-namespaces
+ kubectl apply -f ../yaml/test-ns.yaml
namespace/test created
+ kubectl apply -f ../yaml/dev-ns.yaml
namespace/dev created
+ kubectl apply -f ../yaml/prod-ns.yaml
namespace/prod created
+ kubectl apply -f ../yaml/test-rq.yaml --namespace=test
resourcequota/test-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota created
+ kubectl apply -f ../yaml/prod-rq.yaml --namespace=prod
resourcequota/prod-resourcequota unchanged
+ kubectl get namespaces dev prod test
NAME   STATUS   AGE
dev    Active   1s
prod   Active   1s
test   Active   2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing deploy-a-networkpolicy
[+] Setup complete
[+] Testing deploy-a-networkpolicy
[+] Test complete
[+] Cleaning cluster
[+] Preparing writing-a-deployment-manifest
[+] Setup complete
[+] Testing writing-a-deployment-manifest
[+] Test complete
[+] Cleaning cluster
[+] Preparing exposing-a-service
[+] Setup complete
[+] Testing exposing-a-service
[+] Test complete
[+] Cleaning cluster
[+] Preparing host_networking
[+] Setup complete
[+] Testing host_networking
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-configure-a-replicaset
[+] Setup complete
[+] Testing create-and-configure-a-replicaset
[+] Test complete
[+] Cleaning cluster
[+] Preparing listing-resources-with-kubectl-get
[+] Setup complete
[+] Testing listing-resources-with-kubectl-get
+ kubectl get services -A
NAMESPACE     NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       kubernetes   ClusterIP   172.16.3.1    <none>        443/TCP         8m47s
kube-system   kube-dns     ClusterIP   172.16.3.10   <none>        53/UDP,53/TCP   6m26s
+ kubectl get deployments.apps -A
NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   calico-kube-controllers   1/1     1            1           6m42s
kube-system   kube-dns                  1/1     1            1           6m26s
+ kubectl get secrets
No resources found in default namespace.
[+] Test complete
[+] Cleaning cluster
[+] Preparing kubectl-top
[+] Setup complete
[+] Testing kubectl-top
+ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
+ kubectl -n kube-system wait --for condition=Available --timeout 100s deployment.apps/metrics-server
deployment.apps/metrics-server condition met
+ kubectl -n kube-system wait --for condition=Available --timeout 30s apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io condition met
+ kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
node-1   183m         9%     1196Mi          31%       
node-2   340m         17%    1237Mi          32%       
+ kubectl top pods --all-namespaces
NAMESPACE     NAME                                       CPU(cores)   MEMORY(bytes)   
kube-system   calico-kube-controllers-5549cfb998-dtvdp   4m           14Mi            
kube-system   calico-node-8ww29                          31m          116Mi           
kube-system   calico-node-wn7bj                          36m          114Mi           
kube-system   kube-dns-c94b7f88d-hrhq5                   3m           18Mi            
kube-system   metrics-server-6db4d75b97-v79lf            125m         12Mi            
[+] Test complete
[+] Cleaning cluster
[+] Preparing rolling-updates-and-rollbacks
[+] Setup complete
[+] Testing rolling-updates-and-rollbacks
[+] Test complete
[+] Cleaning cluster
[+] Preparing advanced-logging
[+] Setup complete
[+] Testing advanced-logging
+ kubectl apply -f ../yaml/counter-pod.yaml
pod/counter created
+ kubectl wait --for condition=Ready --timeout 30s pod/counter
pod/counter condition met
+ kubectl apply -f ../yaml/two-pack.yaml
deployment.apps/two-pack created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/two-pack
deployment.apps/two-pack condition met
+ kubectl apply -f ../yaml/nginx-rsyslog-pod.yaml
pod/nginx-rsyslog-pod created
+ kubectl wait --for condition=Ready --timeout 30s pod/nginx-rsyslog-pod
pod/nginx-rsyslog-pod condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing examining-resources-with-kubectl-describe
[+] Setup complete
[+] Testing examining-resources-with-kubectl-describe
+ kubectl run --port=8888 --image=registry.gitlab.com/alta3/webby webweb
pod/webweb created
+ kubectl wait --for condition=Ready --timeout 60s pod/webweb
pod/webweb condition met
+ kubectl delete pod webweb --now
pod "webweb" deleted
+ kubectl apply -f ../yaml/webweb-deploy.yaml
deployment.apps/webweb created
+ kubectl wait --for condition=Available --timeout 60s deployment.apps/webweb
deployment.apps/webweb condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing patching
[+] Setup complete
[+] Testing patching
[+] Test complete
[+] Cleaning cluster
[+] Preparing install-coredns-lab
[+] Setup complete
[+] Testing install-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing admission-controller
[+] Setup complete
[+] Testing admission-controller
+ kubectl run no-lr --image=nginx:1.19.6
pod/no-lr created
+ kubectl wait --for condition=Ready --timeout 30s pod/no-lr
pod/no-lr condition met
+ kubectl apply -f ../yaml/lim-ran.yml
limitrange/mem-limit-range created
+ kubectl get limitrange
NAME              CREATED AT
mem-limit-range   2024-03-15T18:56:14Z
[+] Test complete
[+] Cleaning cluster
[+] Preparing livenessprobes-and-readinessprobes
[+] Setup complete
[+] Testing livenessprobes-and-readinessprobes
+ kubectl apply -f ../yaml/badpod.yaml
pod/badpod created
+ kubectl wait --for condition=Ready --timeout 30s pod/badpod
pod/badpod condition met
+ kubectl apply -f ../yaml/sise-lp.yaml
pod/sise-lp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-lp
pod/sise-lp condition met
+ kubectl apply -f ../yaml/sise-rp.yaml
pod/sise-rp created
+ kubectl wait --for condition=Ready --timeout 30s pod/sise-rp
pod/sise-rp condition met
+ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
badpod    1/1     Running   0          51s
sise-lp   1/1     Running   0          32s
sise-rp   1/1     Running   0          13s
[+] Test complete
[+] Cleaning cluster
[+] Preparing autoscaling
[+] Setup complete
[+] Testing autoscaling
[+] Test complete
[+] Cleaning cluster
[+] Preparing taints-and-tolerations
[+] Setup complete
[+] Testing taints-and-tolerations
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl delete -f ../yaml/tnt01.yaml
deployment.apps "nginx" deleted
+ kubectl taint nodes node-1 trying_taints=yessir:NoSchedule
node/node-1 tainted
+ kubectl apply -f ../yaml/tnt01.yaml
deployment.apps/nginx created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx
deployment.apps/nginx condition met
+ kubectl apply -f ../yaml/tnt02.yaml
deployment.apps/nginx-tolerated created
+ kubectl wait --for condition=Available --timeout 30s deployment.apps/nginx-tolerated
deployment.apps/nginx-tolerated condition met
+ kubectl get pods -o wide
NAME                              READY   STATUS        RESTARTS     AGE   IP               NODE     NOMINATED NODE   READINESS GATES
badpod                            1/1     Terminating   1 (5s ago)   66s   192.168.84.140   node-1   <none>           <none>
nginx-575b786c99-4t787            1/1     Running       0            8s    192.168.247.28   node-2   <none>           <none>
nginx-575b786c99-5f6bh            1/1     Running       0            8s    192.168.247.29   node-2   <none>           <none>
nginx-575b786c99-6tpfd            1/1     Running       0            8s    192.168.247.27   node-2   <none>           <none>
nginx-575b786c99-9b266            1/1     Running       0            8s    192.168.247.31   node-2   <none>           <none>
nginx-575b786c99-hnj2b            1/1     Running       0            8s    192.168.247.24   node-2   <none>           <none>
nginx-575b786c99-jpdp8            1/1     Running       0            8s    192.168.247.30   node-2   <none>           <none>
nginx-575b786c99-k8vdx            1/1     Running       0            8s    192.168.247.22   node-2   <none>           <none>
nginx-575b786c99-ms6p6            1/1     Running       0            8s    192.168.247.26   node-2   <none>           <none>
nginx-575b786c99-rhhw8            1/1     Running       0            8s    192.168.247.23   node-2   <none>           <none>
nginx-575b786c99-s46nr            1/1     Running       0            8s    192.168.247.25   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-2rbf2   1/1     Running       0            2s    192.168.84.149   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-2v562   1/1     Running       0            2s    192.168.247.35   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-72c68   1/1     Running       0            2s    192.168.84.146   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-7cjmn   1/1     Running       0            2s    192.168.84.147   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-fcrzc   1/1     Running       0            2s    192.168.84.151   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-fjw2s   1/1     Running       0            2s    192.168.247.33   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-src6q   1/1     Running       0            2s    192.168.84.150   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-v5jj7   1/1     Running       0            2s    192.168.84.148   node-1   <none>           <none>
nginx-tolerated-54c4f8dcd-wb8j6   1/1     Running       0            2s    192.168.247.32   node-2   <none>           <none>
nginx-tolerated-54c4f8dcd-zckq6   1/1     Running       0            2s    192.168.247.34   node-2   <none>           <none>
sise-lp                           1/1     Terminating   0            47s   192.168.247.15   node-2   <none>           <none>
sise-rp                           1/1     Terminating   0            28s   192.168.247.16   node-2   <none>           <none>
[+] Test complete
[+] Cleaning cluster
[+] Preparing create-and-configure-basic-pods
[+] Setup complete
[+] Testing create-and-configure-basic-pods
+ kubectl apply -f ../yaml/simpleservice.yaml
pod/simpleservice created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/simpleservice.yaml
pod/simpleservice condition met
+ kubectl apply -f ../yaml/webby-pod01.yaml
pod/webservice01 created
+ kubectl wait --for condition=Ready --timeout 60s -f ../yaml/webby-pod01.yaml
pod/webservice01 condition met
[+] Test complete
[+] Cleaning cluster
[+] Preparing horizontal-scaling-with-kubectl-scale
[+] Setup complete
[+] Testing horizontal-scaling-with-kubectl-scale
deployment.apps/webservice created
deployment.apps/webservice condition met
deployment.apps/webservice scaled
deployment.apps/webservice condition met
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
webservice   1/1     1            1           3s
[+] Test complete
[+] Cleaning cluster
[+] Preparing init-containers
[+] Setup complete
[+] Testing init-containers
+ kubectl apply -f ../yaml/init-cont-pod.yaml
pod/myapp-pod created
+ kubectl apply -f ../yaml/myservice.yaml
service/myservice created
+ kubectl apply -f ../yaml/mydb.yaml
service/mydb created
+ kubectl wait --for condition=Ready --timeout 30s -f ../yaml/init-cont-pod.yaml
pod/myapp-pod condition met
+ kubectl get pods
NAME            READY   STATUS        RESTARTS   AGE
myapp-pod       1/1     Running       0          21s
simpleservice   1/1     Terminating   0          33s
[+] Test complete
[+] Cleaning cluster
[+] Preparing fluentd
[+] Setup complete
[+] Testing fluentd
+ kubectl apply -f ../yaml/fluentd-conf.yaml
configmap/fluentd-config created
+ kubectl apply -f ../yaml/fluentd-pod.yaml
pod/logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/logger
pod/logger condition met
+ kubectl apply -f -
++ hostname -f
+ BCHD_IP=bchd.194f4a4f-7b7c-42bc-947e-12e3ded1de4d
+ j2 ../yaml/http_fluentd_config.yaml
configmap/fluentd-config configured
+ kubectl apply -f ../yaml/http_fluentd.yaml
pod/http-logger created
+ kubectl wait --for condition=Ready --timeout 30s pod/http-logger
pod/http-logger condition met
+ kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
http-logger   2/2     Running       0          5s
logger        2/2     Running       0          21s
myapp-pod     1/1     Terminating   0          45s
[+] Test complete
[+] Cleaning cluster
[+] Preparing persistent-configuration-with-configmaps
[+] Setup complete
[+] Testing persistent-configuration-with-configmaps
configmap/nginx-base-conf created
configmap/index-html-zork created
configmap/nineteen-eighty-four created
pod/nginx-configured created
pod/nginx-configured condition met
NAME               READY   STATUS    RESTARTS   AGE
nginx-configured   1/1     Running   0          2s
[+] Test complete
[+] Cleaning cluster
[+] Preparing configure-coredns-lab
[+] Setup complete
[+] Testing configure-coredns-lab
[+] Test complete
[+] Cleaning cluster
[+] Preparing RBAC-authentication-authorization-lab
[+] Setup complete
[+] Testing RBAC-authentication-authorization-lab
+ kubectl apply -f ../yaml/t3-support.yaml
role.rbac.authorization.k8s.io/t3-support created
+ kubectl apply -f ../yaml/alice-csr.yaml
certificatesigningrequest.certificates.k8s.io/alice created
+ kubectl certificate approve alice
certificatesigningrequest.certificates.k8s.io/alice approved
+ kubectl apply -f ../yaml/t3-support-binding.yaml
rolebinding.rbac.authorization.k8s.io/t3-support created
[+] Test complete