kubermatic / kubeone

Kubermatic KubeOne automates cluster operations on all your cloud, on-prem, edge, and IoT environments.
https://kubeone.io
Apache License 2.0

[aws] Error in task "Building Kubernetes clientset..." #2904

Closed: ghost closed this issue 4 months ago

ghost commented 1 year ago

What happened?

While trying to set up my first cluster, an error occurred:

INFO[17:36:51 CEST] Running kubeadm preflight checks...          
INFO[17:36:51 CEST]     preflight...                                 node=172.31.179.74
INFO[17:36:51 CEST]     preflight...                                 node=172.31.177.218
INFO[17:36:51 CEST]     preflight...                                 node=172.31.178.187
INFO[17:37:09 CEST] Pre-pull images                               node=172.31.179.74
INFO[17:37:09 CEST] Pre-pull images                               node=172.31.178.187
INFO[17:37:09 CEST] Pre-pull images                               node=172.31.177.218
INFO[17:37:11 CEST] Configuring certs and etcd on control plane node... 
INFO[17:37:11 CEST] Ensuring Certificates...                      node=172.31.177.218
INFO[17:37:13 CEST] Downloading PKI...                           
INFO[17:37:13 CEST] Creating local backup...                      node=172.31.177.218
INFO[17:37:13 CEST] Uploading PKI...                             
INFO[17:37:15 CEST] Configuring certs and etcd on consecutive control plane node... 
INFO[17:37:15 CEST] Ensuring Certificates...                      node=172.31.179.74
INFO[17:37:15 CEST] Ensuring Certificates...                      node=172.31.178.187
INFO[17:37:17 CEST] Initializing Kubernetes on leader...         
INFO[17:37:17 CEST] Running kubeadm...                            node=172.31.177.218
INFO[17:38:08 CEST] Building Kubernetes clientset...             
WARN[17:38:08 CEST] Task failed, error was: kubernetes: building dynamic kubernetes client
Get "https://kubeone-cluster-api-lb-153808776.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s": proxyconnect tcp: ssh: tunneling
connection to: 127.0.0.1:8118
ssh: rejected: connect failed (Connection refused) 
WARN[17:38:18 CEST] Retrying task...                             
INFO[17:38:18 CEST] Building Kubernetes clientset...             
WARN[17:38:19 CEST] Task failed, error was: kubernetes: building dynamic kubernetes client
Get "https://kubeone-cluster-api-lb-153808776.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s": proxyconnect tcp: ssh: tunneling
connection to: 127.0.0.1:8118
ssh: rejected: connect failed (Connection refused)
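
For reference, the failing step is KubeOne building its Kubernetes client against the API endpoint, which it reaches through a proxy tunneled over SSH; the "connect failed (Connection refused)" for 127.0.0.1:8118 suggests nothing was listening on the remote end of that tunnel. A quick way to probe the same path manually could look like this (hostname taken from the log above; the commands are only illustrative):

```console
# Is the API load balancer reachable on 6443 from the machine running kubeone?
nc -vz kubeone-cluster-api-lb-153808776.eu-central-1.elb.amazonaws.com 6443

# And is kube-apiserver itself answering locally on the leader node (over SSH)?
curl -k https://127.0.0.1:6443/healthz
```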

Expected behavior

The cluster gets initialized properly.

How to reproduce the issue?

  1. Follow the documentation at https://docs.kubermatic.com/kubeone/v1.7/tutorials/creating-clusters/ (a condensed version of those steps is sketched below).
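
For context, that tutorial boils down to provisioning the infrastructure with Terraform and then pointing kubeone at the manifest and the Terraform state, roughly (a condensed sketch; the -t . form matches the commands used later in this thread):

```console
terraform init
terraform apply
kubeone apply -m kubeone.yaml -t .
```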

What KubeOne version are you using?

```console
$ kubeone version
{
  "kubeone": {
    "major": "1",
    "minor": "7",
    "gitVersion": "1.7.0",
    "gitCommit": "1195366fd0cf11f314d194a3b29b6a782afde9a8",
    "gitTreeState": "",
    "buildDate": "2023-09-08T14:02:33Z",
    "goVersion": "go1.20.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "machine_controller": {
    "major": "1",
    "minor": "57",
    "gitVersion": "v1.57.3",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "",
    "compiler": "",
    "platform": "linux/amd64"
  }
}
```

Provide your KubeOneCluster manifest here (if applicable)

```yaml
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: "1.27.5"
cloudProvider:
  aws: {}
  external: true
```
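
With the Terraform integration, the API endpoint (the ELB hostname) is injected from the Terraform output, which is why it does not appear in the manifest. For illustration only, the same manifest with the endpoint pinned explicitly would look roughly like this (the apiEndpoint values below are copied from the error log, not from a real manifest):

```yaml
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: "1.27.5"
cloudProvider:
  aws: {}
  external: true
apiEndpoint:
  host: kubeone-cluster-api-lb-153808776.eu-central-1.elb.amazonaws.com
  port: 6443
```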

What cloud provider are you running on?

AWS

What operating system are you running in your cluster?

Amazon Linux

Additional information

In the AWS console I can see that the load balancer it tries to access is taken offline: it is unhealthy because 2 of the nodes are not in service. [screenshot]
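
Since the DNS name has the classic ELB format (name-id.region.elb.amazonaws.com), the same health information should also be visible from the CLI, along these lines (the load balancer name is inferred from the DNS name, so treat it as a placeholder):

```console
aws elb describe-instance-health \
  --region eu-central-1 \
  --load-balancer-name kubeone-cluster-api-lb
```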

xmudrii commented 1 year ago

If you don't have anything in that cluster yet, try resetting it (this effectively reverts everything that KubeOne did, so you will lose any data in the cluster), and then run kubeone apply with the verbose flag. That should provide some more details about what's going on.

For example:

kubeone reset -t . --destroy-workers=false
kubeone apply -t . -v

(the -v flag enables verbose output)

ghost commented 1 year ago

OK, I've just reverted everything with terraform destroy to avoid anything from previous runs getting in the way.

Also checked that everything is indeed cleaned up (except for default VPC stuff).
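
Roughly the kind of check I mean (the tag key is a placeholder for whatever the Terraform config actually applies to the instances):

```console
# Terraform should no longer track any resources
terraform state list

# and no tagged instances should be left behind
aws ec2 describe-instances --region eu-central-1 \
  --filters "Name=tag-key,Values=kubernetes.io/cluster/kubeone-cluster" \
  --query "Reservations[].Instances[].InstanceId"
```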

And here is the verbose log:

```
INFO[18:16:46 CEST] Determine hostname...
DEBU[18:16:47 CEST] Hostname is already set to "ip-172-31-147-179.eu-central-1.compute.internal"  node=172.31.147.179
DEBU[18:16:48 CEST] Hostname is already set to "ip-172-31-145-202.eu-central-1.compute.internal"  node=172.31.145.202
DEBU[18:16:48 CEST] Hostname is already set to "ip-172-31-146-232.eu-central-1.compute.internal"  node=172.31.146.232
INFO[18:16:48 CEST] Determine operating system...
DEBU[18:16:48 CEST] Operating system is already set to "amzn"  node=172.31.147.179
DEBU[18:16:48 CEST] Operating system is already set to "amzn"  node=172.31.145.202
DEBU[18:16:48 CEST] Operating system is already set to "amzn"  node=172.31.146.232
INFO[18:16:48 CEST] Running host probes...
Host: "ip-172-31-145-202.eu-central-1.compute.internal"
    Host initialized: no
    containerd healthy: no (unknown)
    Kubelet healthy: no (unknown)
    containerd is installed: no
    containerd is running: no
    containerd is active: no
    containerd is restarting: no
    kubelet is installed: no
    kubelet is running: no
    kubelet is active: no
    kubelet is restarting: no
Host: "ip-172-31-146-232.eu-central-1.compute.internal"
    Host initialized: no
    containerd healthy: no (unknown)
    Kubelet healthy: no (unknown)
    containerd is installed: no
    containerd is running: no
    containerd is active: no
    containerd is restarting: no
    kubelet is installed: no
    kubelet is running: no
    kubelet is active: no
    kubelet is restarting: no
Host: "ip-172-31-147-179.eu-central-1.compute.internal"
    Host initialized: no
    containerd healthy: no (unknown)
    Kubelet healthy: no (unknown)
    containerd is installed: no
    containerd is running: no
    containerd is active: no
    containerd is restarting: no
    kubelet is installed: no
    kubelet is running: no
    kubelet is active: no
    kubelet is restarting: no
The following actions will be taken:
Run with --verbose flag for more information.
    + initialize control plane node "ip-172-31-145-202.eu-central-1.compute.internal" (172.31.145.202) using 1.27.5
    + join control plane node "ip-172-31-146-232.eu-central-1.compute.internal" (172.31.146.232) using 1.27.5
    + join control plane node "ip-172-31-147-179.eu-central-1.compute.internal" (172.31.147.179) using 1.27.5
    + ensure machinedeployment "kubeone-cluster-eu-central-1b" with 1 replica(s) exists
    + ensure machinedeployment "kubeone-cluster-eu-central-1c" with 1 replica(s) exists
    + ensure machinedeployment "kubeone-cluster-eu-central-1a" with 1 replica(s) exists
Do you want to proceed (yes/no): yes
INFO[18:16:51 CEST] Determine hostname...
DEBU[18:16:51 CEST] Hostname is already set to "ip-172-31-147-179.eu-central-1.compute.internal"  node=172.31.147.179
DEBU[18:16:51 CEST] Hostname is already set to "ip-172-31-145-202.eu-central-1.compute.internal"  node=172.31.145.202
DEBU[18:16:51 CEST] Hostname is already set to "ip-172-31-146-232.eu-central-1.compute.internal"  node=172.31.146.232
INFO[18:16:51 CEST] Determine operating system...
DEBU[18:16:51 CEST] Operating system is already set to "amzn"  node=172.31.147.179
DEBU[18:16:51 CEST] Operating system is already set to "amzn"  node=172.31.146.232
DEBU[18:16:51 CEST] Operating system is already set to "amzn"  node=172.31.145.202
INFO[18:16:51 CEST] Running host probes...
INFO[18:16:52 CEST] Installing prerequisites...
INFO[18:16:52 CEST] Creating environment file...  node=172.31.147.179 os=amzn
INFO[18:16:52 CEST] Creating environment file...  node=172.31.146.232 os=amzn
INFO[18:16:52 CEST] Creating environment file...
node=172.31.145.202 os=amzn [172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.147.179] + sudo mkdir -p /etc/kubeone [172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.145.202] + sudo mkdir -p /etc/kubeone [172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.146.232] + sudo mkdir -p /etc/kubeone [172.31.147.179] + cat [172.31.147.179] + sudo tee /etc/kubeone/proxy-env [172.31.147.179] + envtmp=/tmp/k1-etc-environment [172.31.147.179] + sudo rm -f /tmp/k1-etc-environment [172.31.147.179] [172.31.145.202] + cat [172.31.147.179] + grep -v '#kubeone$' /etc/environment [172.31.147.179] + true [172.31.147.179] + set +o pipefail [172.31.147.179] + grep = /etc/kubeone/proxy-env [172.31.147.179] + sed 's/$/#kubeone/' [172.31.147.179] + sudo tee /etc/environment [172.31.145.202] + sudo tee /etc/kubeone/proxy-env INFO[18:16:52 CEST] Configuring proxy... node=172.31.147.179 os=amzn INFO[18:16:52 CEST] Installing kubeadm... node=172.31.147.179 os=amzn [172.31.145.202] + envtmp=/tmp/k1-etc-environment [172.31.145.202] + sudo rm -f /tmp/k1-etc-environment [172.31.145.202] [172.31.145.202] + grep -v '#kubeone$' /etc/environment [172.31.146.232] + cat [172.31.145.202] + true [172.31.145.202] + set +o pipefail [172.31.145.202] + grep = /etc/kubeone/proxy-env [172.31.145.202] + sed 's/$/#kubeone/' [172.31.145.202] + sudo tee /etc/environment [172.31.146.232] + sudo tee /etc/kubeone/proxy-env [172.31.146.232] + envtmp=/tmp/k1-etc-environment [172.31.146.232] + sudo rm -f /tmp/k1-etc-environment [172.31.146.232] [172.31.146.232] + grep -v '#kubeone$' /etc/environment [172.31.146.232] + true [172.31.146.232] + set +o pipefail [172.31.146.232] + grep = /etc/kubeone/proxy-env [172.31.146.232] + sed 's/$/#kubeone/' [172.31.146.232] + sudo tee /etc/environment INFO[18:16:52 CEST] Configuring proxy... node=172.31.145.202 os=amzn INFO[18:16:52 CEST] Installing kubeadm... node=172.31.145.202 os=amzn [172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.147.179] + sudo swapoff -a INFO[18:16:52 CEST] Configuring proxy... node=172.31.146.232 os=amzn INFO[18:16:52 CEST] Installing kubeadm... 
node=172.31.146.232 os=amzn [172.31.147.179] + sudo sed -i '/.*swap.*/d' /etc/fstab [172.31.147.179] + sudo setenforce 0 [172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.145.202] + sudo swapoff -a [172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin [172.31.146.232] + sudo swapoff -a [172.31.145.202] + sudo sed -i '/.*swap.*/d' /etc/fstab [172.31.147.179] setenforce: SELinux is disabled [172.31.146.232] + sudo sed -i '/.*swap.*/d' /etc/fstab [172.31.145.202] + sudo setenforce 0 [172.31.147.179] + true [172.31.147.179] + '[' -f /etc/selinux/config ']' [172.31.147.179] + sudo sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config [172.31.146.232] + sudo setenforce 0 [172.31.147.179] + sudo systemctl disable --now firewalld [172.31.145.202] setenforce: SELinux is disabled [172.31.146.232] setenforce: SELinux is disabled [172.31.146.232] + true [172.31.146.232] + '[' -f /etc/selinux/config ']' [172.31.146.232] + sudo sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config [172.31.147.179] Failed to execute operation: No such file or directory [172.31.147.179] + true [172.31.147.179] + source /etc/kubeone/proxy-env [172.31.147.179] + cat [172.31.147.179] + sudo tee /etc/modules-load.d/containerd.conf [172.31.145.202] + true [172.31.145.202] + '[' -f /etc/selinux/config ']' [172.31.145.202] + sudo sed -i s/SELINUX=enforcing/SELINUX=permissive/g /etc/selinux/config [172.31.146.232] + sudo systemctl disable --now firewalld [172.31.147.179] + sudo modprobe overlay [172.31.147.179] overlay [172.31.147.179] br_netfilter [172.31.147.179] ip_tables [172.31.145.202] + sudo systemctl disable --now firewalld [172.31.146.232] Failed to execute operation: No such file or directory [172.31.146.232] + true [172.31.146.232] + source /etc/kubeone/proxy-env [172.31.146.232] + cat [172.31.146.232] + sudo tee /etc/modules-load.d/containerd.conf [172.31.145.202] Failed to execute operation: No such file or directory [172.31.145.202] + true [172.31.145.202] + source /etc/kubeone/proxy-env [172.31.145.202] + sudo tee /etc/modules-load.d/containerd.conf [172.31.145.202] + cat [172.31.147.179] + sudo modprobe br_netfilter [172.31.146.232] + sudo modprobe overlay [172.31.146.232] overlay [172.31.146.232] br_netfilter [172.31.146.232] ip_tables [172.31.145.202] overlay [172.31.145.202] br_netfilter [172.31.145.202] ip_tables [172.31.145.202] + sudo modprobe overlay [172.31.145.202] + sudo modprobe br_netfilter [172.31.146.232] + sudo modprobe br_netfilter [172.31.147.179] + sudo modprobe ip_tables [172.31.147.179] + modinfo nf_conntrack_ipv4 [172.31.147.179] + sudo modprobe nf_conntrack_ipv4 [172.31.147.179] + sudo mkdir -p /etc/sysctl.d [172.31.146.232] + sudo modprobe ip_tables [172.31.147.179] + cat [172.31.146.232] + modinfo nf_conntrack_ipv4 [172.31.147.179] fs.inotify.max_user_watches = 1048576 [172.31.147.179] kernel.panic = 10 [172.31.147.179] kernel.panic_on_oops = 1 [172.31.147.179] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.147.179] net.bridge.bridge-nf-call-iptables = 1 [172.31.147.179] net.ipv4.ip_forward = 1 [172.31.147.179] net.netfilter.nf_conntrack_max = 1000000 [172.31.147.179] vm.overcommit_memory = 1 [172.31.147.179] + sudo tee /etc/sysctl.d/k8s.conf [172.31.147.179] + sudo sysctl --system [172.31.145.202] + sudo modprobe ip_tables 
[172.31.147.179] + sudo mkdir -p /etc/systemd/journald.conf.d [172.31.147.179] * Applying /etc/sysctl.d/00-defaults.conf ... [172.31.147.179] kernel.printk = 8 4 1 7 [172.31.147.179] kernel.panic = 30 [172.31.147.179] net.ipv4.neigh.default.gc_thresh1 = 0 [172.31.147.179] net.ipv6.neigh.default.gc_thresh1 = 0 [172.31.147.179] net.ipv4.neigh.default.gc_thresh2 = 15360 [172.31.147.179] net.ipv6.neigh.default.gc_thresh2 = 15360 [172.31.147.179] net.ipv4.neigh.default.gc_thresh3 = 16384 [172.31.147.179] net.ipv6.neigh.default.gc_thresh3 = 16384 [172.31.147.179] net.ipv4.tcp_wmem = 4096 20480 4194304 [172.31.147.179] net.ipv4.ip_default_ttl = 255 [172.31.147.179] * Applying /usr/lib/sysctl.d/00-system.conf ... [172.31.147.179] net.bridge.bridge-nf-call-ip6tables = 0 [172.31.147.179] net.bridge.bridge-nf-call-iptables = 0 [172.31.147.179] net.bridge.bridge-nf-call-arptables = 0 [172.31.147.179] * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ... [172.31.147.179] kernel.yama.ptrace_scope = 0 [172.31.147.179] * Applying /usr/lib/sysctl.d/50-default.conf ... [172.31.147.179] kernel.sysrq = 16 [172.31.147.179] kernel.core_uses_pid = 1 [172.31.147.179] kernel.kptr_restrict = 1 [172.31.147.179] net.ipv4.conf.default.rp_filter = 1 [172.31.147.179] net.ipv4.conf.all.rp_filter = 1 [172.31.147.179] net.ipv4.conf.default.accept_source_route = 0 [172.31.147.179] net.ipv4.conf.all.accept_source_route = 0 [172.31.147.179] net.ipv4.conf.default.promote_secondaries = 1 [172.31.147.179] net.ipv4.conf.all.promote_secondaries = 1 [172.31.147.179] fs.protected_hardlinks = 1 [172.31.147.179] fs.protected_symlinks = 1 [172.31.147.179] * Applying /etc/sysctl.d/99-amazon.conf ... [172.31.147.179] kernel.sched_autogroup_enabled = 0 [172.31.147.179] * Applying /usr/lib/sysctl.d/99-ipv6.conf ... [172.31.147.179] net.ipv6.conf.all.accept_dad = 0 [172.31.147.179] net.ipv6.conf.default.accept_dad = 0 [172.31.147.179] net.ipv6.conf.eth0.accept_dad = 0 [172.31.147.179] * Applying /etc/sysctl.d/99-sysctl.conf ... [172.31.147.179] * Applying /etc/sysctl.d/k8s.conf ... [172.31.147.179] fs.inotify.max_user_watches = 1048576 [172.31.147.179] kernel.panic = 10 [172.31.147.179] kernel.panic_on_oops = 1 [172.31.147.179] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.147.179] net.bridge.bridge-nf-call-iptables = 1 [172.31.147.179] net.ipv4.ip_forward = 1 [172.31.147.179] net.netfilter.nf_conntrack_max = 1000000 [172.31.147.179] vm.overcommit_memory = 1 [172.31.147.179] * Applying /etc/sysctl.conf ... 
[172.31.145.202] + modinfo nf_conntrack_ipv4 [172.31.147.179] + cat [172.31.146.232] + sudo modprobe nf_conntrack_ipv4 [172.31.147.179] + sudo tee /etc/systemd/journald.conf.d/max_disk_use.conf [172.31.147.179] + sudo systemctl force-reload systemd-journald [172.31.147.179] [Journal] [172.31.147.179] SystemMaxUse=5G [172.31.145.202] + sudo modprobe nf_conntrack_ipv4 [172.31.147.179] + yum_proxy= [172.31.147.179] + grep -v '#kubeone' /etc/yum.conf [172.31.147.179] + echo -n '' [172.31.147.179] + sudo mv /tmp/yum.conf /etc/yum.conf [172.31.146.232] + sudo mkdir -p /etc/sysctl.d [172.31.145.202] + sudo mkdir -p /etc/sysctl.d [172.31.147.179] + repo_migration_needed=false [172.31.147.179] + sudo grep -q packages.cloud.google.com /etc/yum.repos.d/kubernetes.repo [172.31.146.232] + cat [172.31.146.232] + sudo tee /etc/sysctl.d/k8s.conf [172.31.145.202] + sudo tee /etc/sysctl.d/k8s.conf [172.31.145.202] + cat [172.31.147.179] grep: /etc/yum.repos.d/kubernetes.repo: No such file or directory [172.31.147.179] + cat [172.31.147.179] + sudo tee /etc/yum.repos.d/kubernetes.repo [172.31.146.232] + sudo sysctl --system [172.31.146.232] fs.inotify.max_user_watches = 1048576 [172.31.146.232] kernel.panic = 10 [172.31.146.232] kernel.panic_on_oops = 1 [172.31.146.232] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.146.232] net.bridge.bridge-nf-call-iptables = 1 [172.31.146.232] net.ipv4.ip_forward = 1 [172.31.146.232] net.netfilter.nf_conntrack_max = 1000000 [172.31.146.232] vm.overcommit_memory = 1 [172.31.145.202] + sudo sysctl --system [172.31.145.202] fs.inotify.max_user_watches = 1048576 [172.31.145.202] kernel.panic = 10 [172.31.145.202] kernel.panic_on_oops = 1 [172.31.145.202] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.145.202] net.bridge.bridge-nf-call-iptables = 1 [172.31.145.202] net.ipv4.ip_forward = 1 [172.31.145.202] net.netfilter.nf_conntrack_max = 1000000 [172.31.145.202] vm.overcommit_memory = 1 [172.31.147.179] + [[ false == \t\r\u\e ]] [172.31.147.179] + sudo yum install -y yum-plugin-versionlock device-mapper-persistent-data lvm2 conntrack-tools ebtables socat iproute-tc rsync [172.31.147.179] [kubernetes] [172.31.147.179] name=Kubernetes [172.31.147.179] baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/ [172.31.147.179] enabled=1 [172.31.147.179] gpgcheck=1 [172.31.147.179] gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key [172.31.146.232] + sudo mkdir -p /etc/systemd/journald.conf.d [172.31.146.232] * Applying /etc/sysctl.d/00-defaults.conf ... [172.31.146.232] kernel.printk = 8 4 1 7 [172.31.146.232] kernel.panic = 30 [172.31.146.232] net.ipv4.neigh.default.gc_thresh1 = 0 [172.31.146.232] net.ipv6.neigh.default.gc_thresh1 = 0 [172.31.146.232] net.ipv4.neigh.default.gc_thresh2 = 15360 [172.31.146.232] net.ipv6.neigh.default.gc_thresh2 = 15360 [172.31.146.232] net.ipv4.neigh.default.gc_thresh3 = 16384 [172.31.146.232] net.ipv6.neigh.default.gc_thresh3 = 16384 [172.31.146.232] net.ipv4.tcp_wmem = 4096 20480 4194304 [172.31.146.232] net.ipv4.ip_default_ttl = 255 [172.31.146.232] * Applying /usr/lib/sysctl.d/00-system.conf ... [172.31.146.232] net.bridge.bridge-nf-call-ip6tables = 0 [172.31.146.232] net.bridge.bridge-nf-call-iptables = 0 [172.31.146.232] net.bridge.bridge-nf-call-arptables = 0 [172.31.146.232] * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ... [172.31.146.232] kernel.yama.ptrace_scope = 0 [172.31.146.232] * Applying /usr/lib/sysctl.d/50-default.conf ... 
[172.31.146.232] kernel.sysrq = 16 [172.31.146.232] kernel.core_uses_pid = 1 [172.31.146.232] kernel.kptr_restrict = 1 [172.31.146.232] net.ipv4.conf.default.rp_filter = 1 [172.31.146.232] net.ipv4.conf.all.rp_filter = 1 [172.31.146.232] net.ipv4.conf.default.accept_source_route = 0 [172.31.146.232] net.ipv4.conf.all.accept_source_route = 0 [172.31.146.232] net.ipv4.conf.default.promote_secondaries = 1 [172.31.146.232] net.ipv4.conf.all.promote_secondaries = 1 [172.31.146.232] fs.protected_hardlinks = 1 [172.31.146.232] fs.protected_symlinks = 1 [172.31.146.232] * Applying /etc/sysctl.d/99-amazon.conf ... [172.31.146.232] kernel.sched_autogroup_enabled = 0 [172.31.146.232] * Applying /usr/lib/sysctl.d/99-ipv6.conf ... [172.31.146.232] net.ipv6.conf.all.accept_dad = 0 [172.31.146.232] net.ipv6.conf.default.accept_dad = 0 [172.31.146.232] net.ipv6.conf.eth0.accept_dad = 0 [172.31.146.232] * Applying /etc/sysctl.d/99-sysctl.conf ... [172.31.146.232] * Applying /etc/sysctl.d/k8s.conf ... [172.31.146.232] fs.inotify.max_user_watches = 1048576 [172.31.146.232] kernel.panic = 10 [172.31.146.232] kernel.panic_on_oops = 1 [172.31.146.232] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.146.232] net.bridge.bridge-nf-call-iptables = 1 [172.31.146.232] net.ipv4.ip_forward = 1 [172.31.146.232] net.netfilter.nf_conntrack_max = 1000000 [172.31.146.232] vm.overcommit_memory = 1 [172.31.146.232] * Applying /etc/sysctl.conf ... [172.31.145.202] + sudo mkdir -p /etc/systemd/journald.conf.d [172.31.145.202] * Applying /etc/sysctl.d/00-defaults.conf ... [172.31.145.202] kernel.printk = 8 4 1 7 [172.31.145.202] kernel.panic = 30 [172.31.145.202] net.ipv4.neigh.default.gc_thresh1 = 0 [172.31.145.202] net.ipv6.neigh.default.gc_thresh1 = 0 [172.31.145.202] net.ipv4.neigh.default.gc_thresh2 = 15360 [172.31.145.202] net.ipv6.neigh.default.gc_thresh2 = 15360 [172.31.145.202] net.ipv4.neigh.default.gc_thresh3 = 16384 [172.31.145.202] net.ipv6.neigh.default.gc_thresh3 = 16384 [172.31.145.202] net.ipv4.tcp_wmem = 4096 20480 4194304 [172.31.145.202] net.ipv4.ip_default_ttl = 255 [172.31.145.202] * Applying /usr/lib/sysctl.d/00-system.conf ... [172.31.145.202] net.bridge.bridge-nf-call-ip6tables = 0 [172.31.145.202] net.bridge.bridge-nf-call-iptables = 0 [172.31.145.202] net.bridge.bridge-nf-call-arptables = 0 [172.31.145.202] * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ... [172.31.145.202] kernel.yama.ptrace_scope = 0 [172.31.145.202] * Applying /usr/lib/sysctl.d/50-default.conf ... [172.31.145.202] kernel.sysrq = 16 [172.31.145.202] kernel.core_uses_pid = 1 [172.31.145.202] kernel.kptr_restrict = 1 [172.31.145.202] net.ipv4.conf.default.rp_filter = 1 [172.31.145.202] net.ipv4.conf.all.rp_filter = 1 [172.31.145.202] net.ipv4.conf.default.accept_source_route = 0 [172.31.145.202] net.ipv4.conf.all.accept_source_route = 0 [172.31.145.202] net.ipv4.conf.default.promote_secondaries = 1 [172.31.145.202] net.ipv4.conf.all.promote_secondaries = 1 [172.31.145.202] fs.protected_hardlinks = 1 [172.31.145.202] fs.protected_symlinks = 1 [172.31.145.202] * Applying /etc/sysctl.d/99-amazon.conf ... [172.31.145.202] kernel.sched_autogroup_enabled = 0 [172.31.145.202] * Applying /usr/lib/sysctl.d/99-ipv6.conf ... [172.31.145.202] net.ipv6.conf.all.accept_dad = 0 [172.31.145.202] net.ipv6.conf.default.accept_dad = 0 [172.31.145.202] net.ipv6.conf.eth0.accept_dad = 0 [172.31.145.202] * Applying /etc/sysctl.d/99-sysctl.conf ... [172.31.145.202] * Applying /etc/sysctl.d/k8s.conf ... 
[172.31.145.202] fs.inotify.max_user_watches = 1048576 [172.31.145.202] kernel.panic = 10 [172.31.145.202] kernel.panic_on_oops = 1 [172.31.145.202] net.bridge.bridge-nf-call-ip6tables = 1 [172.31.145.202] net.bridge.bridge-nf-call-iptables = 1 [172.31.145.202] net.ipv4.ip_forward = 1 [172.31.145.202] net.netfilter.nf_conntrack_max = 1000000 [172.31.145.202] vm.overcommit_memory = 1 [172.31.145.202] * Applying /etc/sysctl.conf ... [172.31.146.232] + sudo tee /etc/systemd/journald.conf.d/max_disk_use.conf [172.31.146.232] + cat [172.31.146.232] [Journal] [172.31.146.232] SystemMaxUse=5G [172.31.145.202] + cat [172.31.146.232] + sudo systemctl force-reload systemd-journald [172.31.145.202] [Journal] [172.31.145.202] SystemMaxUse=5G [172.31.145.202] + sudo tee /etc/systemd/journald.conf.d/max_disk_use.conf [172.31.145.202] + sudo systemctl force-reload systemd-journald [172.31.146.232] + yum_proxy= [172.31.146.232] + grep -v '#kubeone' /etc/yum.conf [172.31.146.232] + echo -n '' [172.31.146.232] + sudo mv /tmp/yum.conf /etc/yum.conf [172.31.146.232] + repo_migration_needed=false [172.31.146.232] + sudo grep -q packages.cloud.google.com /etc/yum.repos.d/kubernetes.repo [172.31.146.232] grep: /etc/yum.repos.d/kubernetes.repo: No such file or directory [172.31.146.232] + cat [172.31.146.232] + sudo tee /etc/yum.repos.d/kubernetes.repo [172.31.146.232] [kubernetes] [172.31.146.232] name=Kubernetes [172.31.146.232] baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/ [172.31.146.232] enabled=1 [172.31.146.232] gpgcheck=1 [172.31.146.232] gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key [172.31.146.232] + [[ false == \t\r\u\e ]] [172.31.146.232] + sudo yum install -y yum-plugin-versionlock device-mapper-persistent-data lvm2 conntrack-tools ebtables socat iproute-tc rsync [172.31.145.202] + yum_proxy= [172.31.145.202] + grep -v '#kubeone' /etc/yum.conf [172.31.145.202] + echo -n '' [172.31.145.202] + sudo mv /tmp/yum.conf /etc/yum.conf [172.31.145.202] + repo_migration_needed=false [172.31.145.202] + sudo grep -q packages.cloud.google.com /etc/yum.repos.d/kubernetes.repo [172.31.145.202] grep: /etc/yum.repos.d/kubernetes.repo: No such file or directory [172.31.145.202] + cat [172.31.145.202] + sudo tee /etc/yum.repos.d/kubernetes.repo [172.31.145.202] [kubernetes] [172.31.145.202] name=Kubernetes [172.31.145.202] baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/ [172.31.145.202] enabled=1 [172.31.145.202] gpgcheck=1 [172.31.145.202] gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key [172.31.145.202] + [[ false == \t\r\u\e ]] [172.31.145.202] + sudo yum install -y yum-plugin-versionlock device-mapper-persistent-data lvm2 conntrack-tools ebtables socat iproute-tc rsync [172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd [172.31.147.179] Existing lock /var/run/yum.pid: another copy is running as pid 2387. [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:20 ago [172.31.147.179] State : Uninterruptible, pid: 2387 [172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd [172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd [172.31.146.232] Existing lock /var/run/yum.pid: another copy is running as pid 2386. 
[172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... [172.31.146.232] The other application is: yum [172.31.146.232] Memory : 141 M RSS (451 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:16:35 2023 - 00:18 ago [172.31.146.232] State : Sleeping, pid: 2386 [172.31.145.202] Existing lock /var/run/yum.pid: another copy is running as pid 2383. [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 129 M RSS (345 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:17 ago [172.31.145.202] State : Sleeping, pid: 2383 [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:22 ago [172.31.147.179] State : Running, pid: 2387 [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... [172.31.146.232] The other application is: yum [172.31.146.232] Memory : 141 M RSS (451 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:16:35 2023 - 00:20 ago [172.31.146.232] State : Sleeping, pid: 2386 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:19 ago [172.31.145.202] State : Running, pid: 2383 [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:24 ago [172.31.147.179] State : Sleeping, pid: 2387 [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... [172.31.146.232] The other application is: yum [172.31.146.232] Memory : 141 M RSS (451 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:16:35 2023 - 00:22 ago [172.31.146.232] State : Sleeping, pid: 2386 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:21 ago [172.31.145.202] State : Running, pid: 2383 [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:26 ago [172.31.147.179] State : Sleeping, pid: 2387 [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... [172.31.146.232] The other application is: yum [172.31.146.232] Memory : 141 M RSS (451 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:16:35 2023 - 00:24 ago [172.31.146.232] State : Uninterruptible, pid: 2386 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:23 ago [172.31.145.202] State : Running, pid: 2383 [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:28 ago [172.31.147.179] State : Sleeping, pid: 2387 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:25 ago [172.31.145.202] State : Sleeping, pid: 2383 [172.31.146.232] 3 packages excluded due to repository priority protections [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:30 ago [172.31.147.179] State : Sleeping, pid: 2387 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:27 ago [172.31.145.202] State : Sleeping, pid: 2383 [172.31.146.232] Package device-mapper-persistent-data-0.7.3-3.amzn2.x86_64 already installed and latest version [172.31.146.232] Package 7:lvm2-2.02.187-6.amzn2.5.x86_64 already installed and latest version [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 139 M RSS (373 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:16:33 2023 - 00:32 ago [172.31.147.179] State : Running, pid: 2387 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:29 ago [172.31.145.202] State : Sleeping, pid: 2383 [172.31.147.179] Existing lock /var/run/yum.pid: another copy is running as pid 2736. [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 69 M RSS (361 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:17:06 2023 - 00:01 ago [172.31.147.179] State : Running, pid: 2736 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:31 ago [172.31.145.202] State : Sleeping, pid: 2383 [172.31.146.232] Package rsync-3.1.2-11.amzn2.0.2.x86_64 already installed and latest version [172.31.146.232] Resolving Dependencies [172.31.146.232] --> Running transaction check [172.31.146.232] ---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed [172.31.146.232] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.146.232] ---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed [172.31.146.232] ---> Package iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 will be installed [172.31.146.232] ---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed [172.31.146.232] ---> Package yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1 will be installed [172.31.146.232] --> Running transaction check [172.31.146.232] ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed [172.31.146.232] ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed [172.31.146.232] ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed [172.31.146.232] --> Finished Dependency Resolution [172.31.146.232] [172.31.146.232] Dependencies Resolved [172.31.146.232] [172.31.146.232] ================================================================================ [172.31.146.232] Package Arch Version Repository Size [172.31.146.232] ================================================================================ [172.31.146.232] Installing: [172.31.146.232] conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k [172.31.146.232] ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k [172.31.146.232] iproute-tc x86_64 5.10.0-2.amzn2.0.3 amzn2-core 432 k [172.31.146.232] socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k [172.31.146.232] yum-plugin-versionlock noarch 1.1.31-46.amzn2.0.1 amzn2-core 33 k [172.31.146.232] Installing for dependencies: [172.31.146.232] libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k [172.31.146.232] libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k [172.31.146.232] libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k [172.31.146.232] [172.31.146.232] Transaction Summary [172.31.146.232] ================================================================================ [172.31.146.232] Install 5 Packages (+3 Dependent packages) [172.31.146.232] [172.31.146.232] Total download size: 1.1 M [172.31.146.232] Installed size: 2.9 M [172.31.146.232] Downloading packages: [172.31.146.232] -------------------------------------------------------------------------------- [172.31.146.232] Total 
5.9 MB/s | 1.1 MB 00:00 [172.31.146.232] Running transaction check [172.31.146.232] Running transaction test [172.31.146.232] Transaction test succeeded [172.31.146.232] Running transaction [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 168 M RSS (459 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:17:06 2023 - 00:03 ago [172.31.147.179] State : Running, pid: 2736 [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 140 M RSS (374 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:16:36 2023 - 00:33 ago [172.31.145.202] State : Running, pid: 2383 [172.31.146.232] Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 1/8 [172.31.146.232] Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 2/8 [172.31.146.232] Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 3/8 [172.31.146.232] Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 4/8 [172.31.146.232] Installing : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 5/8 [172.31.146.232] Installing : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 6/8 [172.31.146.232] Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 7/8 [172.31.146.232] Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 8/8 [172.31.146.232] Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 1/8 [172.31.146.232] Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 2/8 [172.31.146.232] Verifying : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 3/8 [172.31.146.232] Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 4/8 [172.31.146.232] Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 5/8 [172.31.146.232] Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 6/8 [172.31.146.232] Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 7/8 [172.31.146.232] Verifying : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 8/8 [172.31.146.232] [172.31.146.232] Installed: [172.31.146.232] conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 [172.31.146.232] ebtables.x86_64 0:2.0.10-16.amzn2.0.1 [172.31.146.232] iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 [172.31.146.232] socat.x86_64 0:1.7.3.2-2.amzn2.0.1 [172.31.146.232] yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1 [172.31.146.232] [172.31.146.232] Dependency Installed: [172.31.146.232] libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 [172.31.146.232] libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 [172.31.146.232] libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 [172.31.146.232] [172.31.146.232] Complete! [172.31.146.232] + sudo yum versionlock delete containerd [172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit... [172.31.147.179] The other application is: yum [172.31.147.179] Memory : 182 M RSS (474 MB VSZ) [172.31.147.179] Started: Mon Sep 11 16:17:06 2023 - 00:05 ago [172.31.147.179] State : Running, pid: 2736 [172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.146.232] : versionlock [172.31.146.232] Error: Error: versionlock delete: no matches [172.31.146.232] + true [172.31.146.232] + sudo yum install -y 'containerd-1.6.*' [172.31.145.202] Existing lock /var/run/yum.pid: another copy is running as pid 2732. [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.145.202] The other application is: yum [172.31.145.202] Memory : 47 M RSS (339 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:17:10 2023 - 00:01 ago [172.31.145.202] State : Running, pid: 2732 [172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.146.232] : versionlock [172.31.146.232] 3 packages excluded due to repository priority protections [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 166 M RSS (457 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:17:10 2023 - 00:03 ago [172.31.145.202] State : Running, pid: 2732 [172.31.146.232] Resolving Dependencies [172.31.146.232] --> Running transaction check [172.31.146.232] ---> Package containerd.x86_64 0:1.6.19-1.amzn2.0.3 will be installed [172.31.146.232] --> Processing Dependency: runc for package: containerd-1.6.19-1.amzn2.0.3.x86_64 [172.31.146.232] --> Running transaction check [172.31.146.232] ---> Package runc.x86_64 0:1.1.7-3.amzn2 will be installed [172.31.146.232] --> Finished Dependency Resolution [172.31.146.232] [172.31.146.232] Dependencies Resolved [172.31.147.179] 3 packages excluded due to repository priority protections [172.31.146.232] [172.31.146.232] ================================================================================ [172.31.146.232] Package Arch Version Repository Size [172.31.146.232] ================================================================================ [172.31.146.232] Installing: [172.31.146.232] containerd x86_64 1.6.19-1.amzn2.0.3 amzn2extra-docker 28 M [172.31.146.232] Installing for dependencies: [172.31.146.232] runc x86_64 1.1.7-3.amzn2 amzn2extra-docker 3.0 M [172.31.146.232] [172.31.146.232] Transaction Summary [172.31.146.232] ================================================================================ [172.31.146.232] Install 1 Package (+1 Dependent package) [172.31.146.232] [172.31.146.232] Total download size: 31 M [172.31.146.232] Installed size: 111 M [172.31.146.232] Downloading packages: [172.31.146.232] -------------------------------------------------------------------------------- [172.31.146.232] Total 61 MB/s | 31 MB 00:00 [172.31.146.232] Running transaction check [172.31.146.232] Running transaction test [172.31.146.232] Transaction test succeeded [172.31.146.232] Running transaction [172.31.147.179] Package device-mapper-persistent-data-0.7.3-3.amzn2.x86_64 already installed and latest version [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... [172.31.145.202] The other application is: yum [172.31.145.202] Memory : 182 M RSS (474 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:17:10 2023 - 00:05 ago [172.31.145.202] State : Running, pid: 2732 [172.31.147.179] Package 7:lvm2-2.02.187-6.amzn2.5.x86_64 already installed and latest version [172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.145.202] The other application is: yum [172.31.145.202] Memory : 183 M RSS (475 MB VSZ) [172.31.145.202] Started: Mon Sep 11 16:17:10 2023 - 00:07 ago [172.31.145.202] State : Running, pid: 2732 [172.31.147.179] Package rsync-3.1.2-11.amzn2.0.2.x86_64 already installed and latest version [172.31.147.179] Resolving Dependencies [172.31.147.179] --> Running transaction check [172.31.147.179] ---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed [172.31.147.179] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.147.179] ---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed [172.31.147.179] ---> Package iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 will be installed [172.31.147.179] ---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed [172.31.147.179] ---> Package yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1 will be installed [172.31.147.179] --> Running transaction check [172.31.147.179] ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed [172.31.147.179] ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed [172.31.147.179] ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed [172.31.147.179] --> Finished Dependency Resolution [172.31.147.179] [172.31.147.179] Dependencies Resolved [172.31.147.179] [172.31.147.179] ================================================================================ [172.31.147.179] Package Arch Version Repository Size [172.31.147.179] ================================================================================ [172.31.147.179] Installing: [172.31.147.179] conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k [172.31.147.179] ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k [172.31.147.179] iproute-tc x86_64 5.10.0-2.amzn2.0.3 amzn2-core 432 k [172.31.147.179] socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k [172.31.147.179] yum-plugin-versionlock noarch 1.1.31-46.amzn2.0.1 amzn2-core 33 k [172.31.147.179] Installing for dependencies: [172.31.147.179] libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k [172.31.147.179] libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k [172.31.147.179] libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k [172.31.147.179] [172.31.147.179] Transaction Summary [172.31.147.179] ================================================================================ [172.31.147.179] Install 5 Packages (+3 Dependent packages) [172.31.147.179] [172.31.147.179] Total download size: 1.1 M [172.31.147.179] Installed size: 2.9 M [172.31.147.179] Downloading packages: [172.31.147.179] -------------------------------------------------------------------------------- [172.31.147.179] Total 
6.3 MB/s | 1.1 MB 00:00 [172.31.147.179] Running transaction check [172.31.147.179] Running transaction test [172.31.147.179] Transaction test succeeded [172.31.147.179] Running transaction [172.31.146.232] Installing : runc-1.1.7-3.amzn2.x86_64 1/2 [172.31.146.232] Installing : containerd-1.6.19-1.amzn2.0.3.x86_64 2/2 [172.31.146.232] Verifying : containerd-1.6.19-1.amzn2.0.3.x86_64 1/2 [172.31.147.179] Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 1/8 [172.31.147.179] Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 2/8 [172.31.146.232] Verifying : runc-1.1.7-3.amzn2.x86_64 2/2 [172.31.146.232] [172.31.146.232] Installed: [172.31.146.232] containerd.x86_64 0:1.6.19-1.amzn2.0.3 [172.31.146.232] [172.31.146.232] Dependency Installed: [172.31.146.232] runc.x86_64 0:1.1.7-3.amzn2 [172.31.146.232] [172.31.146.232] Complete! [172.31.147.179] Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 3/8 [172.31.146.232] + sudo yum versionlock add containerd [172.31.147.179] Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 4/8 [172.31.147.179] Installing : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 5/8 [172.31.147.179] Installing : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 6/8 [172.31.147.179] Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 7/8 [172.31.147.179] Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 8/8 [172.31.147.179] Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 1/8 [172.31.147.179] Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 2/8 [172.31.147.179] Verifying : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 3/8 [172.31.147.179] Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 4/8 [172.31.147.179] Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 5/8 [172.31.147.179] Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 6/8 [172.31.147.179] Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 7/8 [172.31.147.179] Verifying : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 8/8 [172.31.147.179] [172.31.147.179] Installed: [172.31.147.179] conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 [172.31.147.179] ebtables.x86_64 0:2.0.10-16.amzn2.0.1 [172.31.147.179] iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 [172.31.147.179] socat.x86_64 0:1.7.3.2-2.amzn2.0.1 [172.31.147.179] yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1 [172.31.147.179] [172.31.147.179] Dependency Installed: [172.31.147.179] libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 [172.31.147.179] libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 [172.31.147.179] libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 [172.31.147.179] [172.31.147.179] Complete! 
[172.31.145.202] 3 packages excluded due to repository priority protections [172.31.147.179] + sudo yum versionlock delete containerd [172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.146.232] : versionlock [172.31.146.232] Adding versionlock on: 0:containerd-1.6.19-1.amzn2.0.3 [172.31.146.232] versionlock added: 1 [172.31.146.232] ++ dirname /etc/containerd/config.toml [172.31.146.232] + sudo mkdir -p /etc/containerd [172.31.146.232] + sudo touch /etc/containerd/config.toml [172.31.146.232] + sudo chmod 600 /etc/containerd/config.toml [172.31.146.232] + cat [172.31.146.232] version = 2 [172.31.146.232] [172.31.146.232] [metrics] [172.31.146.232] address = "127.0.0.1:1338" [172.31.146.232] [172.31.146.232] [plugins] [172.31.146.232] [plugins."io.containerd.grpc.v1.cri"] [172.31.146.232] sandbox_image = "registry.k8s.io/pause:3.9" [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".containerd] [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes] [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc] [172.31.146.232] runtime_type = "io.containerd.runc.v2" [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] [172.31.146.232] SystemdCgroup = true [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".registry] [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".registry.mirrors] [172.31.146.232] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] [172.31.146.232] endpoint = ["https://registry-1.docker.io"] [172.31.146.232] [172.31.146.232] + sudo tee /etc/containerd/config.toml [172.31.146.232] + cat [172.31.146.232] runtime-endpoint: unix:///run/containerd/containerd.sock [172.31.146.232] + sudo tee /etc/crictl.yaml [172.31.146.232] + sudo systemctl daemon-reload [172.31.146.232] + sudo systemctl enable containerd [172.31.146.232] Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service. [172.31.146.232] + sudo systemctl restart containerd [172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.147.179] : versionlock [172.31.147.179] Error: Error: versionlock delete: no matches [172.31.147.179] + true [172.31.147.179] + sudo yum install -y 'containerd-1.6.*' [172.31.146.232] + sudo mkdir -p /opt/bin /etc/kubernetes/pki /etc/kubernetes/manifests [172.31.146.232] + rm -rf /tmp/k8s-binaries [172.31.146.232] + mkdir -p /tmp/k8s-binaries [172.31.146.232] + cd /tmp/k8s-binaries [172.31.146.232] + sudo yum install -y kubelet-1.27.5 kubeadm-1.27.5 kubectl-1.27.5 kubernetes-cni-1.2.0 cri-tools-1.27.1 [172.31.145.202] Package device-mapper-persistent-data-0.7.3-3.amzn2.x86_64 already installed and latest version [172.31.145.202] Package 7:lvm2-2.02.187-6.amzn2.5.x86_64 already installed and latest version [172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.147.179] : versionlock [172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd, [172.31.146.232] : versionlock [172.31.146.232] Existing lock /var/run/yum.pid: another copy is running as pid 2912. [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.146.232] The other application is: yum [172.31.146.232] Memory : 68 M RSS (285 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:17:21 2023 - 00:01 ago [172.31.146.232] State : Running, pid: 2912 [172.31.147.179] 3 packages excluded due to repository priority protections [172.31.147.179] Resolving Dependencies [172.31.147.179] --> Running transaction check [172.31.147.179] ---> Package containerd.x86_64 0:1.6.19-1.amzn2.0.3 will be installed [172.31.147.179] --> Processing Dependency: runc for package: containerd-1.6.19-1.amzn2.0.3.x86_64 [172.31.147.179] --> Running transaction check [172.31.147.179] ---> Package runc.x86_64 0:1.1.7-3.amzn2 will be installed [172.31.147.179] --> Finished Dependency Resolution [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... [172.31.146.232] The other application is: yum [172.31.146.232] Memory : 169 M RSS (385 MB VSZ) [172.31.146.232] Started: Mon Sep 11 16:17:21 2023 - 00:03 ago [172.31.146.232] State : Running, pid: 2912 [172.31.147.179] [172.31.147.179] Dependencies Resolved [172.31.147.179] [172.31.147.179] ================================================================================ [172.31.147.179] Package Arch Version Repository Size [172.31.147.179] ================================================================================ [172.31.147.179] Installing: [172.31.147.179] containerd x86_64 1.6.19-1.amzn2.0.3 amzn2extra-docker 28 M [172.31.147.179] Installing for dependencies: [172.31.147.179] runc x86_64 1.1.7-3.amzn2 amzn2extra-docker 3.0 M [172.31.147.179] [172.31.147.179] Transaction Summary [172.31.147.179] ================================================================================ [172.31.147.179] Install 1 Package (+1 Dependent package) [172.31.147.179] [172.31.147.179] Total download size: 31 M [172.31.147.179] Installed size: 111 M [172.31.147.179] Downloading packages: [172.31.145.202] Package rsync-3.1.2-11.amzn2.0.2.x86_64 already installed and latest version [172.31.145.202] Resolving Dependencies [172.31.145.202] --> Running transaction check [172.31.145.202] ---> Package conntrack-tools.x86_64 0:1.4.4-5.amzn2.2 will be installed [172.31.145.202] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-5.amzn2.2.x86_64 [172.31.145.202] ---> Package ebtables.x86_64 0:2.0.10-16.amzn2.0.1 will be installed [172.31.145.202] ---> Package iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3 will be installed [172.31.145.202] ---> Package socat.x86_64 0:1.7.3.2-2.amzn2.0.1 will be installed [172.31.145.202] ---> Package yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1 will be installed [172.31.145.202] --> Running transaction check [172.31.145.202] ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1 will be installed 
[172.31.145.202] ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1 will be installed [172.31.145.202] ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2 will be installed [172.31.147.179] -------------------------------------------------------------------------------- [172.31.147.179] Total 60 MB/s | 31 MB 00:00 [172.31.147.179] Running transaction check [172.31.147.179] Running transaction test [172.31.147.179] Transaction test succeeded [172.31.147.179] Running transaction [172.31.145.202] --> Finished Dependency Resolution [172.31.145.202] [172.31.145.202] Dependencies Resolved [172.31.145.202] [172.31.145.202] ================================================================================ [172.31.145.202] Package Arch Version Repository Size [172.31.145.202] ================================================================================ [172.31.145.202] Installing: [172.31.145.202] conntrack-tools x86_64 1.4.4-5.amzn2.2 amzn2-core 186 k [172.31.145.202] ebtables x86_64 2.0.10-16.amzn2.0.1 amzn2-core 122 k [172.31.145.202] iproute-tc x86_64 5.10.0-2.amzn2.0.3 amzn2-core 432 k [172.31.145.202] socat x86_64 1.7.3.2-2.amzn2.0.1 amzn2-core 291 k [172.31.145.202] yum-plugin-versionlock noarch 1.1.31-46.amzn2.0.1 amzn2-core 33 k [172.31.145.202] Installing for dependencies: [172.31.145.202] libnetfilter_cthelper x86_64 1.0.0-10.amzn2.1 amzn2-core 18 k [172.31.145.202] libnetfilter_cttimeout x86_64 1.0.0-6.amzn2.1 amzn2-core 18 k [172.31.145.202] libnetfilter_queue x86_64 1.0.2-2.amzn2.0.2 amzn2-core 24 k [172.31.145.202] [172.31.145.202] Transaction Summary [172.31.145.202] ================================================================================ [172.31.145.202] Install 5 Packages (+3 Dependent packages) [172.31.145.202] [172.31.145.202] Total download size: 1.1 M [172.31.145.202] Installed size: 2.9 M [172.31.145.202] Downloading packages: [172.31.145.202] -------------------------------------------------------------------------------- [172.31.145.202] Total 6.0 MB/s | 1.1 MB 00:00 [172.31.145.202] Running transaction check [172.31.145.202] Running transaction test [172.31.145.202] Transaction test succeeded [172.31.145.202] Running transaction [172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit... 
[172.31.146.232] The other application is: yum
[172.31.146.232] Memory : 184 M RSS (400 MB VSZ)
[172.31.146.232] Started: Mon Sep 11 16:17:21 2023 - 00:05 ago
[172.31.146.232] State : Running, pid: 2912
[172.31.145.202] Installing : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 1/8
[172.31.145.202] Installing : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 2/8
[172.31.145.202] Installing : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 3/8
[172.31.145.202] Installing : conntrack-tools-1.4.4-5.amzn2.2.x86_64 4/8
[172.31.145.202] Installing : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 5/8
[172.31.145.202] Installing : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 6/8
[172.31.145.202] Installing : ebtables-2.0.10-16.amzn2.0.1.x86_64 7/8
[172.31.145.202] Installing : socat-1.7.3.2-2.amzn2.0.1.x86_64 8/8
[172.31.145.202] Verifying : socat-1.7.3.2-2.amzn2.0.1.x86_64 1/8
[172.31.145.202] Verifying : ebtables-2.0.10-16.amzn2.0.1.x86_64 2/8
[172.31.145.202] Verifying : yum-plugin-versionlock-1.1.31-46.amzn2.0.1.noarch 3/8
[172.31.145.202] Verifying : libnetfilter_queue-1.0.2-2.amzn2.0.2.x86_64 4/8
[172.31.145.202] Verifying : conntrack-tools-1.4.4-5.amzn2.2.x86_64 5/8
[172.31.145.202] Verifying : libnetfilter_cttimeout-1.0.0-6.amzn2.1.x86_64 6/8
[172.31.145.202] Verifying : libnetfilter_cthelper-1.0.0-10.amzn2.1.x86_64 7/8
[172.31.145.202] Verifying : iproute-tc-5.10.0-2.amzn2.0.3.x86_64 8/8
[172.31.145.202]
[172.31.145.202] Installed:
[172.31.145.202] conntrack-tools.x86_64 0:1.4.4-5.amzn2.2
[172.31.145.202] ebtables.x86_64 0:2.0.10-16.amzn2.0.1
[172.31.145.202] iproute-tc.x86_64 0:5.10.0-2.amzn2.0.3
[172.31.145.202] socat.x86_64 0:1.7.3.2-2.amzn2.0.1
[172.31.145.202] yum-plugin-versionlock.noarch 0:1.1.31-46.amzn2.0.1
[172.31.145.202]
[172.31.145.202] Dependency Installed:
[172.31.145.202] libnetfilter_cthelper.x86_64 0:1.0.0-10.amzn2.1
[172.31.145.202] libnetfilter_cttimeout.x86_64 0:1.0.0-6.amzn2.1
[172.31.145.202] libnetfilter_queue.x86_64 0:1.0.2-2.amzn2.0.2
[172.31.145.202]
[172.31.145.202] Complete!
[172.31.145.202] + sudo yum versionlock delete containerd
[172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.145.202] : versionlock
[172.31.145.202] Error: Error: versionlock delete: no matches
[172.31.145.202] + true
[172.31.145.202] + sudo yum install -y 'containerd-1.6.*'
[172.31.146.232] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.146.232] The other application is: yum
[172.31.146.232] Memory : 184 M RSS (401 MB VSZ)
[172.31.146.232] Started: Mon Sep 11 16:17:21 2023 - 00:07 ago
[172.31.146.232] State : Running, pid: 2912
[172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.145.202] : versionlock
[172.31.147.179] Installing : runc-1.1.7-3.amzn2.x86_64 1/2
[172.31.147.179] Installing : containerd-1.6.19-1.amzn2.0.3.x86_64 2/2
[172.31.147.179] Verifying : containerd-1.6.19-1.amzn2.0.3.x86_64 1/2
[172.31.147.179] Verifying : runc-1.1.7-3.amzn2.x86_64 2/2
[172.31.147.179]
[172.31.147.179] Installed:
[172.31.147.179] containerd.x86_64 0:1.6.19-1.amzn2.0.3
[172.31.147.179]
[172.31.147.179] Dependency Installed:
[172.31.147.179] runc.x86_64 0:1.1.7-3.amzn2
[172.31.147.179]
[172.31.147.179] Complete!
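Side note on the versionlock dance above: the `Error: Error: versionlock delete: no matches` is harmless on a fresh node, since there is no existing pin to remove, and the script swallows it (hence the `+ true`). The same pin/refresh cycle can be reproduced by hand on a node, a minimal sketch:

```console
# Show current pins (empty on a fresh node).
$ sudo yum versionlock list

# Drop a stale containerd pin if one exists; ignore "no matches" on first run.
$ sudo yum versionlock delete containerd || true

# Install the newest containerd 1.6.x and pin it so a routine
# `yum update` cannot move the runtime underneath Kubernetes.
$ sudo yum install -y 'containerd-1.6.*'
$ sudo yum versionlock add containerd
```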
[172.31.145.202] 3 packages excluded due to repository priority protections
[172.31.147.179] + sudo yum versionlock add containerd
[172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.147.179] : versionlock
[172.31.147.179] Adding versionlock on: 0:containerd-1.6.19-1.amzn2.0.3
[172.31.147.179] versionlock added: 1
[172.31.147.179] ++ dirname /etc/containerd/config.toml
[172.31.147.179] + sudo mkdir -p /etc/containerd
[172.31.147.179] + sudo touch /etc/containerd/config.toml
[172.31.147.179] + sudo chmod 600 /etc/containerd/config.toml
[172.31.147.179] + cat
[172.31.147.179] version = 2
[172.31.147.179]
[172.31.147.179] [metrics]
[172.31.147.179] address = "127.0.0.1:1338"
[172.31.147.179]
[172.31.147.179] [plugins]
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri"]
[172.31.147.179] sandbox_image = "registry.k8s.io/pause:3.9"
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".containerd]
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[172.31.147.179] runtime_type = "io.containerd.runc.v2"
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
[172.31.147.179] SystemdCgroup = true
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".registry]
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[172.31.147.179] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
[172.31.147.179] endpoint = ["https://registry-1.docker.io"]
[172.31.147.179]
[172.31.145.202] Resolving Dependencies
[172.31.147.179] + sudo tee /etc/containerd/config.toml
[172.31.147.179] + cat
[172.31.147.179] + sudo tee /etc/crictl.yaml
[172.31.145.202] --> Running transaction check
[172.31.145.202] ---> Package containerd.x86_64 0:1.6.19-1.amzn2.0.3 will be installed
[172.31.145.202] --> Processing Dependency: runc for package: containerd-1.6.19-1.amzn2.0.3.x86_64
[172.31.147.179] + sudo systemctl daemon-reload
[172.31.147.179] runtime-endpoint: unix:///run/containerd/containerd.sock
[172.31.145.202] --> Running transaction check
[172.31.145.202] ---> Package runc.x86_64 0:1.1.7-3.amzn2 will be installed
[172.31.147.179] + sudo systemctl enable containerd
[172.31.147.179] Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
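At this point the script has written /etc/containerd/config.toml (systemd cgroup driver, pause 3.9 sandbox image) and pointed crictl at the containerd socket via /etc/crictl.yaml. A quick way to sanity-check the runtime on a node after the restart, assuming those files are in place:

```console
# Confirm the service came up after the restart.
$ sudo systemctl is-active containerd

# crictl reads the runtime endpoint from the /etc/crictl.yaml written above.
$ sudo crictl version

# Dump the merged containerd config to confirm SystemdCgroup = true took effect.
$ sudo containerd config dump | grep SystemdCgroup
```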
[172.31.145.202] --> Finished Dependency Resolution
[172.31.147.179] + sudo systemctl restart containerd
[172.31.145.202]
[172.31.145.202] Dependencies Resolved
[172.31.145.202]
[172.31.145.202] ================================================================================
[172.31.145.202] Package Arch Version Repository Size
[172.31.145.202] ================================================================================
[172.31.145.202] Installing:
[172.31.145.202] containerd x86_64 1.6.19-1.amzn2.0.3 amzn2extra-docker 28 M
[172.31.145.202] Installing for dependencies:
[172.31.145.202] runc x86_64 1.1.7-3.amzn2 amzn2extra-docker 3.0 M
[172.31.145.202]
[172.31.145.202] Transaction Summary
[172.31.145.202] ================================================================================
[172.31.145.202] Install 1 Package (+1 Dependent package)
[172.31.145.202]
[172.31.145.202] Total download size: 31 M
[172.31.145.202] Installed size: 111 M
[172.31.145.202] Downloading packages:
[172.31.147.179] + sudo mkdir -p /opt/bin /etc/kubernetes/pki /etc/kubernetes/manifests
[172.31.147.179] + rm -rf /tmp/k8s-binaries
[172.31.147.179] + mkdir -p /tmp/k8s-binaries
[172.31.147.179] + cd /tmp/k8s-binaries
[172.31.147.179] + sudo yum install -y kubelet-1.27.5 kubeadm-1.27.5 kubectl-1.27.5 kubernetes-cni-1.2.0 cri-tools-1.27.1
[172.31.146.232] 3 packages excluded due to repository priority protections
[172.31.145.202] --------------------------------------------------------------------------------
[172.31.145.202] Total 60 MB/s | 31 MB 00:00
[172.31.145.202] Running transaction check
[172.31.145.202] Running transaction test
[172.31.145.202] Transaction test succeeded
[172.31.145.202] Running transaction
[172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.147.179] : versionlock
[172.31.147.179] Existing lock /var/run/yum.pid: another copy is running as pid 2913.
[172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.147.179] The other application is: yum
[172.31.147.179] Memory : 63 M RSS (280 MB VSZ)
[172.31.147.179] Started: Mon Sep 11 16:17:31 2023 - 00:01 ago
[172.31.147.179] State : Running, pid: 2913
[172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.147.179] The other application is: yum
[172.31.147.179] Memory : 170 M RSS (385 MB VSZ)
[172.31.147.179] Started: Mon Sep 11 16:17:31 2023 - 00:03 ago
[172.31.147.179] State : Running, pid: 2913
[172.31.146.232] No package cri-tools-1.27.1 available.
[172.31.146.232] Resolving Dependencies
[172.31.146.232] --> Running transaction check
[172.31.146.232] ---> Package kubeadm.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.146.232] --> Processing Dependency: cri-tools >= 1.25.0 for package: kubeadm-1.27.5-150500.1.1.x86_64
[172.31.146.232] ---> Package kubectl.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.146.232] ---> Package kubelet.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.146.232] ---> Package kubernetes-cni.x86_64 0:1.2.0-150500.2.1 will be installed
[172.31.146.232] --> Running transaction check
[172.31.146.232] ---> Package cri-tools.x86_64 0:1.26.1-1.amzn2.0.2 will be installed
[172.31.146.232] --> Finished Dependency Resolution
[172.31.147.179] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.147.179] The other application is: yum
[172.31.147.179] Memory : 184 M RSS (400 MB VSZ)
[172.31.147.179] Started: Mon Sep 11 16:17:31 2023 - 00:05 ago
[172.31.147.179] State : Running, pid: 2913
[172.31.145.202] Installing : runc-1.1.7-3.amzn2.x86_64 1/2
[172.31.146.232]
[172.31.146.232] Dependencies Resolved
[172.31.146.232]
[172.31.146.232] ================================================================================
[172.31.146.232] Package Arch Version Repository Size
[172.31.146.232] ================================================================================
[172.31.146.232] Installing:
[172.31.146.232] kubeadm x86_64 1.27.5-150500.1.1 kubernetes 9.6 M
[172.31.146.232] kubectl x86_64 1.27.5-150500.1.1 kubernetes 9.9 M
[172.31.146.232] kubelet x86_64 1.27.5-150500.1.1 kubernetes 18 M
[172.31.146.232] kubernetes-cni x86_64 1.2.0-150500.2.1 kubernetes 6.2 M
[172.31.146.232] Installing for dependencies:
[172.31.146.232] cri-tools x86_64 1.26.1-1.amzn2.0.2 amzn2-core 18 M
[172.31.146.232]
[172.31.146.232] Transaction Summary
[172.31.146.232] ================================================================================
[172.31.146.232] Install 4 Packages (+1 Dependent package)
[172.31.146.232]
[172.31.146.232] Total download size: 62 M
[172.31.146.232] Installed size: 316 M
[172.31.146.232] Downloading packages:
[172.31.145.202] Installing : containerd-1.6.19-1.amzn2.0.3.x86_64 2/2
[172.31.145.202] Verifying : containerd-1.6.19-1.amzn2.0.3.x86_64 1/2
[172.31.145.202] Verifying : runc-1.1.7-3.amzn2.x86_64 2/2
[172.31.145.202]
[172.31.145.202] Installed:
[172.31.145.202] containerd.x86_64 0:1.6.19-1.amzn2.0.3
[172.31.145.202]
[172.31.145.202] Dependency Installed:
[172.31.145.202] runc.x86_64 0:1.1.7-3.amzn2
[172.31.145.202]
[172.31.145.202] Complete!
[172.31.145.202] + sudo yum versionlock add containerd
[172.31.146.232] warning: /var/cache/yum/x86_64/2/kubernetes/packages/kubeadm-1.27.5-150500.1.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 9a296436: NOKEY
[172.31.146.232] Public key for kubeadm-1.27.5-150500.1.1.x86_64.rpm is not installed
[172.31.146.232] --------------------------------------------------------------------------------
[172.31.146.232] Total 60 MB/s | 62 MB 00:01
[172.31.146.232] Retrieving key from https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.146.232] Importing GPG key 0x9A296436:
[172.31.146.232] Userid : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
[172.31.146.232] Fingerprint: de15 b144 86cd 377b 9e87 6e1a 2346 54da 9a29 6436
[172.31.146.232] From : https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.146.232] Running transaction check
[172.31.146.232] Running transaction test
[172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.145.202] : versionlock
[172.31.146.232] Transaction test succeeded
[172.31.146.232] Running transaction
[172.31.145.202] Adding versionlock on: 0:containerd-1.6.19-1.amzn2.0.3
[172.31.145.202] versionlock added: 1
[172.31.145.202] ++ dirname /etc/containerd/config.toml
[172.31.145.202] + sudo mkdir -p /etc/containerd
[172.31.145.202] + sudo touch /etc/containerd/config.toml
[172.31.145.202] + sudo chmod 600 /etc/containerd/config.toml
[172.31.145.202] + cat
[172.31.145.202] version = 2
[172.31.145.202]
[172.31.145.202] [metrics]
[172.31.145.202] address = "127.0.0.1:1338"
[172.31.145.202]
[172.31.145.202] [plugins]
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri"]
[172.31.145.202] sandbox_image = "registry.k8s.io/pause:3.9"
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".containerd]
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
[172.31.145.202] runtime_type = "io.containerd.runc.v2"
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
[172.31.145.202] SystemdCgroup = true
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".registry]
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[172.31.145.202] [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
[172.31.145.202] endpoint = ["https://registry-1.docker.io"]
[172.31.145.202]
[172.31.145.202] + sudo tee /etc/containerd/config.toml
[172.31.145.202] + cat
[172.31.145.202] + sudo tee /etc/crictl.yaml
[172.31.145.202] + sudo systemctl daemon-reload
[172.31.145.202] runtime-endpoint: unix:///run/containerd/containerd.sock
[172.31.145.202] + sudo systemctl enable containerd
[172.31.145.202] Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[172.31.145.202] + sudo systemctl restart containerd
[172.31.145.202] + sudo mkdir -p /opt/bin /etc/kubernetes/pki /etc/kubernetes/manifests
[172.31.145.202] + rm -rf /tmp/k8s-binaries
[172.31.145.202] + mkdir -p /tmp/k8s-binaries
[172.31.145.202] + cd /tmp/k8s-binaries
[172.31.145.202] + sudo yum install -y kubelet-1.27.5 kubeadm-1.27.5 kubectl-1.27.5 kubernetes-cni-1.2.0 cri-tools-1.27.1
[172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.145.202] : versionlock
[172.31.145.202] Existing lock /var/run/yum.pid: another copy is running as pid 2891.
[172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.145.202] The other application is: yum
[172.31.145.202] Memory : 74 M RSS (290 MB VSZ)
[172.31.145.202] Started: Mon Sep 11 16:17:37 2023 - 00:02 ago
[172.31.145.202] State : Running, pid: 2891
[172.31.147.179] 3 packages excluded due to repository priority protections
[172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.145.202] The other application is: yum
[172.31.145.202] Memory : 169 M RSS (385 MB VSZ)
[172.31.145.202] Started: Mon Sep 11 16:17:37 2023 - 00:04 ago
[172.31.145.202] State : Running, pid: 2891
[172.31.147.179] No package cri-tools-1.27.1 available.
[172.31.146.232] Installing : kubernetes-cni-1.2.0-150500.2.1.x86_64 1/5
[172.31.147.179] Resolving Dependencies
[172.31.147.179] --> Running transaction check
[172.31.147.179] ---> Package kubeadm.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.147.179] --> Processing Dependency: cri-tools >= 1.25.0 for package: kubeadm-1.27.5-150500.1.1.x86_64
[172.31.147.179] ---> Package kubectl.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.147.179] ---> Package kubelet.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.147.179] ---> Package kubernetes-cni.x86_64 0:1.2.0-150500.2.1 will be installed
[172.31.147.179] --> Running transaction check
[172.31.147.179] ---> Package cri-tools.x86_64 0:1.26.1-1.amzn2.0.2 will be installed
[172.31.145.202] Another app is currently holding the yum lock; waiting for it to exit...
[172.31.145.202] The other application is: yum
[172.31.145.202] Memory : 183 M RSS (400 MB VSZ)
[172.31.145.202] Started: Mon Sep 11 16:17:37 2023 - 00:06 ago
[172.31.145.202] State : Running, pid: 2891
[172.31.147.179] --> Finished Dependency Resolution
[172.31.147.179]
[172.31.147.179] Dependencies Resolved
[172.31.147.179]
[172.31.147.179] ================================================================================
[172.31.147.179] Package Arch Version Repository Size
[172.31.147.179] ================================================================================
[172.31.147.179] Installing:
[172.31.147.179] kubeadm x86_64 1.27.5-150500.1.1 kubernetes 9.6 M
[172.31.147.179] kubectl x86_64 1.27.5-150500.1.1 kubernetes 9.9 M
[172.31.147.179] kubelet x86_64 1.27.5-150500.1.1 kubernetes 18 M
[172.31.147.179] kubernetes-cni x86_64 1.2.0-150500.2.1 kubernetes 6.2 M
[172.31.147.179] Installing for dependencies:
[172.31.147.179] cri-tools x86_64 1.26.1-1.amzn2.0.2 amzn2-core 18 M
[172.31.147.179]
[172.31.147.179] Transaction Summary
[172.31.147.179] ================================================================================
[172.31.147.179] Install 4 Packages (+1 Dependent package)
[172.31.147.179]
[172.31.147.179] Total download size: 62 M
[172.31.147.179] Installed size: 316 M
[172.31.147.179] Downloading packages:
[172.31.147.179] warning: /var/cache/yum/x86_64/2/kubernetes/packages/kubeadm-1.27.5-150500.1.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 9a296436: NOKEY
[172.31.147.179] Public key for kubeadm-1.27.5-150500.1.1.x86_64.rpm is not installed
[172.31.147.179] --------------------------------------------------------------------------------
[172.31.147.179] Total 88 MB/s | 62 MB 00:00
[172.31.147.179] Retrieving key from https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.147.179] Importing GPG key 0x9A296436:
[172.31.147.179] Userid : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
[172.31.147.179] Fingerprint: de15 b144 86cd 377b 9e87 6e1a 2346 54da 9a29 6436
[172.31.147.179] From : https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.147.179] Running transaction check
[172.31.147.179] Running transaction test
[172.31.147.179] Transaction test succeeded
[172.31.147.179] Running transaction
[172.31.146.232] Installing : kubelet-1.27.5-150500.1.1.x86_64 2/5
[172.31.145.202] 3 packages excluded due to repository priority protections
[172.31.146.232] Installing : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.147.179] Installing : kubernetes-cni-1.2.0-150500.2.1.x86_64 1/5
[172.31.145.202] No package cri-tools-1.27.1 available.
[172.31.146.232] Installing : kubectl-1.27.5-150500.1.1.x86_64 4/5
[172.31.146.232] Installing : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.146.232] Verifying : kubelet-1.27.5-150500.1.1.x86_64 1/5
[172.31.146.232] Verifying : kubectl-1.27.5-150500.1.1.x86_64 2/5
[172.31.146.232] Verifying : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.146.232] Verifying : kubernetes-cni-1.2.0-150500.2.1.x86_64 4/5
[172.31.145.202] Resolving Dependencies
[172.31.145.202] --> Running transaction check
[172.31.145.202] ---> Package kubeadm.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.145.202] --> Processing Dependency: cri-tools >= 1.25.0 for package: kubeadm-1.27.5-150500.1.1.x86_64
[172.31.146.232] Verifying : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.146.232]
[172.31.146.232] Installed:
[172.31.146.232] kubeadm.x86_64 0:1.27.5-150500.1.1 kubectl.x86_64 0:1.27.5-150500.1.1
[172.31.146.232] kubelet.x86_64 0:1.27.5-150500.1.1 kubernetes-cni.x86_64 0:1.2.0-150500.2.1
[172.31.146.232]
[172.31.146.232] Dependency Installed:
[172.31.146.232] cri-tools.x86_64 0:1.26.1-1.amzn2.0.2
[172.31.146.232]
[172.31.145.202] ---> Package kubectl.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.145.202] ---> Package kubelet.x86_64 0:1.27.5-150500.1.1 will be installed
[172.31.146.232] Complete!
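Worth noting in the output above: yum reports `No package cri-tools-1.27.1 available.` on every node, then falls back to cri-tools-1.26.1 from amzn2-core. That older build still satisfies kubeadm's `cri-tools >= 1.25.0` dependency, so the install proceeds. To see which versions each repository actually offers (a diagnostic sketch, not part of the KubeOne flow):

```console
# List every cri-tools build visible to yum, with its source repository.
$ sudo yum --showduplicates list cri-tools

# Confirm what actually got installed.
$ rpm -q cri-tools
```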
[172.31.145.202] ---> Package kubernetes-cni.x86_64 0:1.2.0-150500.2.1 will be installed
[172.31.145.202] --> Running transaction check
[172.31.145.202] ---> Package cri-tools.x86_64 0:1.26.1-1.amzn2.0.2 will be installed
[172.31.146.232] + sudo yum versionlock add kubelet kubeadm kubectl kubernetes-cni cri-tools
[172.31.145.202] --> Finished Dependency Resolution
[172.31.145.202]
[172.31.145.202] Dependencies Resolved
[172.31.145.202]
[172.31.145.202] ================================================================================
[172.31.145.202] Package Arch Version Repository Size
[172.31.145.202] ================================================================================
[172.31.145.202] Installing:
[172.31.145.202] kubeadm x86_64 1.27.5-150500.1.1 kubernetes 9.6 M
[172.31.145.202] kubectl x86_64 1.27.5-150500.1.1 kubernetes 9.9 M
[172.31.145.202] kubelet x86_64 1.27.5-150500.1.1 kubernetes 18 M
[172.31.145.202] kubernetes-cni x86_64 1.2.0-150500.2.1 kubernetes 6.2 M
[172.31.145.202] Installing for dependencies:
[172.31.145.202] cri-tools x86_64 1.26.1-1.amzn2.0.2 amzn2-core 18 M
[172.31.145.202]
[172.31.145.202] Transaction Summary
[172.31.145.202] ================================================================================
[172.31.145.202] Install 4 Packages (+1 Dependent package)
[172.31.145.202]
[172.31.145.202] Total download size: 62 M
[172.31.145.202] Installed size: 316 M
[172.31.145.202] Downloading packages:
[172.31.145.202] warning: /var/cache/yum/x86_64/2/kubernetes/packages/kubectl-1.27.5-150500.1.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 9a296436: NOKEY
[172.31.145.202] Public key for kubectl-1.27.5-150500.1.1.x86_64.rpm is not installed
[172.31.146.232] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.146.232] : versionlock
[172.31.146.232] Adding versionlock on: 0:kubelet-1.27.5-150500.1.1
[172.31.146.232] Adding versionlock on: 0:kubeadm-1.27.5-150500.1.1
[172.31.146.232] Adding versionlock on: 0:kubectl-1.27.5-150500.1.1
[172.31.146.232] Adding versionlock on: 0:kubernetes-cni-1.2.0-150500.2.1
[172.31.146.232] Adding versionlock on: 0:cri-tools-1.26.1-1.amzn2.0.2
[172.31.146.232] versionlock added: 5
[172.31.146.232] + sudo systemctl daemon-reload
[172.31.146.232] + sudo systemctl enable --now kubelet
[172.31.146.232] Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[172.31.145.202] --------------------------------------------------------------------------------
[172.31.145.202] Total 76 MB/s | 62 MB 00:00
[172.31.145.202] Retrieving key from https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.147.179] Installing : kubelet-1.27.5-150500.1.1.x86_64 2/5
[172.31.146.232] + sudo systemctl restart kubelet
[172.31.145.202] Importing GPG key 0x9A296436:
[172.31.145.202] Userid : "isv:kubernetes OBS Project <isv:kubernetes@build.opensuse.org>"
[172.31.145.202] Fingerprint: de15 b144 86cd 377b 9e87 6e1a 2346 54da 9a29 6436
[172.31.145.202] From : https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
[172.31.145.202] Running transaction check
[172.31.145.202] Running transaction test
[172.31.145.202] Transaction test succeeded
[172.31.145.202] Running transaction
[172.31.147.179] Installing : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.147.179] Installing : kubectl-1.27.5-150500.1.1.x86_64 4/5
[172.31.147.179] Installing : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.147.179] Verifying : kubelet-1.27.5-150500.1.1.x86_64 1/5
[172.31.147.179] Verifying : kubectl-1.27.5-150500.1.1.x86_64 2/5
[172.31.147.179] Verifying : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.147.179] Verifying : kubernetes-cni-1.2.0-150500.2.1.x86_64 4/5
[172.31.147.179] Verifying : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.147.179]
[172.31.147.179] Installed:
[172.31.147.179] kubeadm.x86_64 0:1.27.5-150500.1.1 kubectl.x86_64 0:1.27.5-150500.1.1
[172.31.147.179] kubelet.x86_64 0:1.27.5-150500.1.1 kubernetes-cni.x86_64 0:1.2.0-150500.2.1
[172.31.147.179]
[172.31.147.179] Dependency Installed:
[172.31.147.179] cri-tools.x86_64 0:1.26.1-1.amzn2.0.2
[172.31.147.179]
[172.31.147.179] Complete!
[172.31.147.179] + sudo yum versionlock add kubelet kubeadm kubectl kubernetes-cni cri-tools
[172.31.147.179] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.147.179] : versionlock
[172.31.147.179] Adding versionlock on: 0:kubelet-1.27.5-150500.1.1
[172.31.147.179] Adding versionlock on: 0:kubeadm-1.27.5-150500.1.1
[172.31.147.179] Adding versionlock on: 0:kubectl-1.27.5-150500.1.1
[172.31.147.179] Adding versionlock on: 0:kubernetes-cni-1.2.0-150500.2.1
[172.31.147.179] Adding versionlock on: 0:cri-tools-1.26.1-1.amzn2.0.2
[172.31.147.179] versionlock added: 5
[172.31.147.179] + sudo systemctl daemon-reload
[172.31.147.179] + sudo systemctl enable --now kubelet
[172.31.147.179] Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
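One thing that can look alarming here: kubelet is enabled and restarted before kubeadm has written /var/lib/kubelet/config.yaml, so it is normal for the unit to restart in a loop until `kubeadm init`/`join` completes further down. If a node seems stuck at a later step, the unit log is the first place to look; a minimal sketch:

```console
# Expect restarts until kubeadm writes /var/lib/kubelet/config.yaml.
$ sudo systemctl status kubelet

# Follow the kubelet log while kubeadm initializes the node.
$ sudo journalctl -u kubelet -f
```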
[172.31.147.179] + sudo systemctl restart kubelet
[172.31.145.202] Installing : kubernetes-cni-1.2.0-150500.2.1.x86_64 1/5
[172.31.145.202] Installing : kubelet-1.27.5-150500.1.1.x86_64 2/5
[172.31.145.202] Installing : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.145.202] Installing : kubectl-1.27.5-150500.1.1.x86_64 4/5
[172.31.145.202] Installing : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.145.202] Verifying : kubelet-1.27.5-150500.1.1.x86_64 1/5
[172.31.145.202] Verifying : kubectl-1.27.5-150500.1.1.x86_64 2/5
[172.31.145.202] Verifying : cri-tools-1.26.1-1.amzn2.0.2.x86_64 3/5
[172.31.145.202] Verifying : kubernetes-cni-1.2.0-150500.2.1.x86_64 4/5
[172.31.145.202] Verifying : kubeadm-1.27.5-150500.1.1.x86_64 5/5
[172.31.145.202]
[172.31.145.202] Installed:
[172.31.145.202] kubeadm.x86_64 0:1.27.5-150500.1.1 kubectl.x86_64 0:1.27.5-150500.1.1
[172.31.145.202] kubelet.x86_64 0:1.27.5-150500.1.1 kubernetes-cni.x86_64 0:1.2.0-150500.2.1
[172.31.145.202]
[172.31.145.202] Dependency Installed:
[172.31.145.202] cri-tools.x86_64 0:1.26.1-1.amzn2.0.2
[172.31.145.202]
[172.31.145.202] Complete!
[172.31.145.202] + sudo yum versionlock add kubelet kubeadm kubectl kubernetes-cni cri-tools
[172.31.145.202] Loaded plugins: extras_suggestions, langpacks, priorities, update-motd,
[172.31.145.202] : versionlock
[172.31.145.202] Adding versionlock on: 0:kubelet-1.27.5-150500.1.1
[172.31.145.202] Adding versionlock on: 0:kubeadm-1.27.5-150500.1.1
[172.31.145.202] Adding versionlock on: 0:kubectl-1.27.5-150500.1.1
[172.31.145.202] Adding versionlock on: 0:kubernetes-cni-1.2.0-150500.2.1
[172.31.145.202] Adding versionlock on: 0:cri-tools-1.26.1-1.amzn2.0.2
[172.31.145.202] versionlock added: 5
[172.31.145.202] + sudo systemctl daemon-reload
[172.31.145.202] + sudo systemctl enable --now kubelet
[172.31.145.202] Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[172.31.145.202] + sudo systemctl restart kubelet
INFO[18:18:05 CEST] Generating kubeadm config file...
INFO[18:18:05 CEST] Determining Kubernetes pause image...
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + cut -d : -f2
[172.31.145.202] + grep registry.k8s.io/pause
[172.31.145.202] + sudo kubeadm config images list --image-repository=registry.k8s.io --kubernetes-version=1.27.5
[172.31.145.202] 3.9
INFO[18:18:06 CEST] Uploading config files... node=172.31.147.179
INFO[18:18:06 CEST] Uploading config files... node=172.31.145.202
INFO[18:18:06 CEST] Uploading config files... node=172.31.146.232
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo mkdir -p /etc/systemd/system/kubelet.service.d/ /etc/kubernetes
[172.31.145.202] + sudo mv ./kubeone/cfg/cloud-config /etc/kubernetes/cloud-config
[172.31.145.202] + sudo chown root:root /etc/kubernetes/cloud-config
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo mkdir -p /etc/systemd/system/kubelet.service.d/ /etc/kubernetes
[172.31.145.202] + sudo chmod 600 /etc/kubernetes/cloud-config
[172.31.147.179] + sudo mv ./kubeone/cfg/cloud-config /etc/kubernetes/cloud-config
[172.31.147.179] + sudo chown root:root /etc/kubernetes/cloud-config
[172.31.147.179] + sudo chmod 600 /etc/kubernetes/cloud-config
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo test -f ./kubeone/cfg/audit-policy.yaml
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo test -f ./kubeone/cfg/audit-policy.yaml
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo mkdir -p /etc/systemd/system/kubelet.service.d/ /etc/kubernetes
[172.31.146.232] + sudo mv ./kubeone/cfg/cloud-config /etc/kubernetes/cloud-config
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo test -f ./kubeone/cfg/podnodeselector.yaml
[172.31.146.232] + sudo chown root:root /etc/kubernetes/cloud-config
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo test -f ./kubeone/cfg/podnodeselector.yaml
[172.31.146.232] + sudo chmod 600 /etc/kubernetes/cloud-config
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo test -f ./kubeone/cfg/encryption-providers.yaml
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo test -f ./kubeone/cfg/encryption-providers.yaml
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo test -f ./kubeone/cfg/audit-policy.yaml
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo test -f ./kubeone/cfg/podnodeselector.yaml
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo test -f ./kubeone/cfg/encryption-providers.yaml
INFO[18:18:08 CEST] Running kubeadm preflight checks...
INFO[18:18:08 CEST] preflight... node=172.31.147.179
INFO[18:18:08 CEST] preflight... node=172.31.145.202
INFO[18:18:08 CEST] preflight... node=172.31.146.232
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo kubeadm init phase preflight --ignore-preflight-errors=DirAvailable--var-lib-etcd,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-6443,Port-10259,Port-10257,Port-10250,Port-2379,Port-2380 --config=./kubeone/cfg/master_2.yaml
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo kubeadm init phase preflight --ignore-preflight-errors=DirAvailable--var-lib-etcd,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-6443,Port-10259,Port-10257,Port-10250,Port-2379,Port-2380 --config=./kubeone/cfg/master_0.yaml
[172.31.146.232] + sudo kubeadm init phase preflight --ignore-preflight-errors=DirAvailable--var-lib-etcd,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-6443,Port-10259,Port-10257,Port-10250,Port-2379,Port-2380 --config=./kubeone/cfg/master_1.yaml
[172.31.147.179] W0911 16:18:08.117230 3708 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.147.179] [preflight] Running pre-flight checks
[172.31.147.179] W0911 16:18:08.120330 3708 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.145.202] W0911 16:18:08.148161 3644 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.145.202] [preflight] Running pre-flight checks
[172.31.145.202] W0911 16:18:08.152431 3644 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.146.232] W0911 16:18:08.190300 3748 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.146.232] W0911 16:18:08.194996 3748 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.146.232] [preflight] Running pre-flight checks
[172.31.147.179] [preflight] Pulling images required for setting up a Kubernetes cluster
[172.31.147.179] [preflight] This might take a minute or two, depending on the speed of your internet connection
[172.31.147.179] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[172.31.146.232] [preflight] Pulling images required for setting up a Kubernetes cluster
[172.31.146.232] [preflight] This might take a minute or two, depending on the speed of your internet connection
[172.31.146.232] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[172.31.145.202] [preflight] Pulling images required for setting up a Kubernetes cluster
[172.31.145.202] [preflight] This might take a minute or two, depending on the speed of your internet connection
[172.31.145.202] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
INFO[18:18:29 CEST] Pre-pull images node=172.31.147.179
INFO[18:18:29 CEST] Pre-pull images node=172.31.145.202
INFO[18:18:29 CEST] Pre-pull images node=172.31.146.232
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo kubeadm config images pull --config=./kubeone/cfg/master_2.yaml
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo kubeadm config images pull --config=./kubeone/cfg/master_0.yaml
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo kubeadm config images pull --config=./kubeone/cfg/master_1.yaml
[172.31.147.179] W0911 16:18:29.271494 3892 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.145.202] W0911 16:18:29.273589 3853 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.146.232] W0911 16:18:29.276497 3926 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.147.179] W0911 16:18:29.275536 3892 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.145.202] W0911 16:18:29.277034 3853 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.146.232] W0911 16:18:29.283067 3926 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.146.232] [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.5
[172.31.145.202] [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.5
[172.31.145.202] [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.5
[172.31.146.232] [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.5
[172.31.147.179] [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.5
[172.31.145.202] [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.5
[172.31.146.232] [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.5
[172.31.147.179] [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.5
[172.31.145.202] [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.5
[172.31.147.179] [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.5
[172.31.146.232] [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.5
[172.31.147.179] [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.5
[172.31.145.202] [config/images] Pulled registry.k8s.io/pause:3.9
[172.31.147.179] [config/images] Pulled registry.k8s.io/pause:3.9
[172.31.145.202] [config/images] Pulled registry.k8s.io/etcd:3.5.7-0
[172.31.146.232] [config/images] Pulled registry.k8s.io/pause:3.9
[172.31.147.179] [config/images] Pulled registry.k8s.io/etcd:3.5.7-0
[172.31.145.202] [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1
[172.31.146.232] [config/images] Pulled registry.k8s.io/etcd:3.5.7-0
[172.31.147.179] [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1
[172.31.146.232] [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1
INFO[18:18:31 CEST] Configuring certs and etcd on control plane node...
INFO[18:18:31 CEST] Ensuring Certificates... node=172.31.145.202
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + sudo kubeadm --v=6 init phase certs all --config=./kubeone/cfg/master_0.yaml
[172.31.145.202] I0911 16:18:31.217006 3896 initconfiguration.go:255] loading configuration from "./kubeone/cfg/master_0.yaml"
[172.31.145.202] [certs] Using certificateDir folder "/etc/kubernetes/pki"
[172.31.145.202] W0911 16:18:31.220424 3896 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.145.202] W0911 16:18:31.223214 3896 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.145.202] I0911 16:18:31.226184 3896 certs.go:112] creating a new certificate authority for ca
[172.31.145.202] [certs] Generating "ca" certificate and key
[172.31.145.202] I0911 16:18:31.286786 3896 certs.go:519] validating certificate period for ca certificate
[172.31.145.202] [certs] Generating "apiserver" certificate and key
[172.31.145.202] [certs] apiserver serving cert is signed for DNS names [ip-172-31-145-202.eu-central-1.compute.internal kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.145.202]
[172.31.145.202] [certs] Generating "apiserver-kubelet-client" certificate and key
[172.31.145.202] I0911 16:18:31.686558 3896 certs.go:112] creating a new certificate authority for front-proxy-ca
[172.31.145.202] [certs] Generating "front-proxy-ca" certificate and key
[172.31.145.202] I0911 16:18:31.917329 3896 certs.go:519] validating certificate period for front-proxy-ca certificate
[172.31.145.202] [certs] Generating "front-proxy-client" certificate and key
[172.31.145.202] I0911 16:18:32.124151 3896 certs.go:112] creating a new certificate authority for etcd-ca
[172.31.145.202] I0911 16:18:32.417796 3896 certs.go:519] validating certificate period for etcd/ca certificate
[172.31.145.202] [certs] Generating "etcd/ca" certificate and key
[172.31.145.202] [certs] Generating "etcd/server" certificate and key
[172.31.145.202] [certs] etcd/server serving cert is signed for DNS names [ip-172-31-145-202.eu-central-1.compute.internal localhost] and IPs [172.31.145.202 127.0.0.1 ::1]
[172.31.145.202] [certs] Generating "etcd/peer" certificate and key
[172.31.145.202] [certs] etcd/peer serving cert is signed for DNS names [ip-172-31-145-202.eu-central-1.compute.internal localhost] and IPs [172.31.145.202 127.0.0.1 ::1]
[172.31.145.202] [certs] Generating "etcd/healthcheck-client" certificate and key
[172.31.145.202] [certs] Generating "apiserver-etcd-client" certificate and key
[172.31.145.202] I0911 16:18:33.352822 3896 certs.go:78] creating new public/private key files for signing service account users
[172.31.145.202] [certs] Generating "sa" key and public key
[172.31.145.202] + sudo find /etc/kubernetes/pki/ -name '*.crt' -exec chmod 600 '{}' ';'
INFO[18:18:33 CEST] Downloading PKI...
INFO[18:18:34 CEST] Creating local backup... node=172.31.145.202
INFO[18:18:34 CEST] Uploading PKI...
INFO[18:18:36 CEST] Configuring certs and etcd on consecutive control plane node...
INFO[18:18:36 CEST] Ensuring Certificates... node=172.31.147.179
INFO[18:18:36 CEST] Ensuring Certificates... node=172.31.146.232
[172.31.146.232] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.146.232] + sudo kubeadm --v=6 init phase certs all --config=./kubeone/cfg/master_1.yaml
[172.31.147.179] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.147.179] + sudo kubeadm --v=6 init phase certs all --config=./kubeone/cfg/master_2.yaml
[172.31.147.179] I0911 16:18:36.440026 4259 initconfiguration.go:255] loading configuration from "./kubeone/cfg/master_2.yaml"
[172.31.146.232] I0911 16:18:36.449470 4337 initconfiguration.go:255] loading configuration from "./kubeone/cfg/master_1.yaml"
[172.31.147.179] [certs] Using certificateDir folder "/etc/kubernetes/pki"
[172.31.147.179] [certs] Using existing ca certificate authority
[172.31.147.179] W0911 16:18:36.441746 4259 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.147.179] W0911 16:18:36.447139 4259 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.147.179] I0911 16:18:36.449917 4259 certs.go:519] validating certificate period for CA certificate
[172.31.147.179] I0911 16:18:36.449988 4259 certs.go:519] validating certificate period for front-proxy CA certificate
[172.31.147.179] I0911 16:18:36.450065 4259 certs.go:519] validating certificate period for ca certificate
[172.31.146.232] W0911 16:18:36.453139 4337 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.146.232] W0911 16:18:36.457111 4337 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.146.232] I0911 16:18:36.460820 4337 certs.go:519] validating certificate period for CA certificate
[172.31.146.232] I0911 16:18:36.460897 4337 certs.go:519] validating certificate period for front-proxy CA certificate
[172.31.146.232] I0911 16:18:36.460968 4337 certs.go:519] validating certificate period for ca certificate
[172.31.146.232] [certs] Using certificateDir folder "/etc/kubernetes/pki"
[172.31.146.232] [certs] Using existing ca certificate authority
[172.31.146.232] [certs] Generating "apiserver" certificate and key
[172.31.147.179] [certs] Generating "apiserver" certificate and key
[172.31.146.232] [certs] apiserver serving cert is signed for DNS names [ip-172-31-146-232.eu-central-1.compute.internal kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.146.232]
[172.31.147.179] [certs] apiserver serving cert is signed for DNS names [ip-172-31-147-179.eu-central-1.compute.internal kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.147.179]
[172.31.147.179] [certs] Generating "apiserver-kubelet-client" certificate and key
[172.31.147.179] [certs] Using existing front-proxy-ca certificate authority
[172.31.147.179] I0911 16:18:36.761829 4259 certs.go:519] validating certificate period for front-proxy-ca certificate
[172.31.147.179] [certs] Generating "front-proxy-client" certificate and key
[172.31.147.179] [certs] Using existing etcd/ca certificate authority
[172.31.147.179] I0911 16:18:36.944777 4259 certs.go:519] validating certificate period for etcd/ca certificate
[172.31.146.232] [certs] Generating "apiserver-kubelet-client" certificate and key
[172.31.146.232] [certs] Using existing front-proxy-ca certificate authority
[172.31.146.232] I0911 16:18:37.034556 4337 certs.go:519] validating certificate period for front-proxy-ca certificate
[172.31.147.179] [certs] Generating "etcd/server" certificate and key
[172.31.147.179] [certs] etcd/server serving cert is signed for DNS names [ip-172-31-147-179.eu-central-1.compute.internal localhost] and IPs [172.31.147.179 127.0.0.1 ::1]
[172.31.146.232] [certs] Generating "front-proxy-client" certificate and key
[172.31.146.232] [certs] Using existing etcd/ca certificate authority
[172.31.146.232] I0911 16:18:37.195448 4337 certs.go:519] validating certificate period for etcd/ca certificate
[172.31.146.232] [certs] Generating "etcd/server" certificate and key
[172.31.147.179] [certs] Generating "etcd/peer" certificate and key
[172.31.147.179] [certs] etcd/peer serving cert is signed for DNS names [ip-172-31-147-179.eu-central-1.compute.internal localhost] and IPs [172.31.147.179 127.0.0.1 ::1]
[172.31.146.232] [certs] etcd/server serving cert is signed for DNS names [ip-172-31-146-232.eu-central-1.compute.internal localhost] and IPs [172.31.146.232 127.0.0.1 ::1]
[172.31.146.232] [certs] Generating "etcd/peer" certificate and key
[172.31.146.232] [certs] etcd/peer serving cert is signed for DNS names [ip-172-31-146-232.eu-central-1.compute.internal localhost] and IPs [172.31.146.232 127.0.0.1 ::1]
[172.31.147.179] [certs] Generating "etcd/healthcheck-client" certificate and key
[172.31.146.232] [certs] Generating "etcd/healthcheck-client" certificate and key
[172.31.147.179] [certs] Generating "apiserver-etcd-client" certificate and key
[172.31.147.179] [certs] Using the existing "sa" key
[172.31.147.179] I0911 16:18:37.787807 4259 certs.go:78] creating new public/private key files for signing service account users
[172.31.147.179] + sudo find /etc/kubernetes/pki/ -name '*.crt' -exec chmod 600 '{}' ';'
[172.31.146.232] [certs] Generating "apiserver-etcd-client" certificate and key
[172.31.146.232] [certs] Using the existing "sa" key
[172.31.146.232] I0911 16:18:37.829849 4337 certs.go:78] creating new public/private key files for signing service account users
[172.31.146.232] + sudo find /etc/kubernetes/pki/ -name '*.crt' -exec chmod 600 '{}' ';'
INFO[18:18:37 CEST] Initializing Kubernetes on leader...
INFO[18:18:37 CEST] Running kubeadm... node=172.31.145.202
[172.31.145.202] + export PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + PATH=/usr/local/bin:/usr/bin:/sbin:/usr/local/bin:/opt/bin
[172.31.145.202] + [[ -f /etc/kubernetes/admin.conf ]]
[172.31.145.202] + sudo kubeadm --v=6 init --config=./kubeone/cfg/master_0.yaml
[172.31.145.202] I0911 16:18:37.947315 4047 initconfiguration.go:255] loading configuration from "./kubeone/cfg/master_0.yaml"
[172.31.145.202] [init] Using Kubernetes version: v1.27.5
[172.31.145.202] [preflight] Running pre-flight checks
[172.31.145.202] W0911 16:18:37.949557 4047 initconfiguration.go:332] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[172.31.145.202] W0911 16:18:37.955418 4047 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.96.0.10]; the provided value is: [169.254.20.10]
[172.31.145.202] I0911 16:18:37.958766 4047 certs.go:519] validating certificate period for CA certificate
[172.31.145.202] I0911 16:18:37.958845 4047 certs.go:519] validating certificate period for front-proxy CA certificate
[172.31.145.202] I0911 16:18:37.959010 4047 checks.go:563] validating Kubernetes and kubeadm version
[172.31.145.202] I0911 16:18:37.959036 4047 checks.go:168] validating if the firewall is enabled and active
[172.31.145.202] I0911 16:18:37.965111 4047 checks.go:203] validating availability of port 6443
[172.31.145.202] I0911 16:18:37.965256 4047 checks.go:203] validating availability of port 10259
[172.31.145.202] I0911 16:18:37.965283 4047 checks.go:203] validating availability of port 10257
[172.31.145.202] I0911 16:18:37.965322 4047 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
[172.31.145.202] I0911 16:18:37.965340 4047 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
[172.31.145.202] I0911 16:18:37.965351 4047 checks.go:280] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
[172.31.145.202] I0911 16:18:37.965359 4047 checks.go:280] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
[172.31.145.202] I0911 16:18:37.965374 4047 checks.go:430] validating if the connectivity type is via proxy or direct
[172.31.145.202] I0911 16:18:37.965559 4047 checks.go:469] validating http connectivity to first IP address in the CIDR
[172.31.145.202] I0911 16:18:37.965591 4047 checks.go:469] validating http connectivity to first IP address in the CIDR
[172.31.145.202] I0911 16:18:37.965617 4047 checks.go:104] validating the container runtime
[172.31.145.202] I0911 16:18:37.990342 4047 checks.go:639] validating whether swap is enabled or not
[172.31.145.202] I0911 16:18:37.990400 4047 checks.go:370] validating the presence of executable crictl
[172.31.145.202] I0911 16:18:37.990428 4047 checks.go:370] validating the presence of executable conntrack
[172.31.145.202] I0911 16:18:37.990440 4047 checks.go:370] validating the presence of executable ip
[172.31.145.202] I0911 16:18:37.990448 4047 checks.go:370] validating the presence of executable iptables
[172.31.145.202] I0911 16:18:37.990460 4047 checks.go:370] validating the presence of executable mount
[172.31.145.202] I0911 16:18:37.990469 4047 checks.go:370] validating the presence of executable nsenter
[172.31.145.202] I0911 16:18:37.990478 4047 checks.go:370] validating the presence of executable ebtables
[172.31.145.202] I0911 16:18:37.990489 4047 checks.go:370] validating the presence of executable ethtool
[172.31.145.202] I0911 16:18:37.990501 4047 checks.go:370] validating the presence of executable socat
[172.31.145.202] I0911 16:18:37.990513 4047 checks.go:370] validating the presence of executable tc
[172.31.145.202] I0911 16:18:37.990521 4047 checks.go:370] validating the presence of executable touch
[172.31.145.202] I0911 16:18:37.990531 4047 checks.go:516] running all checks
[172.31.145.202] I0911 16:18:37.997383 4047 checks.go:401] checking whether the given node name is valid and reachable using net.LookupHost
[172.31.145.202] I0911 16:18:37.998026 4047 checks.go:605] validating kubelet version
[172.31.145.202] I0911 16:18:38.059089 4047 checks.go:130] validating if the "kubelet" service is enabled and active
[172.31.145.202] [preflight] Pulling images required for setting up a Kubernetes cluster
[172.31.145.202] [preflight] This might take a minute or two, depending on the speed of your internet connection
[172.31.145.202] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[172.31.145.202] I0911 16:18:38.067682 4047 checks.go:203] validating availability of port 10250
[172.31.145.202] I0911 16:18:38.067902 4047 checks.go:329] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[172.31.145.202] I0911 16:18:38.067966 4047 checks.go:329] validating the contents of file /proc/sys/net/ipv4/ip_forward
[172.31.145.202] I0911 16:18:38.067996 4047 checks.go:203] validating availability of port 2379
[172.31.145.202] I0911 16:18:38.068034 4047 checks.go:203] validating availability of port 2380
[172.31.145.202] I0911 16:18:38.068075 4047 checks.go:243] validating the existence and emptiness of directory /var/lib/etcd
[172.31.145.202] I0911 16:18:38.068171 4047 checks.go:828] using image pull policy: IfNotPresent
[172.31.145.202] I0911 16:18:38.096183 4047 checks.go:846] image exists: registry.k8s.io/kube-apiserver:v1.27.5
[172.31.145.202] I0911 16:18:38.122727 4047 checks.go:846] image exists: registry.k8s.io/kube-controller-manager:v1.27.5
[172.31.145.202] I0911 16:18:38.149605 4047 checks.go:846] image exists: registry.k8s.io/kube-scheduler:v1.27.5
[172.31.145.202] I0911 16:18:38.175913 4047 checks.go:846] image exists: registry.k8s.io/kube-proxy:v1.27.5
[172.31.145.202] I0911 16:18:38.245627 4047 checks.go:846] image exists: registry.k8s.io/pause:3.9
[172.31.145.202] I0911 16:18:38.283336 4047 checks.go:846] image exists: registry.k8s.io/etcd:3.5.7-0
[172.31.145.202] [certs] Using certificateDir folder "/etc/kubernetes/pki"
[172.31.145.202] I0911 16:18:38.311952 4047 checks.go:846] image exists: registry.k8s.io/coredns/coredns:v1.10.1
[172.31.145.202] [certs] Using existing ca certificate authority
[172.31.145.202] [certs] Using existing apiserver certificate and key on disk
[172.31.145.202] [certs] Using existing apiserver-kubelet-client certificate and key on disk
[172.31.145.202] [certs] Using existing front-proxy-ca certificate authority
[172.31.145.202] I0911 16:18:38.312176 4047 certs.go:519] validating certificate period for ca certificate
[172.31.145.202] I0911 16:18:38.312717 4047 certs.go:519] validating certificate period for apiserver certificate
[172.31.145.202] I0911 16:18:38.313176 4047 certs.go:519] validating certificate period for apiserver-kubelet-client certificate
[172.31.145.202] I0911 16:18:38.313702 4047 certs.go:519] validating certificate period for front-proxy-ca certificate
[172.31.145.202] [certs] Using existing front-proxy-client certificate and key on disk
[172.31.145.202] [certs] Using existing etcd/ca certificate authority
[172.31.145.202] [certs] Using existing etcd/server certificate and key on disk
[172.31.145.202] [certs] Using existing etcd/peer certificate and key on disk
[172.31.145.202] I0911 16:18:38.314254 4047 certs.go:519] validating certificate period for front-proxy-client certificate
[172.31.145.202] I0911 16:18:38.314779 4047 certs.go:519] validating certificate period for etcd/ca certificate
[172.31.145.202] I0911 16:18:38.315263 4047 certs.go:519] validating certificate period for etcd/server certificate
[172.31.145.202] I0911 16:18:38.315820 4047 certs.go:519] validating certificate period for etcd/peer certificate
[172.31.145.202] [certs] Using existing etcd/healthcheck-client certificate and key on disk
[172.31.145.202] [certs] Using existing apiserver-etcd-client certificate and key on disk
[172.31.145.202] [certs] Using the existing "sa" key
[172.31.145.202] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[172.31.145.202] I0911 16:18:38.316354 4047 certs.go:519] validating certificate period for etcd/healthcheck-client certificate
[172.31.145.202] I0911 16:18:38.316786 4047 certs.go:519] validating certificate period for apiserver-etcd-client certificate
[172.31.145.202] I0911 16:18:38.317222 4047 certs.go:78] creating new public/private key files for signing service account users
[172.31.145.202] I0911 16:18:38.317606 4047 kubeconfig.go:103] creating kubeconfig file for admin.conf
[172.31.145.202] [kubeconfig] Writing "admin.conf" kubeconfig file
[172.31.145.202] I0911 16:18:38.559057 4047 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[172.31.145.202] [kubeconfig] Writing "kubelet.conf" kubeconfig file
[172.31.145.202] I0911 16:18:38.720820 4047 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[172.31.145.202] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
[172.31.145.202] I0911 16:18:38.905286 4047 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[172.31.145.202] [kubeconfig] Writing "scheduler.conf" kubeconfig file
[172.31.145.202] I0911 16:18:39.230296 4047 kubelet.go:67] Stopping the kubelet
[172.31.145.202] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[172.31.145.202] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[172.31.145.202] [kubelet-start] Starting the kubelet
[172.31.145.202] I0911 16:18:39.393362 4047 manifests.go:99] [control-plane] getting StaticPodSpecs
[172.31.145.202] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
[172.31.145.202] [control-plane] Creating static Pod manifest for "kube-apiserver"
[172.31.145.202] [control-plane] Creating static Pod manifest for "kube-controller-manager"
[172.31.145.202] I0911 16:18:39.393748 4047 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
[172.31.145.202] I0911 16:18:39.393766 4047 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
[172.31.145.202] I0911 16:18:39.393773 4047 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
[172.31.145.202] I0911 16:18:39.399023 4047 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[172.31.145.202] I0911 16:18:39.399052 4047 manifests.go:99] [control-plane] getting StaticPodSpecs
[172.31.145.202] I0911 16:18:39.399337 4047 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
[172.31.145.202] I0911 16:18:39.399352 4047 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
[172.31.145.202] [control-plane] Creating static Pod manifest for "kube-scheduler"
[172.31.145.202] I0911 16:18:39.399359 4047 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
[172.31.145.202] I0911 16:18:39.399366 4047 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
[172.31.145.202] I0911 16:18:39.399373 4047 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
[172.31.145.202] I0911 16:18:39.400471 4047 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[172.31.145.202] I0911 16:18:39.400495 4047 manifests.go:99] [control-plane] getting StaticPodSpecs
[172.31.145.202] I0911 16:18:39.400921 4047 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
[172.31.145.202] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[172.31.145.202] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[172.31.145.202] I0911 16:18:39.401625 4047 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[172.31.145.202] I0911 16:18:39.403352 4047 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[172.31.145.202] I0911 16:18:39.403370 4047 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[172.31.145.202] I0911 16:18:39.404191 4047 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf
[172.31.145.202] I0911 16:18:39.413207 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 7 milliseconds
[172.31.145.202] I0911 16:18:40.413399 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:40.416024 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:41.416409 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:41.419494 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 3 milliseconds
[172.31.145.202] I0911 16:18:42.419634 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:42.422050 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:43.422683 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:43.425256 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:44.425397 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:44.427563 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:45.428381 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:45.430522 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:46.431303 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:46.433889 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:47.434182 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:47.436387 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:48.436676 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 9 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:48.439685 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:49.943053 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:50.943619 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:50.949352 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 5 milliseconds
[172.31.145.202] I0911 16:18:51.950465 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:51.952681 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds
[172.31.145.202] I0911 16:18:52.953332 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s
[172.31.145.202] I0911 16:18:52.955934 4047 round_trippers.go:553] GET
https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:18:53.956473 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:53.960284 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 3 milliseconds [172.31.145.202] I0911 16:18:54.960952 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:54.963484 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:18:55.964454 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:55.967680 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 3 milliseconds [172.31.145.202] I0911 16:18:56.968407 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:56.971011 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:18:57.972215 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:57.975049 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:18:58.976088 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 9 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:18:58.978805 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:00.442841 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:01.443492 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:01.445812 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:02.446878 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:02.449214 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:03.449605 4047 with_retry.go:234] Got a Retry-After 1s 
response for attempt 3 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:03.452257 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:04.452375 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:04.454974 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:05.455401 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:05.458495 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:06.459653 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:06.464622 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 4 milliseconds [172.31.145.202] I0911 16:19:07.465245 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:07.467322 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 1 milliseconds [172.31.145.202] I0911 16:19:08.467750 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:08.469899 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:09.470348 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 9 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:09.472962 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:10.942377 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:11.942844 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:11.945307 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:12.945771 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:12.950628 4047 round_trippers.go:553] GET 
https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 4 milliseconds [172.31.145.202] I0911 16:19:13.951107 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:13.953214 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:14.953701 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:14.956797 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 3 milliseconds [172.31.145.202] I0911 16:19:15.957880 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:15.963353 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 5 milliseconds [172.31.145.202] I0911 16:19:16.963630 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:16.966448 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:17.966965 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:17.972707 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 5 milliseconds [172.31.145.202] I0911 16:19:18.973220 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:18.975917 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] [kubelet-check] Initial timeout of 40s passed. 
[172.31.145.202] I0911 16:19:19.977067 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 9 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:19.979231 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:21.444762 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 3 milliseconds [172.31.145.202] I0911 16:19:22.445229 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:22.447574 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:23.448108 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:23.450718 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:24.451236 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:24.454032 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:25.454442 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:25.457415 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:26.457992 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:26.460641 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:27.461111 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:27.464130 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:28.465119 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:28.467705 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] I0911 16:19:29.468745 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s 
[172.31.145.202] I0911 16:19:29.481987 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 13 milliseconds [172.31.145.202] I0911 16:19:30.482420 4047 with_retry.go:234] Got a Retry-After 1s response for attempt 9 to https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s [172.31.145.202] I0911 16:19:30.484595 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s in 2 milliseconds [172.31.145.202] [apiclient] All control plane components are healthy after 52.550745 seconds [172.31.145.202] I0911 16:19:31.956321 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/healthz?timeout=10s 200 OK in 15 milliseconds [172.31.145.202] I0911 16:19:31.956411 4047 uploadconfig.go:112] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap [172.31.145.202] I0911 16:19:31.965480 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 8 milliseconds [172.31.145.202] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [172.31.145.202] I0911 16:19:31.975320 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 9 milliseconds [172.31.145.202] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster [172.31.145.202] I0911 16:19:31.989446 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 13 milliseconds [172.31.145.202] I0911 16:19:31.989683 4047 uploadconfig.go:126] [upload-config] Uploading the kubelet component config to a ConfigMap [172.31.145.202] I0911 16:19:31.999329 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 8 milliseconds [172.31.145.202] I0911 16:19:32.004380 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:32.009446 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:32.009608 4047 uploadconfig.go:131] [upload-config] Preserving the CRISocket information for the control-plane node [172.31.145.202] I0911 16:19:32.009624 4047 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "ip-172-31-145-202.eu-central-1.compute.internal" as an annotation [172.31.145.202] I0911 16:19:32.513414 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 404 Not Found in 2 
milliseconds [172.31.145.202] I0911 16:19:33.015842 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 404 Not Found in 5 milliseconds [172.31.145.202] I0911 16:19:33.514336 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 404 Not Found in 3 milliseconds [172.31.145.202] I0911 16:19:34.012668 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 404 Not Found in 2 milliseconds [172.31.145.202] I0911 16:19:34.513107 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 200 OK in 3 milliseconds [172.31.145.202] I0911 16:19:34.522336 4047 round_trippers.go:553] PATCH https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 200 OK in 7 milliseconds [172.31.145.202] [upload-certs] Skipping phase. Please see --upload-certs [172.31.145.202] [mark-control-plane] Marking the node ip-172-31-145-202.eu-central-1.compute.internal as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers] [172.31.145.202] [mark-control-plane] Marking the node ip-172-31-145-202.eu-central-1.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule] [172.31.145.202] I0911 16:19:35.033404 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 200 OK in 9 milliseconds [172.31.145.202] I0911 16:19:35.056704 4047 round_trippers.go:553] PATCH https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/nodes/ip-172-31-145-202.eu-central-1.compute.internal?timeout=10s 200 OK in 19 milliseconds [172.31.145.202] I0911 16:19:35.066533 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-edcie7?timeout=10s 404 Not Found in 4 milliseconds [172.31.145.202] [bootstrap-token] Using token: edcie7.hqm495c7j47iy6h7 [172.31.145.202] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [172.31.145.202] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes [172.31.145.202] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [172.31.145.202] I0911 16:19:35.075601 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 8 milliseconds [172.31.145.202] I0911 16:19:35.080808 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:35.086153 4047 round_trippers.go:553] POST 
https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [172.31.145.202] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [172.31.145.202] I0911 16:19:35.091012 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:35.099015 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 7 milliseconds [172.31.145.202] I0911 16:19:35.103792 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:35.103989 4047 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig [172.31.145.202] I0911 16:19:35.104490 4047 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf [172.31.145.202] I0911 16:19:35.104505 4047 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig [172.31.145.202] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [172.31.145.202] I0911 16:19:35.104755 4047 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace [172.31.145.202] I0911 16:19:35.111437 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 6 milliseconds [172.31.145.202] I0911 16:19:35.111621 4047 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace [172.31.145.202] I0911 16:19:35.116659 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 4 milliseconds [172.31.145.202] I0911 16:19:35.126918 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 10 milliseconds [172.31.145.202] I0911 16:19:35.127098 4047 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem" [172.31.145.202] I0911 16:19:35.127669 4047 loader.go:373] Config loaded from file: /etc/kubernetes/kubelet.conf [172.31.145.202] I0911 16:19:35.128327 4047 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation [172.31.145.202] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key [172.31.145.202] I0911 16:19:35.307819 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 8 
milliseconds [172.31.145.202] I0911 16:19:35.317253 4047 round_trippers.go:553] GET https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 4 milliseconds [172.31.145.202] I0911 16:19:35.324896 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 7 milliseconds [172.31.145.202] I0911 16:19:35.333174 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 7 milliseconds [172.31.145.202] I0911 16:19:35.343855 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 9 milliseconds [172.31.145.202] I0911 16:19:35.354512 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 7 milliseconds [172.31.145.202] I0911 16:19:35.391996 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 35 milliseconds [172.31.145.202] I0911 16:19:35.413387 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 19 milliseconds [172.31.145.202] [addons] Applied essential addon: CoreDNS [172.31.145.202] I0911 16:19:35.421307 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 milliseconds [172.31.145.202] I0911 16:19:35.445965 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 23 milliseconds [172.31.145.202] I0911 16:19:35.469238 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 22 milliseconds [172.31.145.202] I0911 16:19:35.500623 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 31 milliseconds [172.31.145.202] I0911 16:19:35.557950 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 57 milliseconds [172.31.145.202] I0911 16:19:35.573908 4047 round_trippers.go:553] POST https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 15 milliseconds [172.31.145.202] I0911 16:19:35.583358 4047 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf [172.31.145.202] I0911 16:19:35.584016 4047 loader.go:373] Config loaded from file: /etc/kubernetes/admin.conf [172.31.145.202] + sudo find /etc/kubernetes/pki/ -name '*.crt' -exec chmod 600 '{}' ';' [172.31.145.202] 
[addons] Applied essential addon: kube-proxy
[172.31.145.202] 
[172.31.145.202] Your Kubernetes control-plane has initialized successfully!
[172.31.145.202] 
[172.31.145.202] To start using your cluster, you need to run the following as a regular user:
[172.31.145.202] 
[172.31.145.202]   mkdir -p $HOME/.kube
[172.31.145.202]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[172.31.145.202]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[172.31.145.202] 
[172.31.145.202] Alternatively, if you are the root user, you can run:
[172.31.145.202] 
[172.31.145.202]   export KUBECONFIG=/etc/kubernetes/admin.conf
[172.31.145.202] 
[172.31.145.202] You should now deploy a pod network to the cluster.
[172.31.145.202] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
[172.31.145.202]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
[172.31.145.202] 
[172.31.145.202] You can now join any number of control-plane nodes by copying certificate authorities
[172.31.145.202] and service account keys on each node and then running the following as root:
[172.31.145.202] 
[172.31.145.202]   kubeadm join kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443 --token edcie7.hqm495c7j47iy6h7 \
[172.31.145.202]     --discovery-token-ca-cert-hash sha256:ea6ec5ab51012898c32d2f8a50cdaeeba68f37d82d5e7da4ce9ac3f4df63e1db \
[172.31.145.202]     --control-plane
[172.31.145.202] 
[172.31.145.202] Then you can join any number of worker nodes by running the following on each as root:
[172.31.145.202] 
[172.31.145.202] kubeadm join kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443 --token edcie7.hqm495c7j47iy6h7 \
[172.31.145.202]     --discovery-token-ca-cert-hash sha256:ea6ec5ab51012898c32d2f8a50cdaeeba68f37d82d5e7da4ce9ac3f4df63e1db
INFO[18:19:35 CEST] Building Kubernetes clientset...             
WARN[18:19:35 CEST] Task failed, error was: kubernetes: building dynamic kubernetes client
Get "https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s": proxyconnect tcp: ssh: tunneling
connection to: 127.0.0.1:8118
ssh: rejected: connect failed (Connection refused) 
WARN[18:19:45 CEST] Retrying task...                             
INFO[18:19:45 CEST] Building Kubernetes clientset...             
WARN[18:19:46 CEST] Task failed, error was: kubernetes: building dynamic kubernetes client
Get "https://kubeone-cluster-api-lb-1169552687.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s": proxyconnect tcp: ssh: tunneling
connection to: 127.0.0.1:8118
ssh: rejected: connect failed (Connection refused)
```
xmudrii commented 1 year ago

That's a strange failure, but it might be related to your AWS configuration. The error indicates that KubeOne cannot reach the Kubernetes API server, which is required once the first control plane node has been provisioned.

ghost commented 1 year ago

OK, for some reason my local Privoxy (at 127.0.0.1:8118) got in the way.

I could reach the ELB just fine with curl, even through the proxy:

```
$ curl --insecure -v 'https://kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s'
* processing: https://kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443/api?timeout=32s
* Uses proxy env variable no_proxy == 'localhost,127.0.0.1,kane,.local,.enote.com,.enote.net,192.168.0.0/16,10.0.0/8,172.17.0.0/16,172.20.0.0/16,172.27.0.0/16'
* Uses proxy env variable https_proxy == 'http://127.0.0.1:8118/'
*   Trying 127.0.0.1:8118...
* Connected to 127.0.0.1 (127.0.0.1) port 8118
* CONNECT tunnel: HTTP/1.1 negotiated
* allocate connect buffer
* Establish HTTP proxy tunnel to kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443
> CONNECT kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443 HTTP/1.1
> Host: kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443
> User-Agent: curl/8.2.1
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Connection established
< 
* CONNECT phase completed
* CONNECT tunnel established, response 200
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Request CERT (13):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Certificate (11):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=kube-apiserver
*  start date: Sep 12 09:10:13 2023 GMT
*  expire date: Sep 11 09:15:13 2024 GMT
*  issuer: CN=kubernetes
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* using HTTP/2
* h2 [:method: GET]
* h2 [:scheme: https]
* h2 [:authority: kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443]
* h2 [:path: /api?timeout=32s]
* h2 [user-agent: curl/8.2.1]
* h2 [accept: */*]
* Using Stream ID: 1
> GET /api?timeout=32s HTTP/2
> Host: kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443
> User-Agent: curl/8.2.1
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
< HTTP/2 403 
< audit-id: f23ffaa8-b68b-420b-8be4-fca31654d575
< cache-control: no-cache, private
< content-type: application/json
< x-content-type-options: nosniff
< x-kubernetes-pf-flowschema-uid: b371b157-91a3-4374-b3b9-8d150629e32c
< x-kubernetes-pf-prioritylevel-uid: 6121b2d0-9f7a-4c9b-a951-cda81c842083
< content-length: 220
< date: Tue, 12 Sep 2023 09:21:03 GMT
< 
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/api\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
* Connection #0 to host 127.0.0.1 left intact
```

Does KubeOne try to use the HTTP proxy to access URLs through an SSH tunnel?

Get "...": proxyconnect tcp: ssh: tunneling  <<< SSH tunnel?
connection to: 127.0.0.1:8118  <<< my HTTP proxy
ssh: rejected: connect failed (Connection refused)

By the way, after unsetting the `http_proxy` and `https_proxy` env variables, it seems to be working.
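
For reference, these variables are read by the Go process itself (client-go sits on top of net/http), with `no_proxy` as the standard opt-out. Below is a minimal sketch of how Go's stock proxy resolution behaves; the ELB hostname is just the placeholder from the curl output above:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// http.ProxyFromEnvironment honours http(s)_proxy case-insensitively
	// and caches the environment on first use.
	os.Setenv("HTTPS_PROXY", "http://127.0.0.1:8118")
	// A leading dot opts out a whole domain and all of its subdomains.
	os.Setenv("NO_PROXY", ".elb.amazonaws.com")

	req, err := http.NewRequest(http.MethodGet,
		"https://kubeone-cluster-api-lb-12345678.eu-central-1.elb.amazonaws.com:6443/api", nil)
	if err != nil {
		panic(err)
	}
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(proxyURL) // prints <nil>: the request would be dialed directly
}
```

If that behavior applies here, adding the ELB domain to `no_proxy` should work as an alternative to unsetting the variables entirely.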

xmudrii commented 1 year ago

Does KubeOne try to use the HTTP proxy to access URLs through an SSH tunnel?

Yes, we use an SSH tunnel to access the API server. That's done because the API endpoint might not be reachable publicly, so we default to using an SSH tunnel via a bastion/jump host. There's no way to disable this, though. I'm not sure if you can somehow configure your proxy to allow this behavior instead.

ghost commented 1 year ago

Yes, we use an SSH tunnel to access the API server.

OK, so if I understand this correctly, the following seems to happen:

  1. SSH tunnel created to bastion host
  2. SSH tunnel passes the proxy variables (implicitly?) to the bastion for that session
  3. HTTP request executed at the bastion tries to use the HTTP proxy
  4. failure since the HTTP proxy is not available at the bastion host

If this assumption is correct, then the SSH tunnel needs to make sure that those standard variables (they are case-insensitive) are not getting passed to SSH.

That's done because the API endpoint might not be reachable publicly, [...]

Oh, how do I actually configure it to go all private? And why isn't it the default? I couldn't find anything in the docs and just saw that all the EC2 machines get public IPs.

xmudrii commented 1 year ago

If this assumption is correct, then the SSH tunnel needs to make sure that those standard variables (they are case-insensitive) are not getting passed to SSH.

I'll look into how this exactly works and get back to you.

Oh, how do I actually configure it to go all private? And why isn't it the default? I couldn't find anything in the docs and just saw that all the EC2 machines get public IPs.

There are two ways to do that:

kubermatic-bot commented 11 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

xmudrii commented 10 months ago

We still need to look into the SSH issue.

/remove-lifecycle stale

kubermatic-bot commented 5 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

xmudrii commented 5 months ago

/remove-lifecycle stale

xmudrii commented 4 months ago

We need to retitle and update the description of this issue to get https://github.com/kubermatic/kubeone/issues/2904#issuecomment-1717190442 fixed.

kron4eg commented 4 months ago

KubeOne itself doesn't support local proxying, since we already tunnel HTTP connections over SSH. Unfortunately, the net/http.Client used by the Kubernetes client library picks up your local HTTPS_PROXY and uses it. I'm not sure it's even possible to tunnel via the tunnel.
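
For illustration, here's a rough sketch of that interaction (not KubeOne's actual code; it assumes an already-established golang.org/x/crypto/ssh client): a proxy-aware transport combined with an SSH-tunneled dialer produces exactly the failure above, because the proxy lookup happens locally but the resulting proxy address is dialed through the tunnel.

```go
package main

import (
	"context"
	"net"
	"net/http"

	"golang.org/x/crypto/ssh"
)

// newTunneledTransport returns an HTTP transport whose TCP dials all go
// through an already-established SSH connection (e.g. to a bastion host).
func newTunneledTransport(sshClient *ssh.Client) *http.Transport {
	return &http.Transport{
		// Default client-go behavior: consult HTTPS_PROXY/HTTP_PROXY/NO_PROXY.
		Proxy: http.ProxyFromEnvironment,
		DialContext: func(_ context.Context, network, addr string) (net.Conn, error) {
			// With HTTPS_PROXY=http://127.0.0.1:8118 set, addr here is the
			// proxy address rather than the API server, so the connect is
			// attempted from the remote end of the tunnel, where nothing
			// listens on 8118, hence "proxyconnect tcp: ssh: ... connect failed".
			return sshClient.Dial(network, addr)
		},
	}
}

func main() {} // compile-only sketch
```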

For normal access to the kube-apiserver, please use `kubeone proxy`. It establishes a direct tunnel and opens a local HTTPS proxy that you can use in another terminal:

```
export HTTPS_PROXY=http://localhost:8080
kubectl get node
```