vmware-tanzu / community-edition

VMware Tanzu Community Edition is no longer an actively maintained project. Code is available for historical purposes only.
https://tanzucommunityedition.io/
Apache License 2.0

Docker standalone cluster recreating new worker nodes repeatedly #1500

Closed · karuppiah7890 closed this issue 3 years ago

karuppiah7890 commented 3 years ago

Bug Report

Today we tried timing the Docker standalone cluster creation using TCE v0.7.0 and noticed that the Docker standalone cluster takes more than 30 minutes to install, about 37 minutes in our run.

We also noticed that the worker node that got created was deleted and a new worker node was spun up in its place. We saw this happen twice, so instead of creating a worker node just once, the process created worker nodes three times (possibly more; we observed three). Only one worker node exists at any point in time: older nodes are deleted as newer worker nodes are created. We are not sure what the reason for this is.

We missed capturing the logs from the controllers in the kind bootstrap cluster. We will try to capture them and paste them here when trying again.
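For the next attempt, one way to capture those controller logs from the kind bootstrap cluster could be something like the following sketch. The bootstrap cluster name and kubeconfig path are illustrative placeholders based on what the tanzu CLI prints, and deployment/container names may vary by Cluster API version:

```bash
# List kind clusters; the bootstrap cluster is named tkg-kind-<random-suffix>
kind get clusters

# Dump all node and pod logs from the bootstrap cluster to a local directory
kind export logs --name tkg-kind-<suffix> ./bootstrap-logs

# Or tail the Cluster API / CAPD controller logs directly, using the bootstrap
# kubeconfig that tanzu prints (e.g. ~/.kube-tkg/tmp/config_XXXXXXXX)
export KUBECONFIG=~/.kube-tkg/tmp/config_XXXXXXXX
kubectl -n capd-system logs deployment/capd-controller-manager --all-containers
kubectl -n capi-system logs deployment/capi-controller-manager --all-containers

# Watch worker Machines being created and deleted while the cluster comes up
kubectl get machines -A -w
```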

Expected Behavior

Once created, worker nodes should ideally not be deleted and recreated, and the Docker standalone cluster should not take more than 10-15 minutes to install. In previous versions, standalone cluster creation completed within 10-15 minutes.

Steps to Reproduce the Bug

Create a Docker standalone cluster, for example with the command shown below.
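For example (the same invocation used later in this thread; the cluster name is arbitrary):

```bash
time tanzu standalone-cluster create -i docker testcluster
```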

Environment Details

karuppiah7890 commented 3 years ago

cc @ShwethaKumbla

joshrosso commented 3 years ago

We also noticed that the worker node that got created was deleted and a new worker node was spun up in its place. We saw this happen twice, so instead of creating a worker node just once, the process created worker nodes three times (possibly more; we observed three). Only one worker node exists at any point in time: older nodes are deleted as newer worker nodes are created. We are not sure what the reason for this is.

This behavior smells like an issue with the VM, perhaps resource constraints or the container runtime determining it needs to "reap" (kill) the container. If workers are being destroyed before joining the cluster correctly, I'd expect that to be the reason for the long creation time.
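A few host-side checks that could help confirm or rule that out (a sketch; the name filter assumes the Docker provider prefixes node containers with the cluster name):

```bash
# All containers for the workload cluster, including ones that have exited
docker ps -a --filter "name=testcluster"

# Container lifecycle events (die/oom) while the cluster is being created
docker events --filter "event=die" --filter "event=oom"

# Point-in-time resource usage of running containers
docker stats --no-stream

# Kernel log entries for OOM kills (on a Linux host)
dmesg | grep -iE "out of memory|oom"
```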

I'll download and run a v0.7.0 install now to do a quick sanity check.

joshrosso commented 3 years ago

Just ran a bootstrap on the v0.7.0 release and it took under 5 minutes. Details are below.

Time

4m12.352s

Logs

$ time tanzu standalone-cluster create -i docker testcluster
{            0 0 0s false false}
Downloading TKG compatibility file from 'projects-stg.registry.vmware.com/tkg/v1.4.0-zshippable/tkg-compatibility'
Downloading the TKG Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkg-bom:v1.4.0-zshippable'
Downloading the TKr Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkr-bom:v1.21.2_vmware.1-tkg.1-zshippable'

loading cluster config file at
cluster config file not provided using default config file at '/home/josh/.config/tanzu/tkg/cluster-config.yaml'
cluster config file does not exists. Creating new one at '/home/josh/.config/tanzu/tkg/cluster-config.yaml'

loaded coreprovider: cluster-api:v0.3.23, bootstrapprovider: kubeadm:v0.3.23, and cp-provider: kubeadm:v0.3.23
CEIP Opt-in status: true
timeout duration of at least 15 minutes is required, using default timeout 30m0s

Validating the pre-requisites...
Identity Provider not configured. Some authentication features won't work.

Setting up standalone cluster...
Validating configuration...
Using infrastructure provider docker:v0.3.23
Generating cluster configuration...
Setting up bootstrapper...
Fetching configuration for kind node image...
kindConfig:
 &{{Cluster kind.x-k8s.io/v1alpha4}  [{  map[] [{/var/run/docker.sock /var/run/docker.sock false false }] [] [] []}] { 0  100.96.0.0/11 100.64.0.0/13 false } map[] map[] [apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: projects.registry.vmware.com/tkg
etcd:
  local:
    imageRepository: projects.registry.vmware.com/tkg
    imageTag: v3.4.13_vmware.15
dns:
  type: CoreDNS
  imageRepository: projects.registry.vmware.com/tkg
  imageTag: v1.8.0_vmware.5] [] [] []}
Creating kind cluster: tkg-kind-c4n2vjgs9e9jnmp31hv0
Creating cluster "tkg-kind-c4n2vjgs9e9jnmp31hv0" ...
Ensuring node image (projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1) ...
Image: projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1 present locally
Preparing nodes ...
Writing configuration ...
Using the following kubeadm config for node tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:apiServer: certSANs: - localhost - 127.0.0.1 extraArgs: runtime-config: ""apiVersion: kubeadm.k8s.io/v1beta2clusterName: tkg-kind-c4n2vjgs9e9jnmp31hv0controlPlaneEndpoint: tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443controllerManager: extraArgs: enable-hostpath-provisioner: "true"dns: imageRepository: projects.registry.vmware.com/tkg imageTag: v1.8.0_vmware.5 type: CoreDNSetcd: local: imageRepository: projects.registry.vmware.com/tkg imageTag: v3.4.13_vmware.15imageRepository: projects.registry.vmware.com/tkgkind: ClusterConfigurationkubernetesVersion: v1.21.2+vmware.1networking: podSubnet: 100.96.0.0/11 serviceSubnet: 100.64.0.0/13scheduler: extraArgs: null---apiVersion: kubeadm.k8s.io/v1beta2bootstrapTokens:- token: abcdef.0123456789abcdefkind: InitConfigurationlocalAPIEndpoint: advertiseAddress: 172.20.0.2 bindPort: 6443nodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.20.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n2vjgs9e9jnmp31hv0/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane---apiVersion: kubeadm.k8s.io/v1beta2controlPlane: localAPIEndpoint: advertiseAddress: 172.20.0.2 bindPort: 6443discovery: bootstrapToken: apiServerEndpoint: tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443 token: abcdef.0123456789abcdef unsafeSkipCAVerification: truekind: JoinConfigurationnodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.20.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n2vjgs9e9jnmp31hv0/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane---apiVersion: kubelet.config.k8s.io/v1beta1cgroupDriver: cgroupfsevictionHard: imagefs.available: 0% nodefs.available: 0% nodefs.inodesFree: 0%imageGCHighThresholdPercent: 100kind: KubeletConfiguration---apiVersion: kubeproxy.config.k8s.io/v1alpha1conntrack: maxPerCore: 0iptables: minSyncPeriod: 1skind: KubeProxyConfigurationmode: iptables
Starting control-plane ...
I0831 13:34:10.930114 172 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration[init] Using Kubernetes version: v1.21.2+vmware.1[certs] Using certificateDir folder "/etc/kubernetes/pki"I0831 13:34:10.937785 172 certs.go:110] creating a new certificate authority for ca[certs] Generating "ca" certificate and keyI0831 13:34:11.040117 172 certs.go:487] validating certificate period for ca certificate[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane] and IPs [100.64.0.1 172.20.0.2 127.0.0.1][certs] Generating "apiserver-kubelet-client" certificate and keyI0831 13:34:11.321232 172 certs.go:110] creating a new certificate authority for front-proxy-ca[certs] Generating "front-proxy-ca" certificate and keyI0831 13:34:11.424856 172 certs.go:487] validating certificate period for front-proxy-ca certificate[certs] Generating "front-proxy-client" certificate and keyI0831 13:34:11.796489 172 certs.go:110] creating a new certificate authority for etcd-ca[certs] Generating "etcd/ca" certificate and keyI0831 13:34:12.124991 172 certs.go:487] validating certificate period for etcd/ca certificate[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [localhost tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane] and IPs [172.20.0.2 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [localhost tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane] and IPs [172.20.0.2 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and keyI0831 13:34:12.563564 172 certs.go:76] creating new public/private key files for signing service account users[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"I0831 13:34:12.648373 172 kubeconfig.go:101] creating kubeconfig file for admin.conf[kubeconfig] Writing "admin.conf" kubeconfig fileI0831 13:34:12.728568 172 kubeconfig.go:101] creating kubeconfig file for kubelet.conf[kubeconfig] Writing "kubelet.conf" kubeconfig fileI0831 13:34:12.850294 172 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf[kubeconfig] Writing "controller-manager.conf" kubeconfig fileI0831 13:34:13.168925 172 kubeconfig.go:101] creating kubeconfig file for scheduler.conf[kubeconfig] Writing "scheduler.conf" kubeconfig fileI0831 13:34:13.282906 172 kubelet.go:63] Stopping the kubelet[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"I0831 13:34:13.337783 172 manifests.go:96] [control-plane] getting StaticPodSpecsI0831 13:34:13.338069 172 certs.go:487] validating certificate period for CA certificateI0831 13:34:13.338126 172 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"I0831 13:34:13.338133 172 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"I0831 
13:34:13.338138 172 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"I0831 13:34:13.338141 172 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"I0831 13:34:13.338143 172 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"I0831 13:34:13.342268 172 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"I0831 13:34:13.342284 172 manifests.go:96] [control-plane] getting StaticPodSpecs[control-plane] Creating static Pod manifest for "kube-controller-manager"I0831 13:34:13.342474 172 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"I0831 13:34:13.342481 172 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"I0831 13:34:13.342484 172 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"I0831 13:34:13.342486 172 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"I0831 13:34:13.342489 172 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"I0831 13:34:13.342491 172 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"I0831 13:34:13.342494 172 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"I0831 13:34:13.342964 172 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"I0831 13:34:13.342973 172 manifests.go:96] [control-plane] getting StaticPodSpecs[control-plane] Creating static Pod manifest for "kube-scheduler"I0831 13:34:13.343141 172 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"I0831 13:34:13.343442 172 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"I0831 13:34:13.344090 172 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"I0831 13:34:13.344168 172 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthyI0831 13:34:13.345289 172 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0sI0831 13:34:13.346632 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:13.847823 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:14.348001 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:14.848196 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:15.349070 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 2 millisecondsI0831 13:34:15.849821 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:16.349566 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:16.849286 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:17.348714 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:17.847336 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:18.347577 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:18.849240 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:19.348845 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:19.849660 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:20.349124 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:20.847920 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:21.385058 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:34:21.848248 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 13:34:25.480361 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3133 millisecondsI0831 13:34:25.848579 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 0 millisecondsI0831 13:34:26.351277 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3 millisecondsI0831 13:34:26.848167 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds[apiclient] All control plane components are healthy after 14.005630 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" NamespaceI0831 
13:34:27.351523 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/healthz?timeout=10s 200 OK in 4 millisecondsI0831 13:34:27.351671 172 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMapI0831 13:34:27.361879 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 6 millisecondsI0831 13:34:27.367472 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 4 millisecondsI0831 13:34:27.375754 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 6 millisecondsI0831 13:34:27.377296 172 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the clusterI0831 13:34:27.383730 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 millisecondsI0831 13:34:27.389457 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 5 millisecondsI0831 13:34:27.394690 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 4 millisecondsI0831 13:34:27.394862 172 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane nodeI0831 13:34:27.394886 172 patchnode.go:30] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane" as an annotationI0831 13:34:27.902155 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/nodes/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane?timeout=10s 200 OK in 6 millisecondsI0831 13:34:27.918101 172 round_trippers.go:454] PATCH https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/nodes/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane?timeout=10s 200 OK in 9 milliseconds[upload-certs] Skipping phase. 
Please see --upload-certs[mark-control-plane] Marking the node tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers][mark-control-plane] Marking the node tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]I0831 13:34:28.423874 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/nodes/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane?timeout=10s 200 OK in 4 millisecondsI0831 13:34:28.437229 172 round_trippers.go:454] PATCH https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/nodes/tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane?timeout=10s 200 OK in 10 milliseconds[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC RolesI0831 13:34:28.441947 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef?timeout=10s 404 Not Found in 3 millisecondsI0831 13:34:28.450358 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 6 milliseconds[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodesI0831 13:34:28.461339 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 9 millisecondsI0831 13:34:28.466133 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 3 milliseconds[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentialsI0831 13:34:28.469638 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap TokenI0831 13:34:28.472672 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the clusterI0831 13:34:28.475711 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespaceI0831 13:34:28.475860 172 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfigI0831 13:34:28.476544 172 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confI0831 13:34:28.476565 172 clusterinfo.go:56] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfigI0831 13:34:28.477129 172 clusterinfo.go:68] [bootstrap-token] creating/updating ConfigMap in kube-public namespaceI0831 13:34:28.478884 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 1 millisecondsI0831 
13:34:28.478968 172 clusterinfo.go:82] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespaceI0831 13:34:28.480343 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.481512 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.481601 172 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and keyI0831 13:34:28.481896 172 loader.go:372] Config loaded from file: /etc/kubernetes/kubelet.confI0831 13:34:28.482168 172 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotationI0831 13:34:28.597004 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 3 millisecondsI0831 13:34:28.601075 172 round_trippers.go:454] GET https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 millisecondsI0831 13:34:28.602359 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.603902 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.605396 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.607713 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.622443 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 8 millisecondsI0831 13:34:28.628250 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 4 milliseconds[addons] Applied essential addon: CoreDNSI0831 13:34:28.629870 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.632464 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.642463 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 5 millisecondsI0831 13:34:28.644235 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 millisecondsI0831 
13:34:28.654203 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 1 millisecondsI0831 13:34:28.852951 172 request.go:600] Waited for 198.490451ms due to client-side throttling, not priority and fairness, request: POST:https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10sI0831 13:34:28.859422 172 round_trippers.go:454] POST https://tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 6 milliseconds[addons] Applied essential addon: kube-proxyI0831 13:34:28.861219 172 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confI0831 13:34:28.862783 172 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confYour Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.confYou should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/You can now join any number of control-plane nodes by copying certificate authoritiesand service account keys on each node and then running the following as root: kubeadm join tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443 --token <value withheld> \--discovery-token-ca-cert-hash sha256:ee9b31079ed7e4a6a48e741af21e7f3ba55121a268dba9d83011cffef6617571 \--control-plane Then you can join any number of worker nodes by running the following on each as root:kubeadm join tkg-kind-c4n2vjgs9e9jnmp31hv0-control-plane:6443 --token <value withheld> \--discovery-token-ca-cert-hash sha256:ee9b31079ed7e4a6a48e741af21e7f3ba55121a268dba9d83011cffef6617571
Installing CNI ...
Installing StorageClass ...
Waiting 2m0s for control-plane = Ready ...
Ready after 25s
Bootstrapper created. Kubeconfig: /home/josh/.kube-tkg/tmp/config_mPCOuflA
Installing providers on bootstrapper...
installed  Component=="cluster-api"  Type=="CoreProvider"  Version=="v0.3.23"
installed  Component=="kubeadm"  Type=="BootstrapProvider"  Version=="v0.3.23"
installed  Component=="kubeadm"  Type=="ControlPlaneProvider"  Version=="v0.3.23"
installed  Component=="docker"  Type=="InfrastructureProvider"  Version=="v0.3.23"
Waiting for provider cluster-api
Waiting for provider bootstrap-kubeadm
Waiting for provider control-plane-kubeadm
Waiting for provider infrastructure-docker
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
Waiting for resource capd-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-webhook-system', retrying
pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
Passed waiting on provider control-plane-kubeadm after 15.057796044s
pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider bootstrap-kubeadm after 15.110270291s
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
Passed waiting on provider infrastructure-docker after 20.072493728s
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider cluster-api after 20.143868941s
Success waiting on all providers.
Start creating standalone cluster...
patch cluster object with operation status:
        {
                "metadata": {
                        "annotations": {
                                "TKGOperationInfo" : "{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2021-08-31 13:35:38.889844491 +0000 UTC\",\"OperationTimeout\":1800}",
                                "TKGOperationLastObservedTimestamp" : "2021-08-31 13:35:38.889844491 +0000 UTC"
                        }
                }
        }
regionContext:
{testcluster kind-tkg-kind-c4n2vjgs9e9jnmp31hv0 /home/josh/.kube-tkg/tmp/config_mPCOuflA Failed false}cluster state is unchanged 1
[cluster control plane is still being initialized, cluster infrastructure is still being provisioned], retrying
cluster control plane is still being initialized, retrying
cluster state is unchanged 1
cluster control plane is still being initialized, retrying
Getting secret for cluster
Waiting for resource testcluster-kubeconfig of type *v1.Secret to be up and running
Saving standalone cluster kubeconfig into /home/josh/.kube/config
Waiting for bootstrap cluster to get ready for save ...
Waiting for resource testcluster of type *v1alpha3.Cluster to be up and running
Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
worker nodes are still being created for MachineDeployment 'testcluster-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
Waiting for resources type *v1alpha3.MachineList to be up and running
Waiting for addons installation...
Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running
Waiting for resource antrea-controller of type *v1.Deployment to be up and running
pods are not yet running for deployment 'antrea-controller' in namespace 'kube-system', retrying
Moving all Cluster API objects from bootstrap cluster to standalone cluster...
Context set for standalone cluster testcluster as 'testcluster-admin@testcluster'.
Deleting kind cluster: tkg-kind-c4n2vjgs9e9jnmp31hv0

Standalone cluster created!

real    4m12.352s
user    0m12.427s
sys     0m3.778s
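For reference, after a successful run the resulting standalone cluster can be sanity-checked with something like the following (names match the run above):

```bash
# Node containers created by the Docker (CAPD) provider
docker ps --filter "name=testcluster"

# Cluster nodes, via the context the CLI set
kubectl --context testcluster-admin@testcluster get nodes -o wide
```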

Specs

kernel

uname -a
Linux tugboat 5.13.12-arch1-1 #1 SMP PREEMPT Wed, 18 Aug 2021 20:49:03 +0000 x86_64 GNU/Linux

cpu

$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  8
  On-line CPU(s) list:   0-7
Vendor ID:               GenuineIntel
  Model name:            Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
    CPU family:          6
    Model:               60
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    Stepping:            3
    CPU max MHz:         4400.0000
    CPU min MHz:         800.0000
    BogoMIPS:            8003.08
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
                          pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 mo
                         nitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16
                         c rdrand lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc
                         _adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
Virtualization features:
  Virtualization:        VT-x
Caches (sum of all):
  L1d:                   128 KiB (4 instances)
  L1i:                   128 KiB (4 instances)
  L2:                    1 MiB (4 instances)
  L3:                    8 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-7
Vulnerabilities:
  Itlb multihit:         KVM: Mitigation: VMX disabled
  L1tf:                  Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
  Mds:                   Mitigation; Clear CPU buffers; SMT vulnerable
  Meltdown:              Mitigation; PTI
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
  Srbds:                 Mitigation; Microcode
  Tsx async abort:       Not affected

memory

josh @ tugboat (~) []
$ cat /proc/meminfo
MemTotal:       32734808 kB
MemFree:         6629736 kB
MemAvailable:   25846300 kB
Buffers:         1309536 kB
Cached:         21447600 kB

docker info

$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)

Server:
 Containers: 3
  Running: 3
  Paused: 0
  Stopped: 0
 Images: 23
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0.m
 runc version: v1.0.2-0-g52b36a2d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.13.12-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.22GiB
 Name: tugboat
 ID: TZ5I:5QJ6:4PHO:YAQ3:NMPY:52A3:J7NN:JUNV:WZPR:AX5O:S2BM:DGN2
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Username: joshrosso
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: API is accessible on http://0.0.0.0:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/go/attack-surface/
karuppiah7890 commented 3 years ago

I'll post the specs and tanzu standalone-cluster create logs here.
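(A minimal set of commands for gathering those environment details, mirroring the comment above. Note that on Docker Desktop the CPUs and memory reported by docker info are those of the Docker VM rather than the host, which is the number relevant to the resource-constraint theory.)

```bash
uname -a       # host kernel / OS
docker info    # engine version plus CPUs and memory available to Docker
```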

Logs

Full Logs (too big. click to expand) ```bash tce $ make tce-docker-standalone-cluster-e2e-test test/docker/run-tce-docker-standalone-cluster.sh +++ dirname test/docker/run-tce-docker-standalone-cluster.sh ++ cd test/docker ++ pwd + MY_DIR=/Users/karuppiahn/projects/github.com/vmware-tanzu/tce/test/docker + guest_cluster_name=guest-cluster-13202 + CLUSTER_PLAN=dev + CLUSTER_NAME=guest-cluster-13202 + tanzu standalone-cluster create guest-cluster-13202 -i docker -v 10 { 0 0 0s false false} Downloading TKG compatibility file from 'projects-stg.registry.vmware.com/tkg/v1.4.0-zshippable/tkg-compatibility' Downloading the TKG Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkg-bom:v1.4.0-zshippable' BOM file "/Users/karuppiahn/.config/tanzu/tkg/bom/tkg-bom-v1.4.0-zshippable.yaml" already exist, so skipped saving the downloaded BOM file Downloading the TKr Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkr-bom:v1.21.2_vmware.1-tkg.1-zshippable' BOM file "/Users/karuppiahn/.config/tanzu/tkg/bom/tkr-bom-v1.21.2+vmware.1-tkg.1-zshippable.yaml" already exist, so skipped saving the downloaded BOM file loading cluster config file at cluster config file not provided using default config file at '/Users/karuppiahn/.config/tanzu/tkg/cluster-config.yaml' loaded coreprovider: cluster-api:v0.3.22, bootstrapprovider: kubeadm:v0.3.22, and cp-provider: kubeadm:v0.3.22 CEIP Opt-in status: true timeout duration of at least 15 minutes is required, using default timeout 30m0s Validating the pre-requisites... Identity Provider not configured. Some authentication features won't work. Setting up standalone cluster... Validating configuration... Using infrastructure provider docker:v0.3.22 Generating cluster configuration... Setting up bootstrapper... Fetching configuration for kind node image... kindConfig: &{{Cluster kind.x-k8s.io/v1alpha4} [{ map[] [{/var/run/docker.sock /var/run/docker.sock false false }] [] [] []}] { 0 100.96.0.0/11 100.64.0.0/13 false } map[] map[] [apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration imageRepository: projects.registry.vmware.com/tkg etcd: local: imageRepository: projects.registry.vmware.com/tkg imageTag: v3.4.13_vmware.15 dns: type: CoreDNS imageRepository: projects.registry.vmware.com/tkg imageTag: v1.8.0_vmware.5] [] [] []} Creating kind cluster: tkg-kind-c4n220d94813ujosgmkg Creating cluster "tkg-kind-c4n220d94813ujosgmkg" ... Ensuring node image (projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1) ... Image: projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1 present locally Preparing nodes ... Writing configuration ... 
Using the following kubeadm config for node tkg-kind-c4n220d94813ujosgmkg-control-plane:apiServer: certSANs: - localhost - 127.0.0.1 extraArgs: runtime-config: ""apiVersion: kubeadm.k8s.io/v1beta2clusterName: tkg-kind-c4n220d94813ujosgmkgcontrolPlaneEndpoint: tkg-kind-c4n220d94813ujosgmkg-control-plane:6443controllerManager: extraArgs: enable-hostpath-provisioner: "true"dns: imageRepository: projects.registry.vmware.com/tkg imageTag: v1.8.0_vmware.5 type: CoreDNSetcd: local: imageRepository: projects.registry.vmware.com/tkg imageTag: v3.4.13_vmware.15imageRepository: projects.registry.vmware.com/tkgkind: ClusterConfigurationkubernetesVersion: v1.21.2+vmware.1networking: podSubnet: 100.96.0.0/11 serviceSubnet: 100.64.0.0/13scheduler: extraArgs: null---apiVersion: kubeadm.k8s.io/v1beta2bootstrapTokens:- token: abcdef.0123456789abcdefkind: InitConfigurationlocalAPIEndpoint: advertiseAddress: 172.18.0.2 bindPort: 6443nodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.18.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n220d94813ujosgmkg/tkg-kind-c4n220d94813ujosgmkg-control-plane---apiVersion: kubeadm.k8s.io/v1beta2controlPlane: localAPIEndpoint: advertiseAddress: 172.18.0.2 bindPort: 6443discovery: bootstrapToken: apiServerEndpoint: tkg-kind-c4n220d94813ujosgmkg-control-plane:6443 token: abcdef.0123456789abcdef unsafeSkipCAVerification: truekind: JoinConfigurationnodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.18.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n220d94813ujosgmkg/tkg-kind-c4n220d94813ujosgmkg-control-plane---apiVersion: kubelet.config.k8s.io/v1beta1cgroupDriver: cgroupfsevictionHard: imagefs.available: 0% nodefs.available: 0% nodefs.inodesFree: 0%imageGCHighThresholdPercent: 100kind: KubeletConfiguration---apiVersion: kubeproxy.config.k8s.io/v1alpha1conntrack: maxPerCore: 0iptables: minSyncPeriod: 1skind: KubeProxyConfigurationmode: iptables Starting control-plane ... 
I0831 12:31:04.991884 195 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfiguration[init] Using Kubernetes version: v1.21.2+vmware.1[certs] Using certificateDir folder "/etc/kubernetes/pki"I0831 12:31:05.004374 195 certs.go:110] creating a new certificate authority for ca[certs] Generating "ca" certificate and keyI0831 12:31:05.240703 195 certs.go:487] validating certificate period for ca certificate[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tkg-kind-c4n220d94813ujosgmkg-control-plane] and IPs [100.64.0.1 172.18.0.2 127.0.0.1][certs] Generating "apiserver-kubelet-client" certificate and keyI0831 12:31:05.648491 195 certs.go:110] creating a new certificate authority for front-proxy-ca[certs] Generating "front-proxy-ca" certificate and keyI0831 12:31:06.047277 195 certs.go:487] validating certificate period for front-proxy-ca certificate[certs] Generating "front-proxy-client" certificate and keyI0831 12:31:06.153074 195 certs.go:110] creating a new certificate authority for etcd-ca[certs] Generating "etcd/ca" certificate and keyI0831 12:31:06.438061 195 certs.go:487] validating certificate period for etcd/ca certificate[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [localhost tkg-kind-c4n220d94813ujosgmkg-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [localhost tkg-kind-c4n220d94813ujosgmkg-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and keyI0831 12:31:07.810136 195 certs.go:76] creating new public/private key files for signing service account users[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"I0831 12:31:08.027726 195 kubeconfig.go:101] creating kubeconfig file for admin.conf[kubeconfig] Writing "admin.conf" kubeconfig fileI0831 12:31:08.211167 195 kubeconfig.go:101] creating kubeconfig file for kubelet.conf[kubeconfig] Writing "kubelet.conf" kubeconfig fileI0831 12:31:08.384876 195 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf[kubeconfig] Writing "controller-manager.conf" kubeconfig fileI0831 12:31:09.099414 195 kubeconfig.go:101] creating kubeconfig file for scheduler.conf[kubeconfig] Writing "scheduler.conf" kubeconfig fileI0831 12:31:09.164179 195 kubelet.go:63] Stopping the kubelet[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"I0831 12:31:09.247527 195 manifests.go:96] [control-plane] getting StaticPodSpecsI0831 12:31:09.248315 195 certs.go:487] validating certificate period for CA certificateI0831 12:31:09.248417 195 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"I0831 12:31:09.248450 195 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"I0831 
12:31:09.248456 195 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"I0831 12:31:09.248459 195 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"I0831 12:31:09.248462 195 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"I0831 12:31:09.255840 195 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"I0831 12:31:09.255902 195 manifests.go:96] [control-plane] getting StaticPodSpecs[control-plane] Creating static Pod manifest for "kube-controller-manager"I0831 12:31:09.256250 195 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"I0831 12:31:09.256302 195 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"I0831 12:31:09.256308 195 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"I0831 12:31:09.256311 195 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"I0831 12:31:09.256314 195 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"I0831 12:31:09.256317 195 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"I0831 12:31:09.256320 195 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"[control-plane] Creating static Pod manifest for "kube-scheduler"I0831 12:31:09.257144 195 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"I0831 12:31:09.257182 195 manifests.go:96] [control-plane] getting StaticPodSpecsI0831 12:31:09.257619 195 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"I0831 12:31:09.258297 195 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"I0831 12:31:09.259151 195 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"I0831 12:31:09.259205 195 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthyI0831 12:31:09.260033 195 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0sI0831 12:31:09.263127 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:09.767059 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:10.264699 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:10.766051 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:11.267406 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:11.765864 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:12.267099 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:12.769946 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:13.269743 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:13.768528 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:14.265004 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:14.769025 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:15.265391 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:15.765080 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 0 millisecondsI0831 12:31:16.265239 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:16.765891 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:17.265617 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 12:31:20.805399 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3041 millisecondsI0831 12:31:21.265792 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 millisecondsI0831 12:31:21.749016 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 millisecondsI0831 12:31:22.248838 195 round_trippers.go:454] GET https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/healthz?timeout=10s 200 OK in 1 millisecondsI0831 12:31:22.248954 195 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap[apiclient] All control plane components are healthy after 13.004147 seconds[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" NamespaceI0831 12:31:22.253346 195 round_trippers.go:454] 
POST https://tkg-kind-c4n220d94813ujosgmkg-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
...
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node tkg-kind-c4n220d94813ujosgmkg-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node tkg-kind-c4n220d94813ujosgmkg-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join tkg-kind-c4n220d94813ujosgmkg-control-plane:6443 --token \
	--discovery-token-ca-cert-hash sha256:9833d1ba7c828399867e62ddd5e5eb08e7aef902a13d1fcb5abb26908e0fbb9c \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join tkg-kind-c4n220d94813ujosgmkg-control-plane:6443 --token \
	--discovery-token-ca-cert-hash sha256:9833d1ba7c828399867e62ddd5e5eb08e7aef902a13d1fcb5abb26908e0fbb9c

Installing CNI ...
Installing StorageClass ...
Waiting 2m0s for control-plane = Ready ...
Ready after 17s
Bootstrapper created. Kubeconfig: /Users/karuppiahn/.kube-tkg/tmp/config_6jk9z9hz
Installing providers on bootstrapper...
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.22"
installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.22"
installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.22"
installed Component=="docker" Type=="InfrastructureProvider" Version=="v0.3.22"
Waiting for provider cluster-api
Waiting for provider infrastructure-docker
Waiting for provider control-plane-kubeadm
Waiting for provider bootstrap-kubeadm
Waiting for resource capd-controller-manager of type *v1.Deployment to be up and running
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
...
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-webhook-system', retrying
...
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider bootstrap-kubeadm after 2m5.111336157s
...
Passed waiting on provider cluster-api after 2m15.137175625s
...
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider control-plane-kubeadm after 2m50.057571136s
...
Passed waiting on provider infrastructure-docker after 3m40.046845352s
Success waiting on all providers.
Start creating standalone cluster...
patch cluster object with operation status:
	{
		"metadata": {
			"annotations": {
				"TKGOperationInfo" : "{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2021-08-31 12:38:14.448382 +0000 UTC\",\"OperationTimeout\":1800}",
				"TKGOperationLastObservedTimestamp" : "2021-08-31 12:38:14.448382 +0000 UTC"
			}
		}
	}
regionContext: {guest-cluster-13202 kind-tkg-kind-c4n220d94813ujosgmkg /Users/karuppiahn/.kube-tkg/tmp/config_6jk9z9hz Failed false}
cluster state is unchanged 1
[cluster control plane is still being initialized, cluster infrastructure is still being provisioned], retrying
cluster control plane is still being initialized, retrying
cluster state is unchanged 1
cluster control plane is still being initialized, retrying
Getting secret for cluster
Waiting for resource guest-cluster-13202-kubeconfig of type *v1.Secret to be up and running
Mac and CAPD environment detected, fixing Kubeconfig
Saving standalone cluster kubeconfig into /Users/karuppiahn/.kube/config
Waiting for bootstrap cluster to get ready for save ...
Waiting for resource guest-cluster-13202 of type *v1alpha3.Cluster to be up and running
Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running
worker nodes are still being created for MachineDeployment 'guest-cluster-13202-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
...
Waiting for resources type *v1alpha3.MachineList to be up and running
Waiting for addons installation...
Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running
Waiting for resource antrea-controller of type *v1.Deployment to be up and running
Moving all Cluster API objects from bootstrap cluster to standalone cluster...
Context set for standalone cluster guest-cluster-13202 as 'guest-cluster-13202-admin@guest-cluster-13202'.
Deleting kind cluster: tkg-kind-c4n220d94813ujosgmkg

Standalone cluster created!

real	37m39.089s
user	0m25.193s
sys	0m11.577s
+ /Users/karuppiahn/projects/github.com/vmware-tanzu/tce/test/docker/check-tce-cluster-creation.sh guest-cluster-13202-admin@guest-cluster-13202
+ kube_context=guest-cluster-13202-admin@guest-cluster-13202
+ '[' -z guest-cluster-13202-admin@guest-cluster-13202 ']'
+ kubectl config use-context guest-cluster-13202-admin@guest-cluster-13202
Switched to context "guest-cluster-13202-admin@guest-cluster-13202".
+ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:42407
CoreDNS is running at https://127.0.0.1:42407/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
+ kubectl get nodes
NAME                                        STATUS   ROLES                  AGE     VERSION
guest-cluster-13202-control-plane-qbplg     Ready    control-plane,master   29m     v1.21.2+vmware.1-360497810732255795
guest-cluster-13202-md-0-6c9cddcbff-xjmwb   Ready    <none>                 4m28s   v1.21.2+vmware.1-360497810732255795
+ kubectl get pods -A
NAMESPACE     NAME                                                               READY   STATUS              RESTARTS   AGE
kube-system   antrea-agent-8p6xk                                                 1/2     Running             0          4m28s
kube-system   antrea-agent-zcfkm                                                 2/2     Running             0          27m
kube-system   antrea-controller-58cdb9dc6d-76qdf                                 1/1     Running             0          27m
kube-system   coredns-8dcb5c56b-pfm8b                                            1/1     Running             0          29m
kube-system   coredns-8dcb5c56b-v2gl2                                            1/1     Running             0          29m
kube-system   etcd-guest-cluster-13202-control-plane-qbplg                       1/1     Running             0          29m
kube-system   kube-apiserver-guest-cluster-13202-control-plane-qbplg             1/1     Running             0          29m
kube-system   kube-controller-manager-guest-cluster-13202-control-plane-qbplg    1/1     Running             0          29m
kube-system   kube-proxy-ftbp9                                                   1/1     Running             0          4m28s
kube-system   kube-proxy-gmkwq                                                   1/1     Running             0          29m
kube-system   kube-scheduler-guest-cluster-13202-control-plane-qbplg             1/1     Running             0          29m
tkg-system    kapp-controller-699959678f-vtls7                                   0/1     ContainerCreating   0          4m46s
tkg-system    tanzu-addons-controller-manager-65ddd5cc5d-vrgpx                   0/1     Running             4          18m
tkg-system    tanzu-capabilities-controller-manager-547cfb7b99-dhq4n             1/1     Running             0          29m
tkr-system    tkr-controller-manager-759d6c7d6b-67skk                            0/1     Pending             0          28m
```
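
If this happens again, a rough sketch of how the Cluster API controller logs could be pulled from the kind bootstrap cluster before `tanzu` deletes it (the temporary kubeconfig path and kind cluster name below are just the values printed in the run above; they are generated per run, so substitute your own, and the output file names are arbitrary):

```
# Values printed in the run above; both change on every run
BOOTSTRAP_KUBECONFIG=/Users/karuppiahn/.kube-tkg/tmp/config_6jk9z9hz
KIND_CLUSTER=tkg-kind-c4n220d94813ujosgmkg

# (in a separate terminal) watch Machine objects to catch workers being deleted/recreated
kubectl --kubeconfig "$BOOTSTRAP_KUBECONFIG" get machines -A -w

# Save the logs of the four provider controllers installed on the bootstrapper
for nsdep in capi-system/capi-controller-manager \
             capi-kubeadm-bootstrap-system/capi-kubeadm-bootstrap-controller-manager \
             capi-kubeadm-control-plane-system/capi-kubeadm-control-plane-controller-manager \
             capd-system/capd-controller-manager; do
  kubectl --kubeconfig "$BOOTSTRAP_KUBECONFIG" -n "${nsdep%/*}" \
    logs "deployment/${nsdep#*/}" --all-containers > "${nsdep#*/}.log"
done

# Or dump everything from the kind bootstrap cluster in one shot
kind export logs --name "$KIND_CLUSTER" ./bootstrap-logs
```

The Machine names in the watch output should make it clearer whether CAPD is replacing the worker Machine or the same Machine is simply failing to join.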

Specs

Kernel

/ # uname -a
Linux docker-desktop 5.10.47-linuxkit #1 SMP Sat Jul 3 21:51:47 UTC 2021 x86_64 Linux
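
The / # prompt above is a shell inside the Docker Desktop Linux VM (hostname docker-desktop). For anyone reproducing this, one commonly used way to get such a shell is a privileged container that enters the VM's PID 1 namespaces; justincormack/nsenter1 is a third-party helper image, not part of TCE:

```
# Drops you into a root shell inside the Docker Desktop VM
docker run -it --rm --privileged --pid=host justincormack/nsenter1
```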

Memory

/ # cat /proc/meminfo | head
MemTotal:       10202644 kB
MemFree:         7551628 kB
MemAvailable:    9078736 kB
Buffers:          501260 kB
Cached:          1604624 kB
SwapCached:          844 kB
Active:          1618496 kB
Inactive:         822476 kB
Active(anon):      75436 kB
Inactive(anon):   642136 kB

CPU

/ # cat /proc/cpuinfo | grep processor
processor       : 0
processor       : 1
processor       : 2
processor       : 3
processor       : 4
processor       : 5
processor       : 6
processor       : 7

/ # cat /proc/cpuinfo | head -n 27
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 158
model name      : Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
stepping        : 13
cpu MHz         : 2400.000
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht pbe syscall nx pdpe1gb lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq dtes64 ds_cpl ssse3 sdbg fma cx16 xtpr pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 avx2 bmi2 erms xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips        : 4800.00
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:

processor       : 1

Docker Info

$ docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.1)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 17
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.47-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 9.73GiB
 Name: docker-desktop
 ID: KXRN:XF6I:DAJI:RZQP:ZQWZ:MA5E:OWVA:AY4P:PQI7:2UUX:PVBU:LSUL
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
karuppiah7890 commented 3 years ago

Attaching a screenshot of my Docker Desktop GUI as an alternative way of showing the specs

Screenshot 2021-08-31 at 7 33 23 PM

Let me know if this is not enough. I see the memory spec above using ~32 GB. I usually allocate less than my full system spec, assuming that's enough, and also based on the docs

jpmcb commented 3 years ago

Another data point: I just ran a test similar to Josh's on my MacBook, using the v0.7.0 release:

Note that macOS doesn't have many of the Linux utilities (like `free`, `lscpu`, etc.)

Also of note, I'm on a fiber internet connection, so the time for image pulls is negligible

Time

7:34.83s

Logs

Expand for logs

``` ❯ time tanzu standalone-cluster create johns-test -i docker { 0 0 0s false false} Downloading TKG compatibility file from 'projects-stg.registry.vmware.com/tkg/v1.4.0-zshippable/tkg-compatibility' Downloading the TKG Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkg-bom:v1.4.0-zshippable' BOM file "/Users/jmcbride/.config/tanzu/tkg/bom/tkg-bom-v1.4.0-zshippable.yaml" already exist, so skipped saving the downloaded BOM file Downloading the TKr Bill of Materials (BOM) file from 'projects-stg.registry.vmware.com/tkg/tkr-bom:v1.21.2_vmware.1-tkg.1-zshippable' BOM file "/Users/jmcbride/.config/tanzu/tkg/bom/tkr-bom-v1.21.2+vmware.1-tkg.1-zshippable.yaml" already exist, so skipped saving the downloaded BOM file loading cluster config file at cluster config file not provided using default config file at '/Users/jmcbride/.config/tanzu/tkg/cluster-config.yaml' loaded coreprovider: cluster-api:v0.3.23, bootstrapprovider: kubeadm:v0.3.23, and cp-provider: kubeadm:v0.3.23 CEIP Opt-in status: true timeout duration of at least 15 minutes is required, using default timeout 30m0s Validating the pre-requisites... Identity Provider not configured. Some authentication features won't work. Setting up standalone cluster... Validating configuration... Using infrastructure provider docker:v0.3.23 Generating cluster configuration... Setting up bootstrapper... Fetching configuration for kind node image... kindConfig: &{{Cluster kind.x-k8s.io/v1alpha4} [{ map[] [{/var/run/docker.sock /var/run/docker.sock false false }] [] [] []}] { 0 100.96.0.0/11 100.64.0.0/13 false } map[] map[] [apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration imageRepository: projects.registry.vmware.com/tkg etcd: local: imageRepository: projects.registry.vmware.com/tkg imageTag: v3.4.13_vmware.15 dns: type: CoreDNS imageRepository: projects.registry.vmware.com/tkg imageTag: v1.8.0_vmware.5] [] [] []} Creating kind cluster: tkg-kind-c4n35n59481dtqqit4dg Creating cluster "tkg-kind-c4n35n59481dtqqit4dg" ... Ensuring node image (projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1) ... Pulling image: projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1 ... Preparing nodes ... Writing configuration ... 
Using the following kubeadm config for node tkg-kind-c4n35n59481dtqqit4dg-control-plane:apiServer: certSANs: - localhost - 127.0.0.1 extraArgs: runtime-config: ""apiVersion: kubeadm.k8s.io/v1beta2clusterName: tkg-kind-c4n35n59481dtqqit4dgcontrolPlaneEndpoint: tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443controllerManager: extraArgs: enable-hostpath-provisioner: "true"dns: imageRepository: projects.registry.vmware.com/tkg imageTag: v1.8.0_vmware.5 type: CoreDNSetcd: local: imageRepository: projects.registry.vmware.com/tkg imageTag: v3.4.13_vmware.15imageRepository: projects.registry.vmware.com/tkgkind: ClusterConfigurationkubernetesVersion: v1.21.2+vmware.1networking: podSubnet: 100.96.0.0/11 serviceSubnet: 100.64.0.0/13scheduler: extraArgs: null---apiVersion: kubeadm.k8s.io/v1beta2bootstrapTokens:- token: abcdef.0123456789abcdefkind: InitConfigurationlocalAPIEndpoint: advertiseAddress: 172.18.0.2 bindPort: 6443nodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.18.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n35n59481dtqqit4dg/tkg-kind-c4n35n59481dtqqit4dg-control-plane---apiVersion: kubeadm.k8s.io/v1beta2controlPlane: localAPIEndpoint: advertiseAddress: 172.18.0.2 bindPort: 6443discovery: bootstrapToken: apiServerEndpoint: tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443 token: abcdef.0123456789abcdef unsafeSkipCAVerification: truekind: JoinConfigurationnodeRegistration: criSocket: unix:///run/containerd/containerd.sock kubeletExtraArgs: fail-swap-on: "false" node-ip: 172.18.0.2 node-labels: "" provider-id: kind://docker/tkg-kind-c4n35n59481dtqqit4dg/tkg-kind-c4n35n59481dtqqit4dg-control-plane---apiVersion: kubelet.config.k8s.io/v1beta1cgroupDriver: cgroupfsevictionHard: imagefs.available: 0% nodefs.available: 0% nodefs.inodesFree: 0%imageGCHighThresholdPercent: 100kind: KubeletConfiguration---apiVersion: kubeproxy.config.k8s.io/v1alpha1conntrack: maxPerCore: 0iptables: minSyncPeriod: 1skind: KubeProxyConfigurationmode: iptables Starting control-plane ... 
I0831 13:48:04.052139 201 initconfiguration.go:246] loading configuration from "/kind/kubeadm.conf"[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta2, Kind=JoinConfigurationI0831 13:48:04.063434 201 certs.go:110] creating a new certificate authority for ca[init] Using Kubernetes version: v1.21.2+vmware.1[certs] Using certificateDir folder "/etc/kubernetes/pki"[certs] Generating "ca" certificate and keyI0831 13:48:04.297472 201 certs.go:487] validating certificate period for ca certificate[certs] Generating "apiserver" certificate and key[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost tkg-kind-c4n35n59481dtqqit4dg-control-plane] and IPs [100.64.0.1 172.18.0.2 127.0.0.1][certs] Generating "apiserver-kubelet-client" certificate and keyI0831 13:48:04.965162 201 certs.go:110] creating a new certificate authority for front-proxy-ca[certs] Generating "front-proxy-ca" certificate and keyI0831 13:48:05.700441 201 certs.go:487] validating certificate period for front-proxy-ca certificate[certs] Generating "front-proxy-client" certificate and keyI0831 13:48:05.871746 201 certs.go:110] creating a new certificate authority for etcd-ca[certs] Generating "etcd/ca" certificate and keyI0831 13:48:06.128019 201 certs.go:487] validating certificate period for etcd/ca certificate[certs] Generating "etcd/server" certificate and key[certs] etcd/server serving cert is signed for DNS names [localhost tkg-kind-c4n35n59481dtqqit4dg-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1][certs] Generating "etcd/peer" certificate and key[certs] etcd/peer serving cert is signed for DNS names [localhost tkg-kind-c4n35n59481dtqqit4dg-control-plane] and IPs [172.18.0.2 127.0.0.1 ::1][certs] Generating "etcd/healthcheck-client" certificate and key[certs] Generating "apiserver-etcd-client" certificate and keyI0831 13:48:06.718504 201 certs.go:76] creating new public/private key files for signing service account users[certs] Generating "sa" key and public key[kubeconfig] Using kubeconfig folder "/etc/kubernetes"I0831 13:48:07.069370 201 kubeconfig.go:101] creating kubeconfig file for admin.conf[kubeconfig] Writing "admin.conf" kubeconfig fileI0831 13:48:07.426719 201 kubeconfig.go:101] creating kubeconfig file for kubelet.conf[kubeconfig] Writing "kubelet.conf" kubeconfig fileI0831 13:48:07.748620 201 kubeconfig.go:101] creating kubeconfig file for controller-manager.conf[kubeconfig] Writing "controller-manager.conf" kubeconfig fileI0831 13:48:07.987839 201 kubeconfig.go:101] creating kubeconfig file for scheduler.conf[kubeconfig] Writing "scheduler.conf" kubeconfig fileI0831 13:48:08.117344 201 kubelet.go:63] Stopping the kubelet[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"[kubelet-start] Starting the kubelet[control-plane] Using manifest folder "/etc/kubernetes/manifests"[control-plane] Creating static Pod manifest for "kube-apiserver"I0831 13:48:08.209592 201 manifests.go:96] [control-plane] getting StaticPodSpecsI0831 13:48:08.210382 201 certs.go:487] validating certificate period for CA certificateI0831 13:48:08.210509 201 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-apiserver"I0831 13:48:08.210541 201 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"I0831 
13:48:08.210545 201 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"I0831 13:48:08.210694 201 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"I0831 13:48:08.210706 201 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"I0831 13:48:08.216447 201 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"I0831 13:48:08.216493 201 manifests.go:96] [control-plane] getting StaticPodSpecs[control-plane] Creating static Pod manifest for "kube-controller-manager"I0831 13:48:08.217013 201 manifests.go:109] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"I0831 13:48:08.217049 201 manifests.go:109] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"I0831 13:48:08.217054 201 manifests.go:109] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"I0831 13:48:08.217059 201 manifests.go:109] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"I0831 13:48:08.217062 201 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"I0831 13:48:08.217064 201 manifests.go:109] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"I0831 13:48:08.217067 201 manifests.go:109] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"[control-plane] Creating static Pod manifest for "kube-scheduler"I0831 13:48:08.217952 201 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"I0831 13:48:08.217994 201 manifests.go:96] [control-plane] getting StaticPodSpecsI0831 13:48:08.218501 201 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"I0831 13:48:08.219206 201 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"I0831 13:48:08.220416 201 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"I0831 13:48:08.220465 201 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthyI0831 13:48:08.221461 201 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0sI0831 13:48:08.226868 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 3 millisecondsI0831 13:48:08.729884 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:09.230524 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:09.708873 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:10.211297 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 4 millisecondsI0831 13:48:10.710520 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 3 millisecondsI0831 13:48:11.208138 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:11.710270 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:12.207335 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:12.707389 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:13.208036 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:13.707662 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:14.207223 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:14.708030 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 2 millisecondsI0831 13:48:15.207046 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s in 1 millisecondsI0831 13:48:20.692076 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4986 millisecondsI0831 13:48:20.720043 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 13 millisecondsI0831 13:48:21.207756 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 millisecondsI0831 13:48:21.707712 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds[apiclient] All control plane components are healthy after 14.008954 secondsI0831 13:48:22.210037 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/healthz?timeout=10s 200 OK in 3 millisecondsI0831 13:48:22.211070 201 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" NamespaceI0831 13:48:22.217347 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 
Created in 3 millisecondsI0831 13:48:22.221354 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 millisecondsI0831 13:48:22.225690 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 3 millisecondsI0831 13:48:22.226555 201 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the clusterI0831 13:48:22.230651 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 3 millisecondsI0831 13:48:22.233470 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 millisecondsI0831 13:48:22.236983 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 3 millisecondsI0831 13:48:22.237365 201 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane nodeI0831 13:48:22.237448 201 patchnode.go:30] [patchnode] Uploading the CRI Socket information "unix:///run/containerd/containerd.sock" to the Node API object "tkg-kind-c4n35n59481dtqqit4dg-control-plane" as an annotationI0831 13:48:22.741170 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/nodes/tkg-kind-c4n35n59481dtqqit4dg-control-plane?timeout=10s 200 OK in 3 millisecondsI0831 13:48:22.747321 201 round_trippers.go:454] PATCH https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/nodes/tkg-kind-c4n35n59481dtqqit4dg-control-plane?timeout=10s 200 OK in 3 milliseconds[upload-certs] Skipping phase. 
Please see --upload-certs[mark-control-plane] Marking the node tkg-kind-c4n35n59481dtqqit4dg-control-plane as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers][mark-control-plane] Marking the node tkg-kind-c4n35n59481dtqqit4dg-control-plane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]I0831 13:48:23.250920 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/nodes/tkg-kind-c4n35n59481dtqqit4dg-control-plane?timeout=10s 200 OK in 2 millisecondsI0831 13:48:23.255598 201 round_trippers.go:454] PATCH https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/nodes/tkg-kind-c4n35n59481dtqqit4dg-control-plane?timeout=10s 200 OK in 3 milliseconds[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC RolesI0831 13:48:23.258194 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/secrets/bootstrap-token-abcdef?timeout=10s 404 Not Found in 1 millisecondsI0831 13:48:23.264924 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/secrets?timeout=10s 201 Created in 4 milliseconds[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodesI0831 13:48:23.269438 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 3 millisecondsI0831 13:48:23.273412 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 3 milliseconds[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentialsI0831 13:48:23.276565 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap TokenI0831 13:48:23.278932 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 milliseconds[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the clusterI0831 13:48:23.281182 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 1 millisecondsI0831 13:48:23.281412 201 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespaceI0831 13:48:23.282097 201 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confI0831 13:48:23.282134 201 clusterinfo.go:56] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfigI0831 13:48:23.282454 201 clusterinfo.go:68] [bootstrap-token] creating/updating ConfigMap in kube-public namespaceI0831 13:48:23.285017 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-public/configmaps?timeout=10s 201 Created in 2 millisecondsI0831 
13:48:23.285211 201 clusterinfo.go:82] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespaceI0831 13:48:23.287658 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles?timeout=10s 201 Created in 1 millisecondsI0831 13:48:23.290005 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.290294 201 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and keyI0831 13:48:23.291277 201 loader.go:372] Config loaded from file: /etc/kubernetes/kubelet.confI0831 13:48:23.291734 201 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotationI0831 13:48:23.408995 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 2 millisecondsI0831 13:48:23.414967 201 round_trippers.go:454] GET https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 1 millisecondsI0831 13:48:23.417939 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.420882 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.423609 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.427276 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.440387 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?timeout=10s 201 Created in 7 millisecondsI0831 13:48:23.449225 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/services?timeout=10s 201 Created in 4 milliseconds[addons] Applied essential addon: CoreDNSI0831 13:48:23.452284 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.455843 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.466460 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/apps/v1/namespaces/kube-system/daemonsets?timeout=10s 201 Created in 6 millisecondsI0831 13:48:23.469310 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s 201 Created in 2 millisecondsI0831 
13:48:23.472315 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 2 millisecondsI0831 13:48:23.666984 201 request.go:600] Waited for 194.315569ms due to client-side throttling, not priority and fairness, request: POST:https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10sI0831 13:48:23.669837 201 round_trippers.go:454] POST https://tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s 201 Created in 2 milliseconds[addons] Applied essential addon: kube-proxyI0831 13:48:23.670421 201 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confI0831 13:48:23.671108 201 loader.go:372] Config loaded from file: /etc/kubernetes/admin.confYour Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user: mkdir -p $HOME/.kube sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/configAlternatively, if you are the root user, you can run: export KUBECONFIG=/etc/kubernetes/admin.confYou should now deploy a pod network to the cluster.Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/You can now join any number of control-plane nodes by copying certificate authoritiesand service account keys on each node and then running the following as root: kubeadm join tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443 --token \--discovery-token-ca-cert-hash sha256:5b4694e7a92757af0abaf2a38c38dd893d7d04eef2e454fe740faacaea35bb45 \--control-plane Then you can join any number of worker nodes by running the following on each as root:kubeadm join tkg-kind-c4n35n59481dtqqit4dg-control-plane:6443 --token \--discovery-token-ca-cert-hash sha256:5b4694e7a92757af0abaf2a38c38dd893d7d04eef2e454fe740faacaea35bb45 Installing CNI ... Installing StorageClass ... Waiting 2m0s for control-plane = Ready ... Ready after 27s Bootstrapper created. Kubeconfig: /Users/jmcbride/.kube-tkg/tmp/config_bodtTtbF Installing providers on bootstrapper... 
installed Component=="cluster-api" Type=="CoreProvider" Version=="v0.3.23" installed Component=="kubeadm" Type=="BootstrapProvider" Version=="v0.3.23" installed Component=="kubeadm" Type=="ControlPlaneProvider" Version=="v0.3.23" installed Component=="docker" Type=="InfrastructureProvider" Version=="v0.3.23" Waiting for provider control-plane-kubeadm Waiting for provider infrastructure-docker Waiting for provider bootstrap-kubeadm Waiting for provider cluster-api Waiting for resource capd-controller-manager of type *v1.Deployment to be up and running Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running Passed waiting on provider control-plane-kubeadm after 15.07945518s Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running Passed waiting on provider bootstrap-kubeadm after 15.149574107s Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running Passed waiting on provider cluster-api after 15.179617039s pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying pods are not yet running for deployment 'capd-controller-manager' in namespace 'capd-system', retrying Passed waiting on provider infrastructure-docker after 30.066577439s Success waiting on all providers. Start creating standalone cluster... 
patch cluster object with operation status: { "metadata": { "annotations": { "TKGOperationInfo" : "{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2021-08-31 13:50:14.798832 +0000 UTC\",\"OperationTimeout\":1800}", "TKGOperationLastObservedTimestamp" : "2021-08-31 13:50:14.798832 +0000 UTC" } } } regionContext: {johns-test kind-tkg-kind-c4n35n59481dtqqit4dg /Users/jmcbride/.kube-tkg/tmp/config_bodtTtbF Failed false}cluster state is unchanged 1 [cluster control plane is still being initialized, cluster infrastructure is still being provisioned], retrying cluster control plane is still being initialized, retrying cluster state is unchanged 1 cluster control plane is still being initialized, retrying cluster state is unchanged 2 cluster control plane is still being initialized, retrying cluster state is unchanged 3 cluster control plane is still being initialized, retrying Getting secret for cluster Waiting for resource johns-test-kubeconfig of type *v1.Secret to be up and running Mac and CAPD environment detected, fixing Kubeconfig Saving standalone cluster kubeconfig into /Users/jmcbride/.kube/config Waiting for bootstrap cluster to get ready for save ... Waiting for resource johns-test of type *v1alpha3.Cluster to be up and running Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying worker nodes are still being created for MachineDeployment 'johns-test-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying Waiting for resources type *v1alpha3.MachineList to be up and running Waiting for addons installation... Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running Waiting for resource antrea-controller of type *v1.Deployment to be up and running pods are not yet running for deployment 'antrea-controller' in namespace 'kube-system', retrying pods are not yet running for deployment 'antrea-controller' in namespace 'kube-system', retrying Moving all Cluster API objects from bootstrap cluster to standalone cluster... Context set for standalone cluster johns-test as 'johns-test-admin@johns-test'. Deleting kind cluster: tkg-kind-c4n35n59481dtqqit4dg Standalone cluster created! 
tanzu standalone-cluster create johns-test -i docker 31.86s user 13.78s system 10% cpu 7:34.83 total ```

Specs

Kernel

❯ uname -a
Darwin jmcbride-a01.vmware.com 20.6.0 Darwin Kernel Version 20.6.0: Wed Jun 23 00:26:31 PDT 2021; root:xnu-7195.141.2~5/RELEASE_X86_64 x86_64

CPU

❯ sysctl -a | grep machdep.cpu
machdep.cpu.xsave.extended_state: 31 832 1088 0
machdep.cpu.xsave.extended_state1: 15 832 256 0
machdep.cpu.tlb.data.small: 64
machdep.cpu.tlb.data.small_level1: 64
machdep.cpu.tlb.inst.large: 8
machdep.cpu.thermal.ACNT_MCNT: 1
machdep.cpu.thermal.core_power_limits: 1
machdep.cpu.thermal.dynamic_acceleration: 1
machdep.cpu.thermal.energy_policy: 1
machdep.cpu.thermal.fine_grain_clock_mod: 1
machdep.cpu.thermal.hardware_feedback: 0
machdep.cpu.thermal.invariant_APIC_timer: 1
machdep.cpu.thermal.package_thermal_intr: 1
machdep.cpu.thermal.sensor: 1
machdep.cpu.thermal.thresholds: 2
machdep.cpu.mwait.extensions: 3
machdep.cpu.mwait.linesize_max: 64
machdep.cpu.mwait.linesize_min: 64
machdep.cpu.mwait.sub_Cstates: 286531872
machdep.cpu.cache.L2_associativity: 4
machdep.cpu.cache.linesize: 64
machdep.cpu.cache.size: 256
machdep.cpu.arch_perf.events: 0
machdep.cpu.arch_perf.events_number: 7
machdep.cpu.arch_perf.fixed_number: 3
machdep.cpu.arch_perf.fixed_width: 48
machdep.cpu.arch_perf.number: 4
machdep.cpu.arch_perf.version: 4
machdep.cpu.arch_perf.width: 48
machdep.cpu.address_bits.physical: 39
machdep.cpu.address_bits.virtual: 48
machdep.cpu.tsc_ccc.denominator: 2
machdep.cpu.tsc_ccc.numerator: 226
machdep.cpu.brand: 0
machdep.cpu.brand_string: Intel(R) Core(TM) i7-8559U CPU @ 2.70GHz
machdep.cpu.core_count: 4
machdep.cpu.cores_per_package: 8
machdep.cpu.extfamily: 0
machdep.cpu.extfeature_bits: 1241984796928
machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI
machdep.cpu.extmodel: 8
machdep.cpu.family: 6
machdep.cpu.feature_bits: 9221959987971750911
machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
machdep.cpu.leaf7_feature_bits: 43804591 0
machdep.cpu.leaf7_feature_bits_edx: 2617257472
machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 AVX2 SMEP BMI2 ERMS INVPCID FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT MDCLEAR TSXFA IBRS STIBP L1DF SSBD
machdep.cpu.logical_per_package: 16
machdep.cpu.max_basic: 22
machdep.cpu.max_ext: 2147483656
machdep.cpu.microcode_version: 234
machdep.cpu.model: 142
machdep.cpu.processor_flag: 6
machdep.cpu.signature: 526058
machdep.cpu.stepping: 10
machdep.cpu.thread_count: 8
machdep.cpu.vendor: GenuineIntel

Memory

❯ top -l 1 -s 0 | grep PhysMem
PhysMem: 15G used (3486M wired), 662M unused.

Docker info

❯ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.6.1-docker)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.1)
  scan: Docker Scan (Docker Inc., v0.8.0)

Server:
 Containers: 3
  Running: 3
  Paused: 0
  Stopped: 0
 Images: 2
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.47-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 11.7GiB
 Name: docker-desktop
 ID: ALQA:M5GV:ELQH:EOHL:2NSY:LTP4:QRDA:IUKA:XHKN:OYHH:PUR6:NZBW
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 69
  Goroutines: 62
  System Time: 2021-08-31T14:03:11.338213662Z
  EventsListeners: 4
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
karuppiah7890 commented 3 years ago

I'll try it out again and get some logs from the controllers in the bootstrap cluster. Looks like we're treating this as a resource issue / more of a "works on my machine" thing?
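
For reference, this is roughly what I have in mind for grabbing the controller logs, just a sketch: the kind cluster name below is from my earlier run and changes on every run, so treat it as a placeholder.

```
# Run while `tanzu standalone-cluster create` is still in progress;
# the tkg-kind-* bootstrap cluster is deleted once the create succeeds.
kind get clusters   # find the tkg-kind-* bootstrap cluster name
kubectl --context kind-tkg-kind-c4n220d94813ujosgmkg -n capd-system \
  logs deploy/capd-controller-manager --all-containers --tail=200
kubectl --context kind-tkg-kind-c4n220d94813ujosgmkg -n capi-system \
  logs deploy/capi-controller-manager --all-containers --tail=200
```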

karuppiah7890 commented 3 years ago

I have been noticing this (the longer creation time) many times on my machine, though I never checked the exact time, since I always left it running in the background and was more focused on trying other platforms for the automation. I'm not sure if this happens only with v0.7.0; I can't recall now, and I can't check my logs since they don't record the TCE version. But I can try to dig up the issue and post here. @ShwethaKumbla has been noticing this too, but she mentioned v0.6.0 worked better and faster, so I assumed it's just v0.7.0

But if we don't get much data, I can close the issue 👍

jpmcb commented 3 years ago

> Operating System (client): MacOS
> ...
> / # uname -a
> Linux docker-desktop 5.10.47-linuxkit #1 SMP Sat Jul 3 21:51:47 UTC 2021 x86_64 Linux

Hmmm, just curious, something doesn't seem right here: I'd expect an Apple Darwin kernel, not a Linux one. If I had to guess, this is from within some Docker Desktop instance?

karuppiah7890 commented 3 years ago

Yes. This is from within the Docker Desktop instance. https://www.krenger.ch/blog/docker-desktop-for-mac-ssh-into-the-docker-vm/
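
For anyone else reproducing this, one common way (not from this issue, just the general approach) to get a shell inside the Docker Desktop Linux VM is to nsenter into PID 1 from a privileged container:

```
# Rough sketch: enter the Docker Desktop VM's namespaces from a throwaway container.
docker run -it --rm --privileged --pid=host alpine nsenter -t 1 -m -u -n -i sh
# From that shell, uname -a, /proc/meminfo and /proc/cpuinfo describe the VM, not macOS.
```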

joshrosso commented 3 years ago

Due to the variability in an arbitrary user's desktop, we'd need tests involving timing or performance to be run on a freshly-provisioned, consistent workstation.

Can we provision an instance in AWS to get a more objective number?

https://aws.amazon.com/ec2/instance-types/mac/
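
Roughly, since mac1.metal runs on a Dedicated Host, provisioning would look something like the sketch below; the AMI ID, availability zone, and key name are placeholders, not values from this issue.

```
# Sketch: allocate a dedicated host, then launch a macOS instance on it.
aws ec2 allocate-hosts --instance-type mac1.metal --availability-zone us-east-1a --quantity 1
aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx --instance-type mac1.metal \
  --key-name my-key --placement "Tenancy=host,HostId=h-xxxxxxxxxxxxxxxxx"
```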

karuppiah7890 commented 3 years ago

The Docker containers run in the Docker Desktop VM, so I got that data. I don't think my machine's info is needed?

ShwethaKumbla commented 3 years ago

I also have a good amount of resources allocated to Docker (see attached image).

I am also observing more than 25 minutes to provision a standalone cluster.

jpmcb commented 3 years ago

All the necessary Docker Desktop VM information should be included in `docker info`.

The specs for the host machine are still useful, since they are the underlying resources utilized by Docker Desktop.
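
For example, something like this (just a convenience; it assumes the standard `docker info` template fields) pulls out the two numbers we care about:

```
# Quick check of the CPU and memory the Docker Desktop VM actually sees.
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
```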

joshrosso commented 3 years ago

It's hard to say what could be going wrong on a desktop where there are limitless things that could impact performance.

If we believe there is a timing or performance issue, let's get machines provisioned in a consistent way, so that data collection and troubleshooting don't become a rabbit hole.

karuppiah7890 commented 3 years ago

Sure, I can try that, @joshrosso. I'll try it when I get to it. This was mainly a "worker nodes get deleted and recreated" issue, but I think I'll spend time digging into it and posting the data here. I'll change the issue name too. My rough plan is to watch the Cluster API objects in the bootstrap cluster while the create runs, along the lines of the sketch below.
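
The commands below are only a sketch; the MachineDeployment name is from my earlier run and will differ, and the `default` namespace is an assumption.

```
# Run against the tkg-kind-* bootstrap cluster context while the create is in flight.
kubectl get machines -A -w                                                # watch workers being created and replaced
kubectl get machinedeployments -A
kubectl describe machinedeployment guest-cluster-13202-md-0 -n default   # namespace is an assumption
kubectl get machinehealthchecks -A                                        # a remediation here would explain replaced workers
kubectl get events -A --sort-by=.lastTimestamp | tail -n 40
```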

joshrosso commented 3 years ago

Thanks 👍

jorgemoralespou commented 3 years ago

Also, network speed heavily affects the process, so it would be good if you could provide those metrics as well.
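
For instance, a rough proxy would be to time the pull of the kind node image, since it's one of the bigger downloads during bootstrap; the tag below is the one that appears in the logs earlier in this issue.

```
# Rough network check; image tag taken from the bootstrap logs above.
time docker pull projects-stg.registry.vmware.com/tkg/kind/node:v1.21.2_vmware.1
```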

karuppiah7890 commented 3 years ago

I'll close this issue for now. I'll reopen it when I work on this, gather some data, and notice something other than a resource-constraint issue