tom-diacono opened 2 years ago
I'm trying to follow the Ubuntu setup steps to get EKS Anywhere working on WSL2, but I keep hitting an issue with cert-manager.
As far as I can tell, nothing explicitly says whether WSL2 is or isn't supported, so apologies if this is a known limitation or not meant to work at all!
The output with high verbosity is as follows (I didn't know how high the verbosity level goes, so I just used 10).
$ eksctl anywhere -v 10 create cluster -f $CLUSTER_NAME.yaml
2022-09-05T21:48:05.835+0100 V4 Logger init completed {"vlevel": 10}
2022-09-05T21:48:05.835+0100 V6 Executing command {"cmd": "/usr/bin/docker version --format {{.Client.Version}}"}
2022-09-05T21:48:05.893+0100 V6 Executing command {"cmd": "/usr/bin/docker info --format '{{json .MemTotal}}'"}
2022-09-05T21:48:06.039+0100 V4 Reading bundles manifest {"url": "https://anywhere-assets.eks.amazonaws.com/releases/bundles/16/manifest.yaml"}
2022-09-05T21:48:06.093+0100 V2 Pulling docker image {"image": "public.ecr.aws/eks-anywhere/cli-tools:v0.11.2-eks-a-16"}
2022-09-05T21:48:06.093+0100 V6 Executing command {"cmd": "/usr/bin/docker pull public.ecr.aws/eks-anywhere/cli-tools:v0.11.2-eks-a-16"}
2022-09-05T21:48:17.299+0100 V5 Retry execution successful {"retries": 1, "duration": "11.205992s"}
2022-09-05T21:48:17.299+0100 V3 Initializing long running container {"name": "eksa_1662410886093076000", "image": "public.ecr.aws/eks-anywhere/cli-tools:v0.11.2-eks-a-16"}
2022-09-05T21:48:17.299+0100 V6 Executing command {"cmd": "/usr/bin/docker run -d --name eksa_1662410886093076000 --network host -w /home/tom/projects/infrastructure -v /var/run/docker.sock:/var/run/docker.sock -v /home/tom/projects/infrastructure:/home/tom/projects/infrastructure --entrypoint sleep public.ecr.aws/eks-anywhere/cli-tools:v0.11.2-eks-a-16 infinity"}
2022-09-05T21:48:17.635+0100 V4 Task start {"task_name": "setup-validate"}
2022-09-05T21:48:17.635+0100 V0 Performing setup and validations
2022-09-05T21:48:17.635+0100 V0 Warning: The docker infrastructure provider is meant for local development and testing only
2022-09-05T21:48:17.635+0100 V0 ✅ Docker Provider setup is valid
2022-09-05T21:48:17.635+0100 V0 ✅ Validate certificate for registry mirror
2022-09-05T21:48:17.635+0100 V0 ✅ Validate authentication for git provider
2022-09-05T21:48:17.635+0100 V0 ✅ Create preflight validations pass
2022-09-05T21:48:17.635+0100 V4 Task finished {"task_name": "setup-validate", "duration": "151µs"}
2022-09-05T21:48:17.635+0100 V4 ----------------------------------
2022-09-05T21:48:17.635+0100 V4 Task start {"task_name": "bootstrap-cluster-init"}
2022-09-05T21:48:17.635+0100 V0 Creating new bootstrap cluster
2022-09-05T21:48:17.636+0100 V4 Creating kind cluster {"name": "new-test-cluster-eks-a-cluster", "kubeconfig": "new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T21:48:17.636+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kind create cluster --name new-test-cluster-eks-a-cluster --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --image public.ecr.aws/eks-anywhere/kubernetes-sigs/kind/node:v1.23.7-eks-d-1-23-4-eks-a-16 --config new-test-cluster/generated/kind_tmp.yaml"}
2022-09-05T21:49:18.484+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get namespace eksa-system --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T21:49:18.711+0100 V9 docker {"stderr": "Error from server (NotFound): namespaces \"eksa-system\" not found\n"}
2022-09-05T21:49:18.711+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl create namespace eksa-system --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T21:49:18.838+0100 V0 Provider specific pre-capi-install-setup on bootstrap cluster
2022-09-05T21:49:18.838+0100 V0 Installing cluster-api providers on bootstrap cluster
2022-09-05T21:49:23.223+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 clusterctl init --core cluster-api:v1.2.0+172e2ab --bootstrap kubeadm:v1.2.0+115b0d5 --control-plane kubeadm:v1.2.0+525dbd6 --infrastructure docker:v1.2.0+12e373a --config new-test-cluster/generated/clusterctl_tmp.yaml --bootstrap etcdadm-bootstrap:v1.0.5+77e4d45 --bootstrap etcdadm-controller:v1.0.4+05b4294 --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T21:49:52.261+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/capi-kubeadm-control-plane-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n capi-kubeadm-control-plane-system"}
2022-09-05T21:50:02.465+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/capi-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n capi-system"}
2022-09-05T21:50:02.602+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/cert-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n cert-manager"}
2022-09-05T21:50:02.735+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/cert-manager-cainjector --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n cert-manager"}
2022-09-05T21:50:02.891+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/cert-manager-webhook --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n cert-manager"}
2022-09-05T21:50:03.074+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/capi-kubeadm-bootstrap-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n capi-kubeadm-bootstrap-system"}
2022-09-05T21:50:03.236+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/etcdadm-controller-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n etcdadm-controller-system"}
2022-09-05T21:50:11.486+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/etcdadm-bootstrap-provider-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n etcdadm-bootstrap-provider-system"}
2022-09-05T21:50:11.620+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 30m --for=condition=Available deployments/capd-controller-manager --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n capd-system"}
2022-09-05T21:50:11.748+0100 V0 Provider specific post-setup
2022-09-05T21:50:11.748+0100 V4 Task finished {"task_name": "bootstrap-cluster-init", "duration": "1m54.1125469s"}
2022-09-05T21:50:11.748+0100 V4 ----------------------------------
2022-09-05T21:50:11.748+0100 V4 Task start {"task_name": "workload-cluster-init"}
2022-09-05T21:50:11.748+0100 V0 Creating new workload cluster
2022-09-05T21:50:11.748+0100 V5 Adding extraArgs {"cipher-suites": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
2022-09-05T21:50:11.748+0100 V5 Adding extraArgs {"tls-cipher-suites": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
2022-09-05T21:50:11.748+0100 V5 Adding extraArgs {"tls-cipher-suites": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
2022-09-05T21:50:11.748+0100 V5 Adding extraArgs {"tls-cipher-suites": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
2022-09-05T21:50:11.749+0100 V5 Adding extraArgs {"tls-cipher-suites": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"}
2022-09-05T21:50:11.750+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl apply -f - --namespace eksa-system --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T21:50:13.021+0100 V5 Retry execution successful {"retries": 1, "duration": "1.2708437s"}
2022-09-05T21:50:13.021+0100 V3 Waiting for external etcd to be ready {"cluster": "new-test-cluster"}
2022-09-05T21:50:13.021+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 60m --for=condition=ManagedEtcdReady clusters.cluster.x-k8s.io/new-test-cluster --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n eksa-system"}
2022-09-05T21:50:24.784+0100 V3 External etcd is ready
2022-09-05T21:50:24.784+0100 V3 Waiting for control plane to be ready
2022-09-05T21:50:24.785+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl wait --timeout 60m --for=condition=ControlPlaneReady clusters.cluster.x-k8s.io/new-test-cluster --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig -n eksa-system"}
2022-09-05T21:51:06.489+0100 V3 Waiting for workload kubeconfig generation {"cluster": "new-test-cluster"}
2022-09-05T21:51:06.490+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 clusterctl get kubeconfig new-test-cluster --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --namespace eksa-system"}
2022-09-05T21:51:06.665+0100 V6 Executing command {"cmd": "/usr/bin/docker port new-test-cluster-lb 6443/tcp"}
2022-09-05T21:51:06.721+0100 V5 Retry execution successful {"retries": 1, "duration": "231.7612ms"}
2022-09-05T21:51:06.721+0100 V3 Waiting for controlplane and worker machines to be ready
2022-09-05T21:51:06.721+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:06.861+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:06.861+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:07.017+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:07.017+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 1}
2022-09-05T21:51:07.017+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:08.017+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:08.154+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:08.154+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 2}
2022-09-05T21:51:08.154+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:09.154+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:09.290+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:09.290+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 3}
2022-09-05T21:51:09.290+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:10.290+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:10.428+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:10.428+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 4}
2022-09-05T21:51:10.429+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:11.429+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:11.577+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:11.577+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 5}
2022-09-05T21:51:11.577+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:12.577+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:12.713+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:12.713+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 6}
2022-09-05T21:51:12.713+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:13.713+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:13.847+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:13.847+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 7}
2022-09-05T21:51:13.847+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:14.847+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:14.985+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:14.985+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 8}
2022-09-05T21:51:14.985+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:15.985+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:16.121+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:16.121+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 9}
2022-09-05T21:51:16.121+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:17.122+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:17.271+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:17.271+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 10}
2022-09-05T21:51:17.271+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:18.271+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:18.424+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:18.424+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 11}
2022-09-05T21:51:18.424+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:19.425+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:19.564+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:19.564+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 12}
2022-09-05T21:51:19.564+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:20.565+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:20.715+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:20.715+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 13}
2022-09-05T21:51:20.715+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:21.716+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:21.859+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:21.859+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 14}
2022-09-05T21:51:21.859+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:22.860+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:23.005+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:23.005+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 15}
2022-09-05T21:51:23.005+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:24.005+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:24.145+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:24.145+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 16}
2022-09-05T21:51:24.145+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:25.145+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:25.281+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:25.281+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 17}
2022-09-05T21:51:25.281+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:26.281+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:26.415+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:26.415+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 18}
2022-09-05T21:51:26.415+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:27.415+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:27.551+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:27.551+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 19}
2022-09-05T21:51:27.551+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:28.551+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:28.688+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:28.688+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 20}
2022-09-05T21:51:28.689+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:29.689+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:29.823+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:29.823+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 21}
2022-09-05T21:51:29.824+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:30.824+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:30.957+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:30.957+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 22}
2022-09-05T21:51:30.957+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:31.957+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:32.095+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:32.095+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 23}
2022-09-05T21:51:32.095+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:33.095+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:33.252+0100 V4 Nodes are not ready yet {"total": 2, "ready": 1, "cluster name": "new-test-cluster"}
2022-09-05T21:51:33.252+0100 V5 Error happened during retry {"error": "nodes are not ready yet", "retries": 24}
2022-09-05T21:51:33.252+0100 V5 Sleeping before next retry {"time": "1s"}
2022-09-05T21:51:34.254+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get machines -o json --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --selector=cluster.x-k8s.io/cluster-name=new-test-cluster --namespace eksa-system"}
2022-09-05T21:51:34.390+0100 V4 Nodes ready {"total": 2}
2022-09-05T21:51:34.390+0100 V5 Retry execution successful {"retries": 25, "duration": "27.5288965s"}
2022-09-05T21:51:34.390+0100 V0 Installing networking on workload cluster
2022-09-05T21:51:34.391+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i -e HELM_EXPERIMENTAL_OCI=1 -e HTTPS_PROXY= -e HTTP_PROXY= -e NO_PROXY= eksa_1662410886093076000 helm template oci://public.ecr.aws/isovalent/cilium --version 1.10.14-eksa.1 --namespace kube-system --kube-version 1.23 --insecure-skip-tls-verify -f -"}
2022-09-05T21:51:35.442+0100 V5 Retry execution successful {"retries": 1, "duration": "1.0519189s"}
2022-09-05T21:51:35.442+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl apply -f - --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T21:51:36.216+0100 V5 Retry execution successful {"retries": 1, "duration": "773.6948ms"}
2022-09-05T21:51:36.216+0100 V0 Creating EKS-A namespace
2022-09-05T21:51:36.216+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl get namespace eksa-system --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T21:51:36.364+0100 V9 docker {"stderr": "Error from server (NotFound): namespaces \"eksa-system\" not found\n"}
2022-09-05T21:51:36.364+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl create namespace eksa-system --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T21:51:36.499+0100 V0 Installing cluster-api providers on workload cluster
2022-09-05T21:51:37.024+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 clusterctl init --core cluster-api:v1.2.0+172e2ab --bootstrap kubeadm:v1.2.0+115b0d5 --control-plane kubeadm:v1.2.0+525dbd6 --infrastructure docker:v1.2.0+12e373a --config new-test-cluster/generated/clusterctl_tmp.yaml --bootstrap etcdadm-bootstrap:v1.0.5+77e4d45 --bootstrap etcdadm-controller:v1.0.4+05b4294 --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:21:42.604+0100 V9 docker {"stderr": "Fetching providers\nUsing Override=\"core-components.yaml\" Provider=\"cluster-api\" Version=\"v1.2.0+172e2ab\"\nUsing Override=\"bootstrap-components.yaml\" Provider=\"bootstrap-kubeadm\" Version=\"v1.2.0+115b0d5\"\nUsing Override=\"bootstrap-components.yaml\" Provider=\"bootstrap-etcdadm-bootstrap\" Version=\"v1.0.5+77e4d45\"\nUsing Override=\"bootstrap-components.yaml\" Provider=\"bootstrap-etcdadm-controller\" Version=\"v1.0.4+05b4294\"\nUsing Override=\"control-plane-components.yaml\" Provider=\"control-plane-kubeadm\" Version=\"v1.2.0+525dbd6\"\nUsing Override=\"infrastructure-components-development.yaml\" Provider=\"infrastructure-docker\" Version=\"v1.2.0+12e373a\"\nInstalling cert-manager Version=\"v1.8.2+543ab1d\"\nUsing Override=\"cert-manager.yaml\" Provider=\"cert-manager\" Version=\"v1.8.2+543ab1d\"\nWaiting for cert-manager to be available...\nError: timed out waiting for the condition\n"}
2022-09-05T22:21:42.604+0100 V4 Task finished {"task_name": "workload-cluster-init", "duration": "31m30.8563363s"}
2022-09-05T22:21:42.604+0100 V4 ----------------------------------
2022-09-05T22:21:42.604+0100 V4 Task start {"task_name": "collect-cluster-diagnostics"}
2022-09-05T22:21:42.604+0100 V0 collecting cluster diagnostics
2022-09-05T22:21:42.604+0100 V0 collecting management cluster diagnostics
2022-09-05T22:21:42.608+0100 V3 bundle config written {"path": "new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:21:42+01:00-bundle.yaml"}
2022-09-05T22:21:42.608+0100 V1 creating temporary namespace for diagnostic collector {"namespace": "eksa-diagnostics"}
2022-09-05T22:21:42.608+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl create namespace eksa-diagnostics --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T22:21:42.744+0100 V5 Retry execution successful {"retries": 1, "duration": "136.1822ms"}
2022-09-05T22:21:42.744+0100 V1 creating temporary ClusterRole and RoleBinding for diagnostic collector
2022-09-05T22:21:42.744+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl apply -f - --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T22:21:43.164+0100 V5 Retry execution successful {"retries": 1, "duration": "419.9068ms"}
2022-09-05T22:21:43.164+0100 V0 ⏳ Collecting support bundle from cluster, this can take a while {"cluster": "bootstrap-cluster", "bundle": "new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:21:42+01:00-bundle.yaml", "since": "2022-09-05T19:21:42.608+0100", "kubeconfig": "new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T22:21:43.164+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 support-bundle new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:21:42+01:00-bundle.yaml --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig --interactive=false --since-time 2022-09-05T19:21:42.6084377+01:00"}
2022-09-05T22:22:39.705+0100 V0 Support bundle archive created {"path": "support-bundle-2022-09-05T21_21_43.tar.gz"}
2022-09-05T22:22:39.705+0100 V0 Analyzing support bundle {"bundle": "new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:21:42+01:00-bundle.yaml", "archive": "support-bundle-2022-09-05T21_21_43.tar.gz"}
2022-09-05T22:22:39.705+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 support-bundle analyze new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:21:42+01:00-bundle.yaml --bundle support-bundle-2022-09-05T21_21_43.tar.gz --output json"}
2022-09-05T22:22:39.954+0100 V0 Analysis output generated {"path": "new-test-cluster/generated/bootstrap-cluster-2022-09-05T22:22:39+01:00-analysis.yaml"}
2022-09-05T22:22:39.954+0100 V1 cleaning up temporary roles for diagnostic collectors
2022-09-05T22:22:39.954+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl delete -f - --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T22:22:40.094+0100 V5 Retry execution successful {"retries": 1, "duration": "140.0633ms"}
2022-09-05T22:22:40.095+0100 V1 cleaning up temporary namespace for diagnostic collectors {"namespace": "eksa-diagnostics"}
2022-09-05T22:22:40.095+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl delete namespace eksa-diagnostics --kubeconfig new-test-cluster/generated/new-test-cluster.kind.kubeconfig"}
2022-09-05T22:22:45.413+0100 V5 Retry execution successful {"retries": 1, "duration": "5.3181819s"}
2022-09-05T22:22:45.413+0100 V0 collecting workload cluster diagnostics
2022-09-05T22:22:45.415+0100 V3 bundle config written {"path": "new-test-cluster/generated/new-test-cluster-2022-09-05T22:22:45+01:00-bundle.yaml"}
2022-09-05T22:22:45.415+0100 V1 creating temporary namespace for diagnostic collector {"namespace": "eksa-diagnostics"}
2022-09-05T22:22:45.415+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl create namespace eksa-diagnostics --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:22:45.552+0100 V5 Retry execution successful {"retries": 1, "duration": "136.7286ms"}
2022-09-05T22:22:45.552+0100 V1 creating temporary ClusterRole and RoleBinding for diagnostic collector
2022-09-05T22:22:45.552+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl apply -f - --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:22:46.342+0100 V5 Retry execution successful {"retries": 1, "duration": "789.5724ms"}
2022-09-05T22:22:46.342+0100 V0 ⏳ Collecting support bundle from cluster, this can take a while {"cluster": "new-test-cluster", "bundle": "new-test-cluster/generated/new-test-cluster-2022-09-05T22:22:45+01:00-bundle.yaml", "since": "2022-09-05T19:22:45.415+0100", "kubeconfig": "new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:22:46.342+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 support-bundle new-test-cluster/generated/new-test-cluster-2022-09-05T22:22:45+01:00-bundle.yaml --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig --interactive=false --since-time 2022-09-05T19:22:45.4158354+01:00"}
2022-09-05T22:23:21.162+0100 V0 Support bundle archive created {"path": "support-bundle-2022-09-05T21_22_46.tar.gz"}
2022-09-05T22:23:21.162+0100 V0 Analyzing support bundle {"bundle": "new-test-cluster/generated/new-test-cluster-2022-09-05T22:22:45+01:00-bundle.yaml", "archive": "support-bundle-2022-09-05T21_22_46.tar.gz"}
2022-09-05T22:23:21.162+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 support-bundle analyze new-test-cluster/generated/new-test-cluster-2022-09-05T22:22:45+01:00-bundle.yaml --bundle support-bundle-2022-09-05T21_22_46.tar.gz --output json"}
2022-09-05T22:23:21.381+0100 V0 Analysis output generated {"path": "new-test-cluster/generated/new-test-cluster-2022-09-05T22:23:21+01:00-analysis.yaml"}
2022-09-05T22:23:21.381+0100 V1 cleaning up temporary roles for diagnostic collectors
2022-09-05T22:23:21.381+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl delete -f - --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:23:21.521+0100 V5 Retry execution successful {"retries": 1, "duration": "139.9445ms"}
2022-09-05T22:23:21.521+0100 V1 cleaning up temporary namespace for diagnostic collectors {"namespace": "eksa-diagnostics"}
2022-09-05T22:23:21.521+0100 V6 Executing command {"cmd": "/usr/bin/docker exec -i eksa_1662410886093076000 kubectl delete namespace eksa-diagnostics --kubeconfig new-test-cluster/new-test-cluster-eks-a-cluster.kubeconfig"}
2022-09-05T22:23:26.776+0100 V5 Retry execution successful {"retries": 1, "duration": "5.2545722s"}
2022-09-05T22:23:26.776+0100 V4 Task finished {"task_name": "collect-cluster-diagnostics", "duration": "1m44.1710923s"}
2022-09-05T22:23:26.776+0100 V4 ----------------------------------
2022-09-05T22:23:26.776+0100 V4 Saving checkpoint {"file": "new-test-cluster-checkpoint.yaml"}
2022-09-05T22:23:26.776+0100 V4 Tasks completed {"duration": "35m9.1412249s"}
2022-09-05T22:23:26.776+0100 V3 Cleaning up long running container {"name": "eksa_1662410886093076000"}
2022-09-05T22:23:26.776+0100 V6 Executing command {"cmd": "/usr/bin/docker rm -f -v eksa_1662410886093076000"}
Error: initializing capi resources in cluster: executing init: Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v1.2.0+172e2ab"
Using Override="bootstrap-components.yaml" Provider="bootstrap-kubeadm" Version="v1.2.0+115b0d5"
Using Override="bootstrap-components.yaml" Provider="bootstrap-etcdadm-bootstrap" Version="v1.0.5+77e4d45"
Using Override="bootstrap-components.yaml" Provider="bootstrap-etcdadm-controller" Version="v1.0.4+05b4294" Using Override="control-plane-components.yaml" Provider="control-plane-kubeadm" Version="v1.2.0+525dbd6" Using Override="infrastructure-components-development.yaml" Provider="infrastructure-docker" Version="v1.2.0+12e373a" Installing cert-manager Version="v1.8.2+543ab1d" Using Override="cert-manager.yaml" Provider="cert-manager" Version="v1.8.2+543ab1d" Waiting for cert-manager to be available... Error: timed out waiting for the condition
I've also attached the support bundles in case anyone finds them useful.
support-bundle-2022-09-05T21_21_43.tar.gz support-bundle-2022-09-05T21_22_46.tar.gz
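For anyone digging into the attachments: the support bundles are plain `.tar.gz` archives, so the cert-manager material can be located without any extra tooling. A minimal sketch (the archive name comes from this post; the member layout inside the bundle is not assumed, it just matches on a substring):

```python
import tarfile

def find_members(archive_path: str, needle: str = "cert-manager") -> list[str]:
    """Return the names of archive members whose path contains `needle`."""
    with tarfile.open(archive_path, "r:gz") as tar:
        return [m.name for m in tar.getmembers() if needle in m.name]

# Usage against the attached bootstrap-cluster bundle:
# for name in find_members("support-bundle-2022-09-05T21_21_43.tar.gz"):
#     print(name)
```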
If anyone needs any more info then please shout.
TIA :)
@tom-diacono Thanks for submitting this. WSL2 is not currently supported. We will add it as a future feature request.