carlosroman / ansible-k8s-raspberry-playbook

Ansible playbook to setup a k8s cluster on the Raspberry PI
Apache License 2.0

Stuck at TASK [Kube init] #1

Open moritz31 opened 6 years ago

moritz31 commented 6 years ago

I'm currently stuck at TASK [Kube init]; it has been hanging for about 30 minutes. When I tried installing by hand, I found that 1.9.6 works after about 5 minutes, but 1.10.1 seems to get stuck somewhere. Can you maybe help me?

Error message after 45 min:

fatal: [master]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--token=ulb08a.n3x5aeltc320nrkc", "--apiserver-advertise-address=192.168.1.97"], "delta": "0:32:48.680498", "end": "2018-04-18 21:06:59.103374", "msg": "non-zero return code", "rc": 1, "start": "2018-04-18 20:34:10.422876", "stderr": "\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03\n\t[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\ncouldn't initialize a Kubernetes cluster", "stderr_lines": ["\t[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.2-ce. Max validated version: 17.03", "\t[WARNING FileExisting-crictl]: crictl not found in system path", "Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl", "couldn't initialize a Kubernetes cluster"], "stdout": "[init] Using Kubernetes version: v1.10.1\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n[certificates] Generated ca certificate and key.\n[certificates] Generated apiserver certificate and key.\n[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.97]\n[certificates] Generated apiserver-kubelet-client certificate and key.\n[certificates] Generated etcd/ca certificate and key.\n[certificates] Generated etcd/server certificate and key.\n[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]\n[certificates] Generated etcd/peer certificate and key.\n[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.1.97]\n[certificates] Generated etcd/healthcheck-client certificate and key.\n[certificates] Generated apiserver-etcd-client certificate and key.\n[certificates] Generated sa key and public key.\n[certificates] Generated front-proxy-ca certificate and key.\n[certificates] Generated front-proxy-client certificate and key.\n[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"\n[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/admin.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/controller-manager.conf\"\n[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/scheduler.conf\"\n[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"\n[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".\n[init] This might take a minute or longer if the control plane images have to be pulled.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required 
cgroups disabled)\n\t- Either there is no internet connection, or imagePullPolicy is set to \"Never\",\n\t so the kubelet cannot pull or find the following control plane images:\n\t\t- k8s.gcr.io/kube-apiserver-arm:v1.10.1\n\t\t- k8s.gcr.io/kube-controller-manager-arm:v1.10.1\n\t\t- k8s.gcr.io/kube-scheduler-arm:v1.10.1\n\t\t- k8s.gcr.io/etcd-arm:3.1.12 (only if no external etcd endpoints are configured)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'", "stdout_lines": ["[init] Using Kubernetes version: v1.10.1", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks.", "[certificates] Generated ca certificate and key.", "[certificates] Generated apiserver certificate and key.", "[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.97]", "[certificates] Generated apiserver-kubelet-client certificate and key.", "[certificates] Generated etcd/ca certificate and key.", "[certificates] Generated etcd/server certificate and key.", "[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]", "[certificates] Generated etcd/peer certificate and key.", "[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.1.97]", "[certificates] Generated etcd/healthcheck-client certificate and key.", "[certificates] Generated apiserver-etcd-client certificate and key.", "[certificates] Generated sa key and public key.", "[certificates] Generated front-proxy-ca certificate and key.", "[certificates] Generated front-proxy-client certificate and key.", "[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"", "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/admin.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/controller-manager.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/scheduler.conf\"", "[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"", "[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"", "[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"", "[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"", "[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".", "[init] This might take a minute or longer if the control plane images have to be pulled.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "\t- Either there is no internet connection, or imagePullPolicy is set to \"Never\",", "\t so the kubelet cannot pull or find the following control plane images:", "\t\t- k8s.gcr.io/kube-apiserver-arm:v1.10.1", "\t\t- k8s.gcr.io/kube-controller-manager-arm:v1.10.1", "\t\t- k8s.gcr.io/kube-scheduler-arm:v1.10.1", "\t\t- 
k8s.gcr.io/etcd-arm:3.1.12 (only if no external etcd endpoints are configured)", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'"]}
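
The tail of that output already suggests the next diagnostic step. On a systemd-based Raspbian install, the kubelet state can be inspected directly on the master; a minimal sketch of the suggested commands (the docker check is an extra assumption, in case the control plane containers were never started):

sudo systemctl status kubelet
sudo journalctl -xeu kubelet
# check whether any control plane containers were created at all
docker ps -a | grep -E 'kube|etcd'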

carlosroman commented 6 years ago

Thanks for raising an issue; sorry I didn't see it earlier. Are you able, by any chance, to pull the image manually? Also, k8s 1.10.2 was released recently. Have you had a chance to try it against that?
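
For reference, pulling the ARM control-plane images listed in the error output by hand would look roughly like this (a sketch, assuming Docker is the container runtime on the Pi and the image names/tags from the kubeadm output above):

docker pull k8s.gcr.io/kube-apiserver-arm:v1.10.1
docker pull k8s.gcr.io/kube-controller-manager-arm:v1.10.1
docker pull k8s.gcr.io/kube-scheduler-arm:v1.10.1
# only needed if no external etcd endpoints are configured
docker pull k8s.gcr.io/etcd-arm:3.1.12

If any of these pulls fail or stall, that would point at a network or registry issue rather than a kubeadm one.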