Project31 / ansible-kubernetes-openshift-pi3

Ansible playbooks for setting up a Kubernetes Raspberry Pi 3 cluster

After clean Install: Port occupied #36

Open · dklueh79 opened this issue 7 years ago

dklueh79 commented 7 years ago

During ansible-playbook -i hosts kubernetes.yml:

TASK [kubernetes : Run kubeadm init on master] ****
fatal: [192.168.0.230]: FAILED! => {"changed": true, "cmd": ["kubeadm", "init", "--config", "/etc/kubernetes/kubeadm.yml"], "delta": "0:00:06.811351", "end": "2017-10-22 15:50:01.583502", "failed": true, "rc": 2, "start": "2017-10-22 15:49:54.772151", "stderr": "[preflight] Some fatal errors occurred:\n\tPort 10250 is in use\n\tPort 10251 is in use\n\tPort 10252 is in use\n\t/etc/kubernetes/manifests is not empty\n\tPort 2379 is in use\n\t/var/lib/etcd is not empty\n[preflight] If you know what you are doing, you can skip pre-flight checks with --skip-preflight-checks", "stdout": "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.\n[init] Using Kubernetes version: v1.8.2-beta.0\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks", "stdout_lines": ["[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.", "[init] Using Kubernetes version: v1.8.2-beta.0", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks"], "warnings": []}
to retry, use: --limit @/root/k8s-pi/kubernetes.retry

rhuss commented 7 years ago

Sorry, the current check for whether Kubernetes is already running is a bit limited. It uses kubectl get nodes, and if that fails with exit code 1 it is assumed that no cluster is running, so kubeadm init is called again.

I think this should be made more robust. Any ideas? (Maybe we should run kubeadm upgrade plan or something similar; see the sketch below.)
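For illustration, here is a minimal sketch of the detection-plus-init pattern described above, written as Ansible tasks. The task names, the registered variables, and the extra admin.conf check are assumptions for this sketch and will not match the playbook's actual code one-to-one; the idea of also looking for an existing /etc/kubernetes/admin.conf is just one possible way to make the check more robust:

- name: Check whether a cluster is already running
  command: kubectl get nodes
  register: kubectl_nodes
  failed_when: false
  changed_when: false

- name: Check for an existing kubeadm-generated admin.conf
  stat:
    path: /etc/kubernetes/admin.conf
  register: admin_conf

- name: Run kubeadm init on master
  command: kubeadm init --config /etc/kubernetes/kubeadm.yml
  when: kubectl_nodes.rc != 0 and not admin_conf.stat.exists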

dklueh79 commented 7 years ago

Is there any solution for completing the Kubernetes setup?

rhuss commented 7 years ago

@dklueh79 What do you mean? Actually, the current detection works when Kubernetes has been properly installed and the nodes are running; in that case the kubeadm init step is skipped. However, when the initial setup didn't work and you are left in a half-baked state, kubectl get nodes fails, but kubeadm init fails as well. In that situation you should do a full reset.

So you should try a full reset when this error occurs for you, before trying again:

ansible-playbook -i hosts kubernetes-full-reset.yml
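For context, a full reset essentially means tearing down any existing kubeadm state on the nodes and removing the leftovers that the preflight checks complained about (the static manifests directory and the etcd data directory). The following is only a minimal sketch of such a cleanup as Ansible tasks; the actual steps in kubernetes-full-reset.yml may differ, and the paths used here are simply the ones reported in the error above:

- name: Tear down any existing kubeadm state
  command: kubeadm reset
  ignore_errors: true

- name: Remove leftover manifests and etcd data
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - /etc/kubernetes/manifests
    - /var/lib/etcd

After the reset has run, re-run the original setup with ansible-playbook -i hosts kubernetes.yml.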