Closed: jmreicha closed this issue 6 years ago
Hi @jmreicha.
This is a warning from Ansible. We'll work on this. But your cluster should be fine.
@chris-short Thanks for the quick response.
Unfortunately, none of the containers have come up on the master. There is an error: [ERROR KubeletVersion]: couldn't get kubelet version: exit status 2. Sure enough, when I run kubelet version I get a traceback:
pi@kube-master:~ $ kubelet version
unexpected fault address 0x15679b00
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x2 addr=0x15679b00 pc=0x15679b00]
goroutine 1 [running, locked to thread]:
runtime.throw(0x2a84a9e, 0x5)
/usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x15965e98 sp=0x15965e8c pc=0x3efa4
runtime.sigpanic()
/usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x15965ebc sp=0x15965e98 pc=0x5517c
k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x1539d038, 0x20945b4, 0x2a8fbcf, 0xb, 0x1538ba70)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x15965f58 sp=0x15965ec0 pc=0x206c8d8
goroutine 5 [chan receive]:
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
/workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
goroutine 50 [syscall]:
os/signal.signal_recv(0x2bd146c)
/usr/local/go/src/runtime/sigqueue.go:131 +0x134
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x30
One thing I have found is that others are also having issues with k8s 1.10.3, so I think I might make a PR for specifying the version, if I can get it working with an older version.
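For anyone who wants to try this before a PR lands, pinning the package versions in the install task is one option. This is only a sketch, not rak8s's actual task: the task name is hypothetical, and 1.9.7-00 is an assumed version string (substitute any version listed by apt-cache policy kubelet for your architecture).

```yaml
# Sketch: install pinned Kubernetes packages instead of the latest.
# 1.9.7-00 is an assumption; pick a version from `apt-cache policy kubelet`.
- name: Install pinned k8s packages
  apt:
    name: "{{ item }}"
    state: present
    force: yes   # permit downgrading if a newer version is already installed
  with_items:
    - kubelet=1.9.7-00
    - kubeadm=1.9.7-00
    - kubectl=1.9.7-00
```

Following up with sudo apt-mark hold kubelet kubeadm kubectl on the hosts would keep a later apt-get upgrade from pulling the broken version back in.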
I can confirm that the deployment of a Kubernetes cluster with rak8s isn't working any more.
I have tested the deployment of Kubernetes multiple times today, on fresh images, and I get the same error as jmreicha. A couple of weeks ago the deployment succeeded with the same setup. I expect that upstream changes in Kubernetes or Docker have caused the issue.
pi@ansible-node ~/git/rak8s (master) $ uname -a
Linux ansible-node 4.9.35-v7+ #1014 SMP Fri Jun 30 14:47:43 BST 2017 armv7l GNU/Linux
pi@ansible-node ~/git/rak8s (master) $ ansible --version
ansible 2.2.0.0
config file = /home/pi/git/rak8s/ansible.cfg
configured module search path = Default w/o overrides
pi@ansible-node ~/git/rak8s (master) $ ansible-playbook cluster.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [common : Enabling cgroup options at boot] ********************************
changed: [master]
changed: [node1]
changed: [node2]
TASK [common : apt-get update] *************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : apt-get upgrade] ************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Reboot] *********************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Wait for Reboot] ************************************************
ok: [master -> localhost]
ok: [node1 -> localhost]
ok: [node2 -> localhost]
TASK [kubeadm : Disable Swap] **************************************************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : Determine if docker is installed] ******************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [kubeadm : Run Docker Install Script] *************************************
changed: [master]
changed: [node2]
changed: [node1]
TASK [kubeadm : Pass bridged IPv4 traffic to iptables' chains] *****************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : Install apt-transport-https] ***********************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [kubeadm : Add Google Cloud Repo Key] *************************************
changed: [master]
[WARNING]: Consider using get_url or uri module rather than running curl
changed: [node1]
changed: [node2]
TASK [kubeadm : Add Kubernetes to Available apt Sources] ***********************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : apt-get update] ************************************************
changed: [node2]
changed: [master]
changed: [node1]
TASK [kubeadm : Install k8s Y'all] *********************************************
changed: [master] => (item=[u'kubelet', u'kubeadm', u'kubectl'])
changed: [node2] => (item=[u'kubelet', u'kubeadm', u'kubectl'])
changed: [node1] => (item=[u'kubelet', u'kubeadm', u'kubectl'])
PLAY [master] ******************************************************************
TASK [master : Reset Kubernetes Master] ****************************************
changed: [master]
TASK [master : Initialize Master] **********************************************
fatal: [master]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.11.210 --token=udy29x.ugyyk3tumg27atmr", "delta": "0:00:02.374371", "end": "2018-05-28 20:40:20.106626", "failed": true, "rc": 2, "start": "2018-05-28 20:40:17.732255", "stderr": "\t
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03\n\t
[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\n[preflight] Some fatal errors occurred:\n\t
[ERROR KubeletVersion]: couldn't get kubelet version: exit status 2\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stdout": "
[init] Using Kubernetes version: v1.10.3\n[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.", "stdout_lines": ["[init] Using Kubernetes version: v1.10.3", "[init] Using Authorization modes: [Node RBAC]", "[preflight] Running pre-flight checks."], "warnings": []}
PLAY RECAP *********************************************************************
master : ok=16 changed=9 unreachable=0 failed=1
node1 : ok=15 changed=8 unreachable=0 failed=0
node2 : ok=15 changed=8 unreachable=0 failed=0
On the master:
pi@master:~ $ sudo journalctl -u kubelet
-- Logs begin at Thu 2016-11-03 18:16:42 CET, end at Mon 2018-05-28 21:18:05 CEST. --
May 28 20:39:39 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:39:40 master kubelet[2897]: unexpected fault address 0x166c0d50
May 28 20:39:40 master kubelet[2897]: fatal error: fault
May 28 20:39:40 master kubelet[2897]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x166c0d50 pc=0x166c0d50]
May 28 20:39:40 master kubelet[2897]: goroutine 1 [running, locked to thread]:
May 28 20:39:40 master kubelet[2897]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:39:40 master kubelet[2897]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x16c13e98 sp=0x16c13e8c pc=0x3efa4
May 28 20:39:40 master kubelet[2897]: runtime.sigpanic()
May 28 20:39:40 master kubelet[2897]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x16c13ebc sp=0x16c13e98 pc=0x5517c
May 28 20:39:40 master kubelet[2897]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:39:40 master kubelet[2897]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
May 28 20:39:40 master kubelet[2897]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x165b9168, 0x20945b4, 0x2a8fbcf, 0xb, 0x16471860)
May 28 20:39:40 master kubelet[2897]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x16c13f58 sp=0x16c13ec0 pc=0x206c8d8
May 28 20:39:40 master kubelet[2897]: goroutine 20 [chan receive]:
May 28 20:39:40 master kubelet[2897]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:39:40 master kubelet[2897]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:39:40 master kubelet[2897]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:39:40 master kubelet[2897]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:39:40 master kubelet[2897]: goroutine 70 [syscall]:
May 28 20:39:40 master kubelet[2897]: os/signal.signal_recv(0x2bd146c)
May 28 20:39:40 master kubelet[2897]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:39:40 master kubelet[2897]: os/signal.loop()
May 28 20:39:40 master kubelet[2897]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:39:40 master kubelet[2897]: created by os/signal.init.0
May 28 20:39:40 master kubelet[2897]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:39:40 master systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:39:40 master systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:39:40 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:39:40 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 28 20:39:40 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:39:40 master kubelet[2921]: unexpected fault address 0x16637230
May 28 20:39:40 master kubelet[2921]: fatal error: fault
May 28 20:39:40 master kubelet[2921]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x16637230 pc=0x16637230]
May 28 20:39:40 master kubelet[2921]: goroutine 1 [running, locked to thread]:
May 28 20:39:40 master kubelet[2921]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:39:40 master kubelet[2921]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x16917e98 sp=0x16917e8c pc=0x3efa4
May 28 20:39:40 master kubelet[2921]: runtime.sigpanic()
May 28 20:39:40 master kubelet[2921]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x16917ebc sp=0x16917e98 pc=0x5517c
May 28 20:39:40 master kubelet[2921]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:39:40 master kubelet[2921]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
May 28 20:39:40 master kubelet[2921]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x164f6d98, 0x20945b4, 0x2a8fbcf, 0xb, 0x1638b7d0)
May 28 20:39:40 master kubelet[2921]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x16917f58 sp=0x16917ec0 pc=0x206c8d8
May 28 20:39:40 master kubelet[2921]: goroutine 5 [chan receive]:
May 28 20:39:40 master kubelet[2921]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:39:40 master kubelet[2921]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:39:40 master kubelet[2921]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:39:40 master kubelet[2921]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:39:40 master kubelet[2921]: goroutine 40 [syscall]:
May 28 20:39:40 master kubelet[2921]: os/signal.signal_recv(0x2bd146c)
May 28 20:39:40 master kubelet[2921]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:39:40 master kubelet[2921]: os/signal.loop()
May 28 20:39:40 master kubelet[2921]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:39:40 master kubelet[2921]: created by os/signal.init.0
May 28 20:39:40 master kubelet[2921]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:39:40 master systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:39:40 master systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:39:40 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:39:50 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 28 20:39:50 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 28 20:39:50 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:39:51 master kubelet[2950]: unexpected fault address 0x163cb380
May 28 20:39:51 master kubelet[2950]: fatal error: fault
May 28 20:39:51 master kubelet[2950]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x163cb380 pc=0x163cb380]
May 28 20:39:51 master kubelet[2950]: goroutine 1 [running, locked to thread]:
May 28 20:39:51 master kubelet[2950]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:39:51 master kubelet[2950]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x16833e98 sp=0x16833e8c pc=0x3efa4
May 28 20:39:51 master kubelet[2950]: runtime.sigpanic()
May 28 20:39:51 master kubelet[2950]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x16833ebc sp=0x16833e98 pc=0x5517c
May 28 20:39:51 master kubelet[2950]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:39:51 master kubelet[2950]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
May 28 20:39:51 master kubelet[2950]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x1629cf88, 0x20945b4, 0x2a8fbcf, 0xb, 0x1646e930)
May 28 20:39:51 master kubelet[2950]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x16833f58 sp=0x16833ec0 pc=0x206c8d8
May 28 20:39:51 master kubelet[2950]: goroutine 5 [chan receive]:
May 28 20:39:51 master kubelet[2950]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:39:51 master kubelet[2950]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:39:51 master kubelet[2950]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:39:51 master kubelet[2950]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:39:51 master kubelet[2950]: goroutine 79 [syscall]:
May 28 20:39:51 master kubelet[2950]: os/signal.signal_recv(0x2bd146c)
May 28 20:39:51 master kubelet[2950]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:39:51 master kubelet[2950]: os/signal.loop()
May 28 20:39:51 master kubelet[2950]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:39:51 master kubelet[2950]: created by os/signal.init.0
May 28 20:39:51 master kubelet[2950]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:39:51 master systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:39:51 master systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:39:51 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:40:01 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 28 20:40:01 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 28 20:40:01 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:40:01 master kubelet[2959]: unexpected fault address 0x165752f0
May 28 20:40:01 master kubelet[2959]: fatal error: fault
May 28 20:40:01 master kubelet[2959]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x165752f0 pc=0x165752f0]
May 28 20:40:01 master kubelet[2959]: goroutine 1 [running, locked to thread]:
May 28 20:40:01 master kubelet[2959]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:40:01 master kubelet[2959]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x16af7e98 sp=0x16af7e8c pc=0x3efa4
May 28 20:40:01 master kubelet[2959]: runtime.sigpanic()
May 28 20:40:01 master kubelet[2959]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x16af7ebc sp=0x16af7e98 pc=0x5517c
May 28 20:40:01 master kubelet[2959]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:40:01 master kubelet[2959]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
May 28 20:40:01 master kubelet[2959]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x1679f6f8, 0x20945b4, 0x2a8fbcf, 0xb, 0x16976ed0)
May 28 20:40:01 master kubelet[2959]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x16af7f58 sp=0x16af7ec0 pc=0x206c8d8
May 28 20:40:01 master kubelet[2959]: goroutine 5 [chan receive]:
May 28 20:40:01 master kubelet[2959]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:40:01 master kubelet[2959]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:40:01 master kubelet[2959]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:40:01 master kubelet[2959]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:40:01 master kubelet[2959]: goroutine 70 [syscall]:
May 28 20:40:01 master kubelet[2959]: os/signal.signal_recv(0x2bd146c)
May 28 20:40:01 master kubelet[2959]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:40:01 master kubelet[2959]: os/signal.loop()
May 28 20:40:01 master kubelet[2959]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:40:01 master kubelet[2959]: created by os/signal.init.0
May 28 20:40:01 master kubelet[2959]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:40:01 master systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:40:01 master systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:40:01 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:40:11 master systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
May 28 20:40:11 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 28 20:40:11 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:40:12 master kubelet[2969]: unexpected fault address 0x14fa7d10
May 28 20:40:12 master kubelet[2969]: fatal error: fault
May 28 20:40:12 master kubelet[2969]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x14fa7d10 pc=0x14fa7d10]
May 28 20:40:12 master kubelet[2969]: goroutine 1 [running, locked to thread]:
May 28 20:40:12 master kubelet[2969]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:40:12 master kubelet[2969]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x15517e98 sp=0x15517e8c pc=0x3efa4
May 28 20:40:12 master kubelet[2969]: runtime.sigpanic()
May 28 20:40:12 master kubelet[2969]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x15517ebc sp=0x15517e98 pc=0x5517c
May 28 20:40:12 master kubelet[2969]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:40:12 master kubelet[2969]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:68
May 28 20:40:12 master kubelet[2969]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x15067030, 0x20945b4, 0x2a8fbcf, 0xb, 0x14f8d470)
May 28 20:40:12 master kubelet[2969]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go:41 +0x90 fp=0x15517f58 sp=0x15517ec0 pc=0x206c8d8
May 28 20:40:12 master kubelet[2969]: goroutine 5 [chan receive]:
May 28 20:40:12 master kubelet[2969]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:40:12 master kubelet[2969]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:40:12 master kubelet[2969]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:40:12 master kubelet[2969]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:40:12 master kubelet[2969]: goroutine 55 [syscall]:
May 28 20:40:12 master kubelet[2969]: os/signal.signal_recv(0x2bd146c)
May 28 20:40:12 master kubelet[2969]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:40:12 master kubelet[2969]: os/signal.loop()
May 28 20:40:12 master kubelet[2969]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:40:12 master kubelet[2969]: created by os/signal.init.0
May 28 20:40:12 master kubelet[2969]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:40:12 master systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:40:12 master systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:40:12 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:40:16 master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
On a node:
pi@node1:~ $ sudo journalctl -u kubelet
-- Logs begin at Mon 2018-05-28 20:35:02 CEST, end at Mon 2018-05-28 21:21:48 CEST. --
May 28 20:40:13 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:40:14 node1 kubelet[2887]: unexpected fault address 0x1653ec00
May 28 20:40:14 node1 kubelet[2887]: fatal error: fault
May 28 20:40:14 node1 kubelet[2887]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x1653ec00 pc=0x1653ec00]
May 28 20:40:14 node1 kubelet[2887]: goroutine 1 [running, locked to thread]:
May 28 20:40:14 node1 kubelet[2887]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:40:14 node1 kubelet[2887]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x16b09e98 sp=0x16b09e8c pc=0x3efa4
May 28 20:40:14 node1 kubelet[2887]: runtime.sigpanic()
May 28 20:40:14 node1 kubelet[2887]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x16b09ebc sp=0x16b09e98 pc=0x5517c
May 28 20:40:14 node1 kubelet[2887]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:40:14 node1 kubelet[2887]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go
May 28 20:40:14 node1 kubelet[2887]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x165bc890, 0x20945b4, 0x2a8fbcf, 0xb, 0x163714d0)
May 28 20:40:14 node1 kubelet[2887]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go
May 28 20:40:14 node1 kubelet[2887]: goroutine 35 [chan receive]:
May 28 20:40:14 node1 kubelet[2887]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:40:14 node1 kubelet[2887]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:40:14 node1 kubelet[2887]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:40:14 node1 kubelet[2887]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:40:14 node1 kubelet[2887]: goroutine 20 [syscall]:
May 28 20:40:14 node1 kubelet[2887]: os/signal.signal_recv(0x2bd146c)
May 28 20:40:14 node1 kubelet[2887]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
May 28 20:40:14 node1 kubelet[2887]: os/signal.loop()
May 28 20:40:14 node1 kubelet[2887]: /usr/local/go/src/os/signal/signal_unix.go:22 +0x14
May 28 20:40:14 node1 kubelet[2887]: created by os/signal.init.0
May 28 20:40:14 node1 kubelet[2887]: /usr/local/go/src/os/signal/signal_unix.go:28 +0x30
May 28 20:40:14 node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
May 28 20:40:14 node1 systemd[1]: kubelet.service: Unit entered failed state.
May 28 20:40:14 node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 28 20:40:14 node1 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 28 20:40:14 node1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 28 20:40:14 node1 kubelet[2909]: unexpected fault address 0x15796f30
May 28 20:40:14 node1 kubelet[2909]: fatal error: fault
May 28 20:40:14 node1 kubelet[2909]: [signal SIGSEGV: segmentation violation code=0x2 addr=0x15796f30 pc=0x15796f30]
May 28 20:40:14 node1 kubelet[2909]: goroutine 1 [running, locked to thread]:
May 28 20:40:14 node1 kubelet[2909]: runtime.throw(0x2a84a9e, 0x5)
May 28 20:40:14 node1 kubelet[2909]: /usr/local/go/src/runtime/panic.go:605 +0x70 fp=0x15f25e98 sp=0x15f25e8c pc=0x3efa4
May 28 20:40:14 node1 kubelet[2909]: runtime.sigpanic()
May 28 20:40:14 node1 kubelet[2909]: /usr/local/go/src/runtime/signal_unix.go:374 +0x1cc fp=0x15f25ebc sp=0x15f25e98 pc=0x5517c
May 28 20:40:14 node1 kubelet[2909]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.SemVer.Empty(...)
May 28 20:40:14 node1 kubelet[2909]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go
May 28 20:40:14 node1 kubelet[2909]: k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types.NewSemVer(0x15b7ac38, 0x20945b4, 0x2a8fbcf, 0xb, 0x1577a8a0)
May 28 20:40:14 node1 kubelet[2909]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/appc/spec/schema/types/semver.go
May 28 20:40:14 node1 kubelet[2909]: goroutine 20 [chan receive]:
May 28 20:40:14 node1 kubelet[2909]: k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x4551f48)
May 28 20:40:14 node1 kubelet[2909]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:879 +0x70
May 28 20:40:14 node1 kubelet[2909]: created by k8s.io/kubernetes/vendor/github.com/golang/glog.init.0
May 28 20:40:14 node1 kubelet[2909]: /workspace/anago-v1.10.3-beta.0.74+2bba0127d85d5a/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:410 +0x1a0
May 28 20:40:14 node1 kubelet[2909]: goroutine 63 [syscall]:
May 28 20:40:14 node1 kubelet[2909]: os/signal.signal_recv(0x2bd146c)
May 28 20:40:14 node1 kubelet[2909]: /usr/local/go/src/runtime/sigqueue.go:131 +0x134
pi@ansible-node ~/git/rak8s (master) $ ansible all -m shell -a 'cat /etc/os-release'
node1 | SUCCESS | rc=0 >>
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
node2 | SUCCESS | rc=0 >>
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
master | SUCCESS | rc=0 >>
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
I ran the cluster.yml playbook on 3 Raspberry Pis with fresh Raspbian Lite images (18-04-2018). I prepared the nodes in exactly the same way as I have many times over the past few months, but this issue is new to me.
The Ansible playbook exited with a couple of messages. Only one is really fatal, the same one jmreicha got:
[ERROR KubeletVersion]: couldn't get kubelet version: exit status 2
The other messages are just warnings. The one jmreicha got is a harmless deprecation warning: instead of result|succeeded, use result is succeeded. This feature will be removed in version 2.9. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
There are no images pulled on the master or the nodes:
pi@ansible-node ~/git/rak8s (master) $ ansible all -m shell -a 'sudo docker images'
node1 | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
master | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
node2 | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
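As an aside, the deprecation warning mentioned above comes from Ansible's filter-style tests, and silencing it is a one-word change in the playbook. A sketch with a hypothetical task (the task name and variable are assumptions, not rak8s's actual code):

```yaml
# Deprecated filter-style test: emits the warning, removed in Ansible 2.9.
- name: Report init result (hypothetical task)
  debug:
    msg: "kubeadm init succeeded"
  when: init_result|succeeded

# Preferred 'is' test syntax, supported since Ansible 2.5.
- name: Report init result (hypothetical task)
  debug:
    msg: "kubeadm init succeeded"
  when: init_result is succeeded
```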
I expect that the issue is caused by an upstream change. This Kubernetes cluster is built from several components (Docker, kubeadm, kubelet, kubectl), and over the last few months the versions of these components have changed frequently:
$ sudo apt-cache policy docker-ce
docker-ce:
Installed: 18.05.0~ce~3-0~raspbian
Candidate: 18.05.0~ce~3-0~raspbian
Version table:
*** 18.05.0~ce~3-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
100 /var/lib/dpkg/status
18.04.0~ce~3-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.03.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.03.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.02.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.01.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.12.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.12.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.11.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.10.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.09.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.09.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
$ sudo apt-cache policy kubeadm
kubeadm:
Installed: 1.10.3-00
Candidate: 1.10.3-00
Version table:
*** 1.10.3-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
100 /var/lib/dpkg/status
1.10.2-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.1-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.8-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.7-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.6-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.5-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
$ sudo apt-cache policy kubelet
kubelet:
Installed: 1.10.3-00
Candidate: 1.10.3-00
Version table:
*** 1.10.3-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
100 /var/lib/dpkg/status
1.10.2-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.1-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.8-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.7-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.6-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.5-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
$ sudo apt-cache policy kubectl
kubectl:
Installed: 1.10.3-00
Candidate: 1.10.3-00
Version table:
*** 1.10.3-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
100 /var/lib/dpkg/status
1.10.2-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.1-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.10.0-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.8-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.7-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.6-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
1.9.5-00 500
500 http://apt.kubernetes.io kubernetes-xenial/main armhf Packages
If we want to guarantee a stable Kubernetes deployment with rak8s, we should at least pin the versions of all components to a combination that has been proven to work. I will prepare a pull request that pins the components to specific versions.
FWIW I have it working with Kubernetes 1.10.2 and Docker 18.04.
Quote jmreicha:
FWIW I have it working with Kubernetes 1.10.2 and Docker 18.04.
jmreicha, thanks for the update.
I have pinned the versions of the kubelet, kubeadm and kubectl packages in the playbook to 1.10.2, see below:
roles/kubeadm/tasks/main.yml
- name: Install k8s Y'all
  apt:
    name: "{{ item }}=1.10.2-00"
    state: present
    force: yes
  with_items:
    - kubelet
    - kubeadm
    - kubectl
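To keep a later `apt-get upgrade` from silently bumping the pinned packages again, the versions could additionally be held. A sketch using Ansible's `dpkg_selections` module (an assumption on my side, not part of the current playbook):

```yaml
# Hold the pinned packages so routine upgrades cannot replace them
- name: Hold k8s package versions
  dpkg_selections:
    name: "{{ item }}"
    selection: hold
  with_items:
    - kubelet
    - kubeadm
    - kubectl
```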
roles/master/tasks/main.yml
- name: Initialize Master
  shell: kubeadm init --apiserver-advertise-address={{ ansible_default_ipv4.address }} --token={{ token }} --kubernetes-version=v1.10.2
  register: kubeadm_init
  when: kubeadm_reset|succeeded
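Since `kubeadm init` on a Raspberry Pi spends much of its time pulling the control-plane images, they could be pre-pulled in a separate task before the init step. A sketch (the image names are the ones kubeadm reports for v1.10.2 in the error output further down; this task is not part of the playbook):

```yaml
# Pre-pull the ARM control-plane images so kubeadm init does not
# time out waiting for slow pulls on the Pi
- name: Pre-pull control plane images
  command: "docker pull {{ item }}"
  with_items:
    - k8s.gcr.io/kube-apiserver-arm:v1.10.2
    - k8s.gcr.io/kube-controller-manager-arm:v1.10.2
    - k8s.gcr.io/kube-scheduler-arm:v1.10.2
    - k8s.gcr.io/etcd-arm:3.1.12
    - k8s.gcr.io/pause-arm:3.1
```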
Then I removed the existing Kubernetes installation and purged the `docker-ce`, `kubelet`, `kubeadm` and `kubectl` packages:
pi@ted1090-5 ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo kubeadm reset'
master | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
node2 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
node1 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
pi@ted1090-5 ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo apt-get purge kubeadm kubectl kubelet docker-ce -y'
master | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 433 MB disk space will be freed.
(Reading database ... 38610 files and directories currently installed.)
Removing docker-ce (18.05.0~ce~3-0~raspbian) ...
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
Removing kubeadm (1.10.3-00) ...
Removing kubectl (1.10.3-00) ...
Removing kubelet (1.10.3-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.05.0~ce~3-0~raspbian) ...
Purging configuration files for kubelet (1.10.3-00) ...
Purging configuration files for kubeadm (1.10.3-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
node2 | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 433 MB disk space will be freed.
(Reading database ... 38610 files and directories currently installed.)
Removing docker-ce (18.05.0~ce~3-0~raspbian) ...
Removing kubeadm (1.10.3-00) ...
Removing kubectl (1.10.3-00) ...
Removing kubelet (1.10.3-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.05.0~ce~3-0~raspbian) ...
Purging configuration files for kubelet (1.10.3-00) ...
Purging configuration files for kubeadm (1.10.3-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
node1 | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 433 MB disk space will be freed.
(Reading database ... 38610 files and directories currently installed.)
Removing docker-ce (18.05.0~ce~3-0~raspbian) ...
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
Removing kubeadm (1.10.3-00) ...
Removing kubectl (1.10.3-00) ...
Removing kubelet (1.10.3-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.05.0~ce~3-0~raspbian) ...
Purging configuration files for kubelet (1.10.3-00) ...
Purging configuration files for kubeadm (1.10.3-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
This leaves us with nearly fresh nodes.
After rebooting the nodes, I re-ran the cluster.yml playbook:
pi@ansible-node ~/git/rak8s (testing) $ ansible-playbook cluster.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [node2]
ok: [node1]
ok: [master]
TASK [common : Enabling cgroup options at boot] ********************************
ok: [master]
ok: [node1]
ok: [node2]
TASK [common : apt-get update] *************************************************
ok: [node1]
ok: [node2]
ok: [master]
TASK [common : apt-get upgrade] ************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Reboot] *********************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [common : Wait for Reboot] ************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [kubeadm : Disable Swap] **************************************************
changed: [node2]
changed: [master]
changed: [node1]
TASK [kubeadm : Determine if docker is installed] ******************************
ok: [node1]
ok: [master]
ok: [node2]
TASK [kubeadm : Run Docker Install Script] *************************************
changed: [master]
changed: [node2]
changed: [node1]
TASK [kubeadm : Pass bridged IPv4 traffic to iptables' chains] *****************
ok: [master]
ok: [node1]
ok: [node2]
TASK [kubeadm : Install apt-transport-https] ***********************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [kubeadm : Add Google Cloud Repo Key] *************************************
changed: [master]
[WARNING]: Consider using get_url or uri module rather than running curl
changed: [node1]
changed: [node2]
TASK [kubeadm : Add Kubernetes to Available apt Sources] ***********************
ok: [master]
ok: [node2]
ok: [node1]
TASK [kubeadm : apt-get update] ************************************************
changed: [node1]
changed: [node2]
changed: [master]
TASK [kubeadm : Install k8s Y'all] *********************************************
changed: [master] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node1] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node2] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
PLAY [master] ******************************************************************
TASK [master : Reset Kubernetes Master] ****************************************
changed: [master]
TASK [master : Initialize Master] **********************************************
fatal: [master]: FAILED! => {"changed": true, "cmd": "kubeadm init --apiserver-advertise-address=192.168.11.210 --token=udy29x.ugyyk3tumg27atmr --kubernetes-version=v1.10.2", "delta": "0:31:20.396269", "end": "2018-05-29 07:31:23.306641", "failed": true, "rc": 1, "start": "2018-05-29 07:00:02.910372", "stderr": "\t
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.05.0-ce. Max validated version: 17.03\n\t
[WARNING FileExisting-crictl]: crictl not found in system path\nSuggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl\ncouldn't initialize a Kubernetes cluster", "stdout": "[init] Using Kubernetes version: v1.10.2\n
[init] Using Authorization modes: [Node RBAC]\n[preflight] Running pre-flight checks.\n
[preflight] Starting the kubelet service\n
[certificates] Generated ca certificate and key.\n
[certificates] Generated apiserver certificate and key.\n[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.210]\n
[certificates] Generated apiserver-kubelet-client certificate and key.\n
[certificates] Generated etcd/ca certificate and key.\n
[certificates] Generated etcd/server certificate and key.\n
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]\n
[certificates] Generated etcd/peer certificate and key.\n[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.11.210]\n
[certificates] Generated etcd/healthcheck-client certificate and key.\n
[certificates] Generated apiserver-etcd-client certificate and key.\n
[certificates] Generated sa key and public key.\n
[certificates] Generated front-proxy-ca certificate and key.\n
[certificates] Generated front-proxy-client certificate and key.\n
[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"\n
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/admin.conf\"\n
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"\n
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/controller-manager.conf\"\n
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/scheduler.conf\"\n
[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"\n
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"\n
[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"\n
[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"\n
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".\n[init] This might take a minute or longer if the control plane images have to be pulled.\n\nUnfortunately, an error has occurred:\n\ttimed out waiting for the condition\n\nThis error is likely caused by:\n\t- The kubelet is not running\n\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)\n\t- Either there is no internet connection, or imagePullPolicy is set to \"Never\",\n\t so the kubelet cannot pull or find the following control plane images:\n\t\t- k8s.gcr.io/kube-apiserver-arm:v1.10.2\n\t\t- k8s.gcr.io/kube-controller-manager-arm:v1.10.2\n\t\t- k8s.gcr.io/kube-scheduler-arm:v1.10.2\n\t\t- k8s.gcr.io/etcd-arm:3.1.12 (only if no external etcd endpoints are configured)\n\nIf you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:\n\t- 'systemctl status kubelet'\n\t- 'journalctl -xeu kubelet'", "stdout_lines": ["
[init] Using Kubernetes version: v1.10.2", "[init] Using Authorization modes: [Node RBAC]", "
[preflight] Running pre-flight checks.", "[preflight] Starting the kubelet service", "
[certificates] Generated ca certificate and key.", "
[certificates] Generated apiserver certificate and key.", "
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.210]", "[certificates] Generated apiserver-kubelet-client certificate and key.", "
[certificates] Generated etcd/ca certificate and key.", "[
certificates] Generated etcd/server certificate and key.", "[
certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]", "[certificates] Generated etcd/peer certificate and key.", "[certificates] etcd/peer serving cert is signed for DNS names [master] and IPs [192.168.11.210]", "[certificates] Generated etcd/healthcheck-client certificate and key.", "[certificates] Generated apiserver-etcd-client certificate and key.", "[certificates] Generated sa key and public key.", "[certificates] Generated front-proxy-ca certificate and key.", "[certificates] Generated front-proxy-client certificate and key.", "[certificates] Valid certificates and keys now exist in \"/etc/kubernetes/pki\"", "
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/admin.conf\"", "
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/kubelet.conf\"", "
[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/controller-manager.conf\"", "[kubeconfig] Wrote KubeConfig file to disk: \"/etc/kubernetes/scheduler.conf\"", "[controlplane] Wrote Static Pod manifest for component kube-apiserver to \"/etc/kubernetes/manifests/kube-apiserver.yaml\"", "[controlplane] Wrote Static Pod manifest for component kube-controller-manager to \"/etc/kubernetes/manifests/kube-controller-manager.yaml\"", "
[controlplane] Wrote Static Pod manifest for component kube-scheduler to \"/etc/kubernetes/manifests/kube-scheduler.yaml\"", "
[etcd] Wrote Static Pod manifest for a local etcd instance to \"/etc/kubernetes/manifests/etcd.yaml\"", "[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory \"/etc/kubernetes/manifests\".",
"[init] This might take a minute or longer if the control plane images have to be pulled.", "", "Unfortunately, an error has occurred:", "\ttimed out waiting for the condition", "", "This error is likely caused by:", "\t- The kubelet is not running", "\t- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)", "\t- Either there is no internet connection, or imagePullPolicy is set to \"Never\",", "\t so the kubelet cannot pull or find the following control plane images:", "\t\t- k8s.gcr.io/kube-apiserver-arm:v1.10.2", "\t\t- k8s.gcr.io/kube-controller-manager-arm:v1.10.2", "\t\t- k8s.gcr.io/kube-scheduler-arm:v1.10.2", "\t\t- k8s.gcr.io/etcd-arm:3.1.12 (only if no external etcd endpoints are configured)", "", "If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:", "\t- 'systemctl status kubelet'", "\t- 'journalctl -xeu kubelet'"], "warnings": []}
PLAY RECAP *********************************************************************
master : ok=14 changed=6 unreachable=0 failed=1
node1 : ok=13 changed=5 unreachable=0 failed=0
node2 : ok=13 changed=5 unreachable=0 failed=0
This time the TASK [master : Initialize Master] step takes ages; after 30 minutes it is still running. At least kubeadm has now pulled images:
pi@master:~ $ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f331a5aaebe9 k8s.gcr.io/pause-arm:3.1 "/pause" 32 minutes ago Up 32 minutes k8s_POD_kube-scheduler-master_kube-system_fee77339ba5d51dbd443ec0007802495_0
3073e1cecc4e k8s.gcr.io/pause-arm:3.1 "/pause" 32 minutes ago Up 32 minutes k8s_POD_kube-controller-manager-master_kube-system_c0f627fa7d17dfec2740d80c6ffd4bd1_0
6a5d9ffa01d6 k8s.gcr.io/pause-arm:3.1 "/pause" 32 minutes ago Up 32 minutes k8s_POD_kube-apiserver-master_kube-system_bcc33f6e116b4cd918c65d622f5662ea_0
035278de9541 k8s.gcr.io/pause-arm:3.1 "/pause" 32 minutes ago Up 32 minutes k8s_POD_etcd-master_kube-system_e1e2a810fb68e16f47b9242236827e43_0
pi@master:~ $ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-scheduler-arm v1.10.2 816c40ff51c0 4 weeks ago 43.6MB
k8s.gcr.io/kube-apiserver-arm v1.10.2 c68f5521f86b 4 weeks ago 206MB
k8s.gcr.io/kube-controller-manager-arm v1.10.2 f67c023adb1b 4 weeks ago 129MB
k8s.gcr.io/etcd-arm 3.1.12 88c32b5960ff 2 months ago 178MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
The kubelet log shows:
May 29 07:28:38 master kubelet[4422]: I0529 07:28:38.268370 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:38 master kubelet[4422]: I0529 07:28:38.583487 4422 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-arm:v1.10.2 Command:[kube-apiserver --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --insecure-port=0 --advertise-address=192.168.11.210 --client-ca-file=/etc/kubernetes/pki/ca.crt --requestheader-allowed-names=front-proxy-client --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --requestheader-extra-headers-prefix=X-Remote-Extra- --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --allow-privileged=true --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>}] VolumeDevices:[] 
LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.
May 29 07:28:38 master kubelet[4422]: 11.210,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 07:28:38 master kubelet[4422]: I0529 07:28:38.584088 4422 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)"
May 29 07:28:38 master kubelet[4422]: W0529 07:28:38.682719 4422 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 07:28:38 master kubelet[4422]: E0529 07:28:38.683210 4422 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 07:28:38 master kubelet[4422]: W0529 07:28:38.999107 4422 docker_container.go:213] Cannot create symbolic link because container log file doesn't exist!
May 29 07:28:39 master kubelet[4422]: E0529 07:28:38.999655 4422 remote_runtime.go:209] StartContainer "deaf03ef62b2e0e46a96b58fbc410833b46609d52b0a8e7b7d215a9ae490755c" from runtime service failed: rpc error: code = Unknown desc = failed to start container "deaf03ef62b2e0e46a96b58fbc410833b46609d52b0a8e7b7d215a9ae490755c": Error response from daemon: linux mounts: Could not find source mount of /etc/kubernetes/pki
May 29 07:28:39 master kubelet[4422]: E0529 07:28:38.999914 4422 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "deaf03ef62b2e0e46a96b58fbc410833b46609d52b0a8e7b7d215a9ae490755c": Error response from daemon: linux mounts: Could not find source mount of /etc/kubernetes/pki
May 29 07:28:39 master kubelet[4422]: E0529 07:28:39.000032 4422 pod_workers.go:186] Error syncing pod bcc33f6e116b4cd918c65d622f5662ea ("kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)"), skipping: failed to "StartContainer" for "kube-apiserver" with RunContainerError: "failed to start container \"deaf03ef62b2e0e46a96b58fbc410833b46609d52b0a8e7b7d215a9ae490755c\": Error response from daemon: linux mounts: Could not find source mount of /etc/kubernetes/pki"
May 29 07:28:39 master kubelet[4422]: E0529 07:28:39.105985 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:39 master kubelet[4422]: E0529 07:28:39.162808 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:39 master kubelet[4422]: E0529 07:28:39.208357 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:40 master kubelet[4422]: E0529 07:28:40.107983 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:40 master kubelet[4422]: E0529 07:28:40.164713 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:40 master kubelet[4422]: E0529 07:28:40.210365 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:40 master kubelet[4422]: I0529 07:28:40.268180 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:40 master kubelet[4422]: I0529 07:28:40.582560 4422 kuberuntime_manager.go:513] Container {Name:kube-controller-manager Image:k8s.gcr.io/kube-controller-manager-arm:v1.10.2 Command:[kube-controller-manager --use-service-account-credentials=true --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --address=127.0.0.1 --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:kubeconfig ReadOnly:true MountPath:/etc/kubernetes/controller-manager.conf SubPath: MountPropagation:<nil>} {Name:flexvolume-dir ReadOnly:false MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10252,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 07:28:40 master kubelet[4422]: I0529 07:28:40.583110 4422 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-master_kube-system(c0f627fa7d17dfec2740d80c6ffd4bd1)"
May 29 07:28:40 master kubelet[4422]: I0529 07:28:40.585324 4422 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-master_kube-system(c0f627fa7d17dfec2740d80c6ffd4bd1)
May 29 07:28:40 master kubelet[4422]: E0529 07:28:40.585608 4422 pod_workers.go:186] Error syncing pod c0f627fa7d17dfec2740d80c6ffd4bd1 ("kube-controller-manager-master_kube-system(c0f627fa7d17dfec2740d80c6ffd4bd1)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-master_kube-system(c0f627fa7d17dfec2740d80c6ffd4bd1)"
May 29 07:28:41 master kubelet[4422]: E0529 07:28:41.112292 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:41 master kubelet[4422]: E0529 07:28:41.166833 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:41 master kubelet[4422]: E0529 07:28:41.212507 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:41 master kubelet[4422]: I0529 07:28:41.661075 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:41 master kubelet[4422]: I0529 07:28:41.672758 4422 kubelet_node_status.go:82] Attempting to register node master
May 29 07:28:41 master kubelet[4422]: E0529 07:28:41.674599 4422 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:42 master kubelet[4422]: E0529 07:28:42.114346 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:42 master kubelet[4422]: E0529 07:28:42.168839 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:42 master kubelet[4422]: E0529 07:28:42.218301 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:43 master kubelet[4422]: E0529 07:28:43.116592 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:43 master kubelet[4422]: E0529 07:28:43.171236 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:43 master kubelet[4422]: E0529 07:28:43.221497 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:43 master kubelet[4422]: I0529 07:28:43.268251 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:43 master kubelet[4422]: I0529 07:28:43.585021 4422 kuberuntime_manager.go:513] Container {Name:kube-scheduler Image:k8s.gcr.io/kube-scheduler-arm:v1.10.2 Command:[kube-scheduler --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:100 scale:-3} d:{Dec:<nil>} s:100m Format:DecimalSI}]} VolumeMounts:[{Name:kubeconfig ReadOnly:true MountPath:/etc/kubernetes/scheduler.conf SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10251,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 07:28:43 master kubelet[4422]: I0529 07:28:43.585577 4422 kuberuntime_manager.go:757] checking backoff for container "kube-scheduler" in pod "kube-scheduler-master_kube-system(fee77339ba5d51dbd443ec0007802495)"
May 29 07:28:43 master kubelet[4422]: I0529 07:28:43.586357 4422 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-master_kube-system(fee77339ba5d51dbd443ec0007802495)
May 29 07:28:43 master kubelet[4422]: E0529 07:28:43.586598 4422 pod_workers.go:186] Error syncing pod fee77339ba5d51dbd443ec0007802495 ("kube-scheduler-master_kube-system(fee77339ba5d51dbd443ec0007802495)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-master_kube-system(fee77339ba5d51dbd443ec0007802495)"
May 29 07:28:43 master kubelet[4422]: W0529 07:28:43.687510 4422 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 07:28:43 master kubelet[4422]: E0529 07:28:43.688820 4422 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 07:28:44 master kubelet[4422]: E0529 07:28:44.118651 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:44 master kubelet[4422]: E0529 07:28:44.173243 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:44 master kubelet[4422]: E0529 07:28:44.223851 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:45 master kubelet[4422]: E0529 07:28:45.120637 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:45 master kubelet[4422]: E0529 07:28:45.175223 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:45 master kubelet[4422]: E0529 07:28:45.225929 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:45 master kubelet[4422]: I0529 07:28:45.617483 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:45 master kubelet[4422]: I0529 07:28:45.617586 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:45 master kubelet[4422]: W0529 07:28:45.633523 4422 pod_container_deletor.go:77] Container "bcc01aeea930dbc773a164b1e6e11e35b4486bee7c870a69d575c278cbde49a2" not found in pod's containers
May 29 07:28:45 master kubelet[4422]: W0529 07:28:45.636308 4422 status_manager.go:461] Failed to get status for pod "kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:45 master kubelet[4422]: E0529 07:28:45.852656 4422 event.go:209] Unable to write event: 'Patch https://192.168.11.210:6443/api/v1/namespaces/default/events/master.153304e3a45b638a: dial tcp 192.168.11.210:6443: getsockopt: connection refused' (may retry after sleeping)
May 29 07:28:45 master kubelet[4422]: I0529 07:28:45.935966 4422 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-arm:v1.10.2 Command:[kube-apiserver --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --insecure-port=0 --advertise-address=192.168.11.210 --client-ca-file=/etc/kubernetes/pki/ca.crt --requestheader-allowed-names=front-proxy-client --secure-port=6443 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --requestheader-extra-headers-prefix=X-Remote-Extra- --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/sa.pub --tls-private-key-file=/etc/kubernetes/pki/apiserver.key --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --allow-privileged=true --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:192.168.11.210,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 07:28:45 master kubelet[4422]: I0529 07:28:45.938308 4422 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)"
May 29 07:28:45 master kubelet[4422]: I0529 07:28:45.939483 4422 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)
May 29 07:28:45 master kubelet[4422]: E0529 07:28:45.939777 4422 pod_workers.go:186] Error syncing pod bcc33f6e116b4cd918c65d622f5662ea ("kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-master_kube-system(bcc33f6e116b4cd918c65d622f5662ea)"
May 29 07:28:46 master kubelet[4422]: E0529 07:28:46.122660 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:46 master kubelet[4422]: E0529 07:28:46.177201 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:46 master kubelet[4422]: E0529 07:28:46.227945 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:46 master kubelet[4422]: E0529 07:28:46.853642 4422 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "master" not found
May 29 07:28:47 master kubelet[4422]: E0529 07:28:47.126719 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:47 master kubelet[4422]: E0529 07:28:47.187440 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:47 master kubelet[4422]: E0529 07:28:47.230503 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:47 master kubelet[4422]: I0529 07:28:47.268348 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:47 master kubelet[4422]: I0529 07:28:47.583427 4422 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-arm:3.1.12 Command:[etcd --peer-client-cert-auth=true --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --key-file=/etc/kubernetes/pki/etcd/server.key --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --listen-client-urls=https://127.0.0.1:2379 --advertise-client-urls=https://127.0.0.1:2379 --client-cert-auth=true --data-dir=/var/lib/etcd --cert-file=/etc/kubernetes/pki/etcd/server.crt --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 07:28:47 master kubelet[4422]: I0529 07:28:47.585030 4422 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-master_kube-system(e1e2a810fb68e16f47b9242236827e43)"
May 29 07:28:47 master kubelet[4422]: I0529 07:28:47.585863 4422 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=etcd pod=etcd-master_kube-system(e1e2a810fb68e16f47b9242236827e43)
May 29 07:28:47 master kubelet[4422]: E0529 07:28:47.586173 4422 pod_workers.go:186] Error syncing pod e1e2a810fb68e16f47b9242236827e43 ("etcd-master_kube-system(e1e2a810fb68e16f47b9242236827e43)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=etcd pod=etcd-master_kube-system(e1e2a810fb68e16f47b9242236827e43)"
May 29 07:28:48 master kubelet[4422]: E0529 07:28:48.130636 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:48 master kubelet[4422]: E0529 07:28:48.190088 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:48 master kubelet[4422]: E0529 07:28:48.232691 4422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:48 master kubelet[4422]: I0529 07:28:48.675438 4422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 07:28:48 master kubelet[4422]: I0529 07:28:48.688337 4422 kubelet_node_status.go:82] Attempting to register node master
May 29 07:28:48 master kubelet[4422]: E0529 07:28:48.690607 4422 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 07:28:48 master kubelet[4422]: W0529 07:28:48.693928 4422 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 07:28:48 master kubelet[4422]: E0529 07:28:48.696529 4422 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
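A note on the last two lines: the `cni.go` warning and the `NetworkPluginNotReady` error are expected until a CNI add-on (Weave, in this playbook) has been applied, because kubeadm brings up the control plane before any network plugin exists. A quick check, as a sketch, for whether a CNI config has landed yet (`/etc/cni/net.d` is the standard CNI config directory):

```shell
# /etc/cni/net.d stays empty (or absent) until a network add-on like Weave
# writes its config file there.
if [ -n "$(ls -A /etc/cni/net.d 2>/dev/null)" ]; then
  status="CNI config present"
else
  status="CNI config not written yet"
fi
echo "$status"
```

If this still reports no config long after the Weave manifest was applied, the Weave pods themselves are likely crashing.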
We should search https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aissue+ for related issues.
to be continued...
Did you reinstall Docker 18.05? I had to explicitly install 18.04. I can post the modification I made if you want; it is similar to how you pinned the Kubernetes version.
Thanks for your feedback, jmreicha!
The following Docker versions are available:
$ sudo apt-cache policy docker-ce
docker-ce:
Installed: (none)
Candidate: 18.05.0~ce~3-0~raspbian
Version table:
18.05.0~ce~3-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.04.0~ce~3-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.03.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.03.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.02.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
18.01.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.12.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.12.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.11.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.10.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.09.1~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
17.09.0~ce-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
To pin the Docker version to 18.04, I edited the file roles/kubeadm/files/get-docker.sh and appended =18.04.0~ce~3-0~raspbian to the install line, so it reads:
$sh_c 'apt-get install -y -qq --no-install-recommends docker-ce=18.04.0~ce~3-0~raspbian >/dev/null'
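For reference, the same pin can be applied with sed instead of hand-editing. The sketch below works on a local copy whose contents are assumed to match the install line quoted above, so it is safe to dry-run:

```shell
# Local copy of the install line assumed from rak8s' get-docker.sh.
cat > get-docker.sh <<'EOF'
$sh_c 'apt-get install -y -qq --no-install-recommends docker-ce >/dev/null'
EOF
# Append the exact Raspbian version string to the package name.
sed -i 's/docker-ce /docker-ce=18.04.0~ce~3-0~raspbian /' get-docker.sh
cat get-docker.sh
```

Pinning only at install time still lets a later apt-get upgrade replace the package; running sudo apt-mark hold docker-ce afterwards (a standard apt command) prevents that.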
After running kubeadm reset, removing the docker-ce, kubeadm, kubectl and kubelet packages, and rebooting, I re-ran the cluster.yml playbook:
pi@ansible-node ~/git/rak8s (testing) $ ansible-playbook cluster.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Enabling cgroup options at boot] ********************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [common : apt-get update] *************************************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [common : apt-get upgrade] ************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Reboot] *********************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [common : Wait for Reboot] ************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [kubeadm : Disable Swap] **************************************************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : Determine if docker is installed] ******************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [kubeadm : Run Docker Install Script] *************************************
changed: [master]
changed: [node2]
changed: [node1]
TASK [kubeadm : Pass bridged IPv4 traffic to iptables' chains] *****************
ok: [node2]
ok: [master]
ok: [node1]
TASK [kubeadm : Install apt-transport-https] ***********************************
ok: [master]
ok: [node1]
ok: [node2]
TASK [kubeadm : Add Google Cloud Repo Key] *************************************
changed: [node1]
[WARNING]: Consider using get_url or uri module rather than running curl
changed: [node2]
changed: [master]
TASK [kubeadm : Add Kubernetes to Available apt Sources] ***********************
ok: [master]
ok: [node1]
ok: [node2]
TASK [kubeadm : apt-get update] ************************************************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : Install k8s Y'all] *********************************************
changed: [master] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node1] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node2] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
PLAY [master] ******************************************************************
TASK [master : Reset Kubernetes Master] ****************************************
changed: [master]
TASK [master : Initialize Master] **********************************************
changed: [master]
TASK [master : Create Kubernetes config directory] *****************************
changed: [master]
TASK [master : Copy admin.conf to config directory] ****************************
changed: [master]
TASK [master : Join Kubernetes Cluster] ****************************************
changed: [master]
TASK [master : Install Weave (Networking)] *************************************
changed: [master]
TASK [master : Poke kubelet] ***************************************************
changed: [master]
TASK [dashboard : Install k8s Dashboard] ***************************************
changed: [master]
TASK [dashboard : Configure Dashboard Access] **********************************
changed: [master]
TASK [dashboard : Force Rebuild Dashboard Pods] ********************************
changed: [master]
TASK [dashboard : Fetch kubeconfig file] ***************************************
changed: [master]
PLAY [all:!master] *************************************************************
TASK [workers : Reset Kubernetes] **********************************************
changed: [node2]
changed: [node1]
TASK [workers : Join Kubernetes Cluster] ***************************************
changed: [node1]
changed: [node2]
TASK [workers : Poke kubelet] **************************************************
changed: [node1]
changed: [node2]
PLAY RECAP *********************************************************************
master : ok=24 changed=16 unreachable=0 failed=0
node1 : ok=16 changed=8 unreachable=0 failed=0
node2 : ok=16 changed=8 unreachable=0 failed=0
The playbook run completes successfully, but we are not there yet:
pi@master:~ $ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
pi@master:~ $ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
pi@master:~ $ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0011bdfdef40 k8s.gcr.io/pause-arm:3.1 "/pause" 4 seconds ago Up Less than a second k8s_POD_kubernetes-dashboard-74959b9d6c-l7b9w_kube-system_79ed1871-6359-11e8-9788-b827ebcfd0f3_14
40756e498122 k8s.gcr.io/pause-arm:3.1 "/pause" 4 seconds ago Up 1 second k8s_POD_kube-dns-686d6fb9c-bnfxd_kube-system_665a87f2-6359-11e8-9788-b827ebcfd0f3_13
8d3735fa20f7 10ead2ac9c17 "/home/weave/launch.…" 11 seconds ago Up 9 seconds k8s_weave_weave-net-6nhzg_kube-system_6fe4eab0-6359-11e8-9788-b827ebcfd0f3_2
dc7a2f33d556 e214242c20cf "/usr/bin/weave-npc" About a minute ago Up About a minute k8s_weave-npc_weave-net-6nhzg_kube-system_6fe4eab0-6359-11e8-9788-b827ebcfd0f3_1
37571588241a 3fb95685d2d5 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-z8444_kube-system_667681fe-6359-11e8-9788-b827ebcfd0f3_1
885c58962179 k8s.gcr.io/pause-arm:3.1 "/pause" About a minute ago Up About a minute k8s_POD_weave-net-6nhzg_kube-system_6fe4eab0-6359-11e8-9788-b827ebcfd0f3_1
62ff8bfa8a22 k8s.gcr.io/pause-arm:3.1 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-z8444_kube-system_667681fe-6359-11e8-9788-b827ebcfd0f3_1
e623ccb4ac20 c68f5521f86b "kube-apiserver --se…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-master_kube-system_4a80e6b64908277c12ecbe4702c4b55e_1
9fd73ae9151d 88c32b5960ff "etcd --listen-clien…" 2 minutes ago Up 2 minutes k8s_etcd_etcd-master_kube-system_9936956001867618ccb259bfb63e41e3_1
46eb840670bc 816c40ff51c0 "kube-scheduler --le…" 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-master_kube-system_16d8548b01467686e2efa9273c728a2b_1
a99d457159cf f67c023adb1b "kube-controller-man…" 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-master_kube-system_e04879cbb09b67ba6bd7c93f33a6556b_1
e059dc09f8d4 k8s.gcr.io/pause-arm:3.1 "/pause" 4 minutes ago Up 2 minutes k8s_POD_kube-controller-manager-master_kube-system_e04879cbb09b67ba6bd7c93f33a6556b_1
271b2f272db7 k8s.gcr.io/pause-arm:3.1 "/pause" 4 minutes ago Up 2 minutes k8s_POD_kube-apiserver-master_kube-system_4a80e6b64908277c12ecbe4702c4b55e_1
273a71bfd615 k8s.gcr.io/pause-arm:3.1 "/pause" 4 minutes ago Up 2 minutes k8s_POD_etcd-master_kube-system_9936956001867618ccb259bfb63e41e3_1
37262f350810 k8s.gcr.io/pause-arm:3.1 "/pause" 4 minutes ago Up 2 minutes k8s_POD_kube-scheduler-master_kube-system_16d8548b01467686e2efa9273c728a2b_1
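The localhost:8080 refusals above usually mean kubectl found no kubeconfig (so it falls back to the insecure default port) rather than the API server being down; on a kubeadm master the admin credentials live at /etc/kubernetes/admin.conf, the standard kubeadm path. A minimal sketch of the usual fix:

```shell
# Either copy the admin kubeconfig into place (run on the master):
#   sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
#   sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
# ...or point kubectl at it for the current shell only:
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "kubectl will now read $KUBECONFIG"
```

Note the playbook transcript above includes a "Copy admin.conf to config directory" task, so if kubectl still hits localhost:8080, it is worth checking whether a kubeconfig actually exists for the user running kubectl.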
pi@node1:~ $
Message from syslogd@node1 at May 29 18:03:58 ...
kernel:[ 1152.893132] Internal error: Oops: 80000007 [#1] SMP ARM
Message from syslogd@node1 at May 29 18:03:58 ...
kernel:[ 1152.909990] Process weaver (pid: 4844, stack limit = 0xb9372210)
Message from syslogd@node1 at May 29 18:03:58 ...
kernel:[ 1152.910681] Stack: (0xb93739f0 to 0xb9374000)
Message from syslogd@node1 at May 29 18:03:58 ...
kernel:[ 1152.911371] 39e0: 00000000 00000000 d20ba8c0 b9373a88
Message from syslogd@node1 at May 29 18:03:58 ...
kernel:[ 1152.912693] 3a00: 0000801a 000071b6 bc3eec90 bc3eec58 b9373d2c 7f76ead0 00000001 b9373a5c
packet_write_wait: Connection to 192.168.11.211 port 22: Broken pipe
Now Docker 18.04 is installed:
$ sudo apt-cache policy docker-ce
docker-ce:
Installed: 18.04.0~ce~3-0~raspbian
Candidate: 18.05.0~ce~3-0~raspbian
Version table:
18.05.0~ce~3-0~raspbian 500
500 https://download.docker.com/linux/raspbian stretch/edge armhf Packages
*** 18.04.0~ce~3-0~raspbian 500
Unfortunately the cluster doesn't work. The master and nodes hang from time to time and containers keep restarting. Needs more investigation...
That kernel version looks suspicious. What OS and kernel are you using? I used the Raspbian Stretch lite image with the 4.14 kernel.
I also used Raspbian Stretch Lite (release date 18-04-2018, kernel 4.14).
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo uname -a'
node1 | SUCCESS | rc=0 >>
Linux node1 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
master | SUCCESS | rc=0 >>
Linux master 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
node2 | SUCCESS | rc=0 >>
Linux node1 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
Probably not the kernel then. It looks like Weave is having problems. Are you rebooting the nodes after your kubeadm reset?
Deployment with Kubernetes 1.10.2 and Docker 18.04 succeeds, but after a few minutes the API is no longer reachable.
Note: I tried several combinations of Kubernetes (1.10, 1.10.1, 1.10.2 and 1.10.3) with Docker (18.04, 18.03, 18.02, 18.01, 17.12). None really worked. Kubernetes 1.10.2 with Docker 18.04 looks like the best option, because it is the last known good combination.
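When cycling through combinations like this, it may help to pin and hold the exact package versions so a later apt-get upgrade cannot silently move a node off the pair under test. A sketch (the version strings are the ones used earlier in this thread; apt-mark hold is a standard apt command):

```shell
# Build the pinned package list (the -00 suffix is the Debian package revision).
K8S_VER="1.10.2-00"
PKGS="kubelet=$K8S_VER kubeadm=$K8S_VER kubectl=$K8S_VER"
echo "apt-get install -y $PKGS"
# After installing, freeze the versions so upgrades leave them alone:
#   sudo apt-mark hold kubelet kubeadm kubectl docker-ce
```

This mirrors the pinned install the playbook already does in the "Install k8s Y'all" task, just with an explicit hold afterwards.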
Cleanup (kubeadm reset, then removing the docker-ce, kubeadm, kubelet and kubectl packages) and reboot:
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo kubeadm reset'
node1 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
master | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
node2 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo apt-get purge kubeadm kubectl kubelet docker-ce -y'
master | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 425 MB disk space will be freed.
(Reading database ... 38606 files and directories currently installed.)
Removing docker-ce (18.01.0~ce-0~raspbian) ...
Removing kubeadm (1.10.0-00) ...
Removing kubectl (1.10.0-00) ...
Removing kubelet (1.10.0-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.01.0~ce-0~raspbian) ...
Purging configuration files for kubelet (1.10.0-00) ...
Purging configuration files for kubeadm (1.10.0-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
node2 | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 425 MB disk space will be freed.
(Reading database ... 38606 files and directories currently installed.)
Removing docker-ce (18.01.0~ce-0~raspbian) ...
Removing kubeadm (1.10.0-00) ...
Removing kubectl (1.10.0-00) ...
Removing kubelet (1.10.0-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.01.0~ce-0~raspbian) ...
Purging configuration files for kubelet (1.10.0-00) ...
Purging configuration files for kubeadm (1.10.0-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
node1 | SUCCESS | rc=0 >>
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
ebtables ethtool kubernetes-cni libltdl7 socat
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
docker-ce* kubeadm* kubectl* kubelet*
0 upgraded, 0 newly installed, 4 to remove and 0 not upgraded.
After this operation, 425 MB disk space will be freed.
(Reading database ... 38606 files and directories currently installed.)
Removing docker-ce (18.01.0~ce-0~raspbian) ...
Removing kubeadm (1.10.0-00) ...
Removing kubectl (1.10.0-00) ...
Removing kubelet (1.10.0-00) ...
Processing triggers for man-db (2.7.6.1-2) ...
(Reading database ... 38391 files and directories currently installed.)
Purging configuration files for docker-ce (18.01.0~ce-0~raspbian) ...
Purging configuration files for kubelet (1.10.0-00) ...
Purging configuration files for kubeadm (1.10.0-00) ...
Processing triggers for systemd (232-25+deb9u2) ...
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo ls -l /var/lib/kubelet'
master | SUCCESS | rc=0 >>
total 0
node2 | SUCCESS | rc=0 >>
total 0
node1 | SUCCESS | rc=0 >>
total 0
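The manual cleanup above (purge the packages on every host, then confirm `/var/lib/kubelet` is empty) can be sketched as one short script. This is only a sketch: it assumes the same `all` inventory group used in this thread and Debian/Raspbian hosts with apt; `DRY_RUN=1` (the default) prints the commands instead of running them.

```shell
#!/bin/sh
# Hedged sketch of the cleanup shown above. Assumptions: inventory group
# "all", apt-based hosts, passwordless become. DRY_RUN=1 only prints.
set -eu
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"          # show what would be executed
  else
    "$@"
  fi
}

# Purge the Kubernetes and Docker packages, config files included,
# and let apt autoremove the now-unneeded dependencies.
run ansible all -b -m apt -a 'name=kubeadm,kubectl,kubelet,docker-ce state=absent purge=yes autoremove=yes'

# Verify the kubelet state directory is empty on every host (expect 0).
run ansible all -b -m shell -a 'ls -A /var/lib/kubelet | wc -l'
```

Using the apt module instead of `shell: apt-get ...` also silences the warning Ansible prints about shelling out to package managers.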
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo reboot'
master | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
node1 | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
node2 | UNREACHABLE! => {
"changed": false,
"msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh",
"unreachable": true
}
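The UNREACHABLE results here are expected rather than a real failure: `reboot` kills sshd while Ansible is still connected, so the module never gets to report back. One common workaround (a sketch only, assuming `nc` is available on the control host and the hostnames from this thread resolve) is to detach the reboot behind a short sleep, then poll until ssh comes back:

```shell
#!/bin/sh
# Sketch only: DRY_RUN=1 (the default) prints the commands instead of
# running them. Hostnames are the ones used in this thread.
DRY_RUN="${DRY_RUN:-1}"
HOSTS="master node1 node2"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Background the reboot behind a sleep so the shell module returns
# before sshd goes away, avoiding the UNREACHABLE error.
run ansible all -b -m shell -a 'sleep 2 && reboot &'

# Poll each host until ssh (port 22) accepts connections again.
for h in $HOSTS; do
  run sh -c "until nc -z $h 22; do sleep 5; done"
done
```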
Then re-run the playbook:
pi@ansible-node ~/git/rak8s (testing) $ ansible-playbook cluster.yml
PLAY [all] *********************************************************************
TASK [setup] *******************************************************************
ok: [node2]
ok: [node1]
ok: [master]
TASK [common : Enabling cgroup options at boot] ********************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [common : apt-get update] *************************************************
ok: [node2]
ok: [master]
ok: [node1]
TASK [common : apt-get upgrade] ************************************************
ok: [master]
ok: [node2]
ok: [node1]
TASK [common : Reboot] *********************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [common : Wait for Reboot] ************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]
TASK [kubeadm : Disable Swap] **************************************************
changed: [master]
changed: [node1]
changed: [node2]
TASK [kubeadm : Determine if docker is installed] ******************************
ok: [node1]
ok: [master]
ok: [node2]
TASK [kubeadm : Run Docker Install Script] *************************************
changed: [master]
changed: [node2]
changed: [node1]
TASK [kubeadm : Pass bridged IPv4 traffic to iptables' chains] *****************
ok: [master]
ok: [node1]
ok: [node2]
TASK [kubeadm : Install apt-transport-https] ***********************************
ok: [master]
ok: [node1]
ok: [node2]
TASK [kubeadm : Add Google Cloud Repo Key] *************************************
changed: [node2]
[WARNING]: Consider using get_url or uri module rather than running curl
changed: [master]
changed: [node1]
TASK [kubeadm : Add Kubernetes to Available apt Sources] ***********************
ok: [node1]
ok: [master]
ok: [node2]
TASK [kubeadm : apt-get update] ************************************************
changed: [node2]
changed: [master]
changed: [node1]
TASK [kubeadm : Install k8s Y'all] *********************************************
changed: [master] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node1] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
changed: [node2] => (item=[u'kubelet=1.10.2-00', u'kubeadm=1.10.2-00', u'kubectl=1.10.2-00'])
PLAY [master] ******************************************************************
TASK [master : Reset Kubernetes Master] ****************************************
changed: [master]
TASK [master : Initialize Master] **********************************************
changed: [master]
TASK [master : Create Kubernetes config directory] *****************************
ok: [master]
TASK [master : Copy admin.conf to config directory] ****************************
changed: [master]
TASK [master : Join Kubernetes Cluster] ****************************************
changed: [master]
TASK [master : Install Weave (Networking)] *************************************
changed: [master]
TASK [master : Poke kubelet] ***************************************************
changed: [master]
TASK [dashboard : Install k8s Dashboard] ***************************************
changed: [master]
TASK [dashboard : Configure Dashboard Access] **********************************
changed: [master]
TASK [dashboard : Force Rebuild Dashboard Pods] ********************************
changed: [master]
TASK [dashboard : Fetch kubeconfig file] ***************************************
changed: [master]
PLAY [all:!master] *************************************************************
TASK [workers : Reset Kubernetes] **********************************************
changed: [node1]
changed: [node2]
TASK [workers : Join Kubernetes Cluster] ***************************************
changed: [node1]
changed: [node2]
TASK [workers : Poke kubelet] **************************************************
changed: [node1]
changed: [node2]
PLAY RECAP *********************************************************************
master : ok=24 changed=15 unreachable=0 failed=0
node1 : ok=16 changed=8 unreachable=0 failed=0
node2 : ok=16 changed=8 unreachable=0 failed=0
pi@ansible-node ~/git/rak8s (testing) $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 2m v1.10.2
node1 NotReady <none> 52s v1.10.2
node2 NotReady <none> 28s v1.10.2
The playbook run was successful, but a few minutes later the API server becomes unreachable:
pi@master:~ $ uptime
22:12:32 up 13 min, 1 user, load average: 1.20, 1.27, 0.87
pi@master:~ $ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
41c26f491dee 3fb95685d2d5 "/usr/local/bin/kube…" 10 minutes ago Up 10 minutes k8s_kube-proxy_kube-proxy-w9c9j_kube-system_80327b19-637a-11e8-be1d-b827ebcfd0f3_1
b5209ba66ccc k8s.gcr.io/pause-arm:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_weave-net-9x7x8_kube-system_808bf2e8-637a-11e8-be1d-b827ebcfd0f3_0
73734d19e288 k8s.gcr.io/pause-arm:3.1 "/pause" 10 minutes ago Up 10 minutes k8s_POD_kube-proxy-w9c9j_kube-system_80327b19-637a-11e8-be1d-b827ebcfd0f3_1
988d96841800 f67c023adb1b "kube-controller-man…" 11 minutes ago Up 11 minutes k8s_kube-controller-manager_kube-controller-manager-master_kube-system_deff6225ab733c0edb8f1f173d199971_1
15fa42cc8a26 88c32b5960ff "etcd --key-file=/et…" 11 minutes ago Up 11 minutes k8s_etcd_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1
8f41bc539bd6 c68f5521f86b "kube-apiserver --ad…" 11 minutes ago Up 11 minutes k8s_kube-apiserver_kube-apiserver-master_kube-system_e8bbd63fe665e75bedba1f9bfcf885b6_1
79c8d70cbbde k8s.gcr.io/pause-arm:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-controller-manager-master_kube-system_deff6225ab733c0edb8f1f173d199971_1
a40e07b5bed6 k8s.gcr.io/pause-arm:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1
ad51a5e12a67 k8s.gcr.io/pause-arm:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-apiserver-master_kube-system_e8bbd63fe665e75bedba1f9bfcf885b6_1
f1cb2513e684 816c40ff51c0 "kube-scheduler --ku…" 11 minutes ago Up 11 minutes k8s_kube-scheduler_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1
9562da9e7b28 k8s.gcr.io/pause-arm:3.1 "/pause" 11 minutes ago Up 11 minutes k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1
pi@master:~ $ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
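The `localhost:8080` refusal usually just means kubectl has no kubeconfig for this user, so it falls back to the insecure default port. Pointing it at `/etc/kubernetes/admin.conf` (kubeadm's standard output path) separates "missing config" from "API server actually down". A minimal sketch, assuming you run it as the `pi` user on the master:

```shell
#!/bin/sh
# Sketch: give the current user a kubeconfig. /etc/kubernetes/admin.conf
# is kubeadm's default path; skip the copy if it is absent (e.g. when
# run somewhere other than the master).
if [ -f /etc/kubernetes/admin.conf ]; then
  mkdir -p "$HOME/.kube"
  sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
export KUBECONFIG="$HOME/.kube/config"
echo "kubectl will use: $KUBECONFIG"
```

With the kubeconfig in place, a still-broken cluster shows the real symptom instead: a connection refused on `https://<master>:6443`, matching the kubelet log below.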
Kubelet logs on the master:
-- Logs begin at Tue 2018-05-29 21:43:45 CEST, end at Tue 2018-05-29 22:16:35 CEST. --
May 29 21:52:25 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
May 29 21:52:27 master kubelet[314]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 29 21:52:27 master kubelet[314]: Flag --allow-privileged has been deprecated, will be removed in a future version
May 29 21:52:27 master kubelet[314]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 29 21:52:27 master kubelet[314]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 29 21:52:27 master kubelet[314]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 29 21:52:27 master kubelet[314]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 29 21:52:27 master kubelet[314]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
May 29 21:52:27 master kubelet[314]: I0529 21:52:27.842311 314 feature_gate.go:226] feature gates: &{{} map[]}
May 29 21:52:27 master kubelet[314]: W0529 21:52:27.913537 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:27 master kubelet[314]: W0529 21:52:27.962877 314 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 29 21:52:27 master kubelet[314]: I0529 21:52:27.963109 314 server.go:376] Version: v1.10.2
May 29 21:52:27 master kubelet[314]: I0529 21:52:27.963367 314 feature_gate.go:226] feature gates: &{{} map[]}
May 29 21:52:27 master kubelet[314]: I0529 21:52:27.963752 314 plugins.go:89] No cloud provider specified.
May 29 21:52:28 master kubelet[314]: I0529 21:52:28.006724 314 certificate_store.go:117] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 29 21:52:48 master kubelet[314]: E0529 21:52:48.834657 314 machine.go:194] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no such file or directory
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.866229 314 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.869727 314 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.869890 314 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.870723 314 container_manager_linux.go:266] Creating device plugin manager: true
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.870987 314 state_mem.go:36] [cpumanager] initializing new in-memory state store
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.873454 314 state_mem.go:84] [cpumanager] updated default cpuset: ""
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.873598 314 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.878562 314 kubelet.go:272] Adding pod path: /etc/kubernetes/manifests
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.878727 314 kubelet.go:297] Watching apiserver
May 29 21:52:49 master kubelet[314]: E0529 21:52:49.906340 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:49 master kubelet[314]: E0529 21:52:49.907466 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:49 master kubelet[314]: E0529 21:52:49.909326 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:49 master kubelet[314]: W0529 21:52:49.930988 314 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.947522 314 kubelet.go:556] Hairpin mode set to "hairpin-veth"
May 29 21:52:49 master kubelet[314]: W0529 21:52:49.947975 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.948129 314 client.go:75] Connecting to docker on unix:///var/run/docker.sock
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.953336 314 client.go:104] Start docker client with request timeout=2m0s
May 29 21:52:49 master kubelet[314]: W0529 21:52:49.964856 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:49 master kubelet[314]: W0529 21:52:49.979017 314 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
May 29 21:52:49 master kubelet[314]: W0529 21:52:49.979559 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:49 master kubelet[314]: I0529 21:52:49.979728 314 docker_service.go:244] Docker cri networking managed by cni
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.014569 314 docker_service.go:249] Docker Info: &{ID:WVPW:JQ7M:GV7D:RDUA:6KKD:F2RR:34FW:J2AM:MB2A:DNKA:MVVT:K2EE Containers:11 ContainersRunning:11 ContainersPaused:0 ContainersStopped:0 Images:9 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:false CPUCfsQuota:false CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:20 OomKillDisable:true NGoroutines:34 SystemTime:2018-05-29T21:52:49.985068344+02:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.14.34-v7+ OperatingSystem:Raspbian GNU/Linux 9 (stretch) OSType:linux Architecture:armv7l IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0x155d2540 NCPU:4 MemTotal:1024188416 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:master Labels:[] ExperimentalBuild:false ServerVersion:18.04.0-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:773c489c9c1b21a6d78b5c538cd395416ec50f88 Expected:773c489c9c1b21a6d78b5c538cd395416ec50f88} RuncCommit:{ID:4fc53a81fb7c994640722ac585fa9ca548971871 Expected:4fc53a81fb7c994640722ac585fa9ca548971871} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[name=seccomp,profile=default]}
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.014984 314 docker_service.go:262] Setting cgroupDriver to cgroupfs
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.070219 314 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.100022 314 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.04.0-ce, apiVersion: 1.37.0
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.111212 314 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.134108 314 server.go:129] Starting to listen on 0.0.0.0:10250
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.147210 314 server.go:299] Adding debug handlers to kubelet server.
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.150317 314 event.go:209] Unable to write event: 'Post https://192.168.11.210:6443/api/v1/namespaces/default/events: dial tcp 192.168.11.210:6443: getsockopt: connection refused' (may retry after sleeping)
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.134978 314 server.go:944] Started kubelet
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.141080 314 kubelet.go:1277] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.141398 314 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.154039 314 status_manager.go:140] Starting to sync pod status with apiserver
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.154198 314 kubelet.go:1777] Starting kubelet main sync loop.
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.154361 314 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.155411 314 volume_manager.go:247] Starting Kubelet Volume Manager
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.155537 314 desired_state_of_world_populator.go:129] Desired state populator starts to run
May 29 21:52:50 master kubelet[314]: W0529 21:52:50.179457 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.181321 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.257988 314 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.258386 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.270711 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.273505 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.458316 314 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.473807 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.480668 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.482061 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.858529 314 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.882311 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:50 master kubelet[314]: I0529 21:52:50.889667 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.891032 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.907799 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.908828 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:50 master kubelet[314]: E0529 21:52:50.910588 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:51 master kubelet[314]: I0529 21:52:51.659846 314 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
May 29 21:52:51 master kubelet[314]: I0529 21:52:51.691430 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:51 master kubelet[314]: I0529 21:52:51.703329 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:51 master kubelet[314]: E0529 21:52:51.705103 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:51 master kubelet[314]: E0529 21:52:51.910316 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:51 master kubelet[314]: E0529 21:52:51.910427 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:51 master kubelet[314]: E0529 21:52:51.913664 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:52 master kubelet[314]: W0529 21:52:52.269668 314 nvidia.go:74] Error reading "/sys/bus/pci/devices/": open /sys/bus/pci/devices/: no such file or directory
May 29 21:52:52 master kubelet[314]: I0529 21:52:52.657664 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:52 master kubelet[314]: I0529 21:52:52.664150 314 cpu_manager.go:155] [cpumanager] starting with none policy
May 29 21:52:52 master kubelet[314]: I0529 21:52:52.664199 314 cpu_manager.go:156] [cpumanager] reconciling every 10s
May 29 21:52:52 master kubelet[314]: I0529 21:52:52.664237 314 policy_none.go:42] [cpumanager] none policy: Start
May 29 21:52:52 master kubelet[314]: Starting Device Plugin manager
May 29 21:52:52 master kubelet[314]: E0529 21:52:52.701815 314 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "master" not found
May 29 21:52:52 master kubelet[314]: E0529 21:52:52.911762 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:52 master kubelet[314]: E0529 21:52:52.913598 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:52 master kubelet[314]: E0529 21:52:52.915911 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.261381 314 pod_container_deletor.go:77] Container "f7157981e06580c7f452ad1b194c8bcc17798f47030b16bfab506db99cf8a24c" not found in pod's containers
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.262999 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.300685 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.301360 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.305758 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.323152 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:53 master kubelet[314]: E0529 21:52:53.325344 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.327779 314 status_manager.go:461] Failed to get status for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/etcd-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.333783 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.334116 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.354378 314 status_manager.go:461] Failed to get status for pod "kube-apiserver-master_kube-system(e8bbd63fe665e75bedba1f9bfcf885b6)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: E0529 21:52:53.356423 314 event.go:209] Unable to write event: 'Post https://192.168.11.210:6443/api/v1/namespaces/default/events: dial tcp 192.168.11.210:6443: getsockopt: connection refused' (may retry after sleeping)
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.367966 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.370398 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.377329 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/deff6225ab733c0edb8f1f173d199971-kubeconfig") pod "kube-controller-manager-master" (UID: "deff6225ab733c0edb8f1f173d199971")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.377670 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/deff6225ab733c0edb8f1f173d199971-flexvolume-dir") pod "kube-controller-manager-master" (UID: "deff6225ab733c0edb8f1f173d199971")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.378144 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d8b5a9db912cd817b3117cd5313ef6d6-etcd-data") pod "etcd-master" (UID: "d8b5a9db912cd817b3117cd5313ef6d6")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.378422 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d8b5a9db912cd817b3117cd5313ef6d6-etcd-certs") pod "etcd-master" (UID: "d8b5a9db912cd817b3117cd5313ef6d6")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.378906 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e8bbd63fe665e75bedba1f9bfcf885b6-ca-certs") pod "kube-apiserver-master" (UID: "e8bbd63fe665e75bedba1f9bfcf885b6")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.379189 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e8bbd63fe665e75bedba1f9bfcf885b6-k8s-certs") pod "kube-apiserver-master" (UID: "e8bbd63fe665e75bedba1f9bfcf885b6")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.379435 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/deff6225ab733c0edb8f1f173d199971-k8s-certs") pod "kube-controller-manager-master" (UID: "deff6225ab733c0edb8f1f173d199971")
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.379676 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/deff6225ab733c0edb8f1f173d199971-ca-certs") pod "kube-controller-manager-master" (UID: "deff6225ab733c0edb8f1f173d199971")
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.404030 314 status_manager.go:461] Failed to get status for pod "kube-controller-manager-master_kube-system(deff6225ab733c0edb8f1f173d199971)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.408983 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.410429 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.419203 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.422563 314 status_manager.go:461] Failed to get status for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.428211 314 pod_container_deletor.go:77] Container "4be8beb9cbaa481bb5a5ec7b07ca989a6d8872313af164289634fac1f010e176" not found in pod's containers
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.428496 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.437202 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.443849 314 pod_container_deletor.go:77] Container "a686579b4bed1a33cfeb0ed1054ad327417a83c832631c09717b75def4ab182c" not found in pod's containers
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.444246 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.450991 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.458119 314 pod_container_deletor.go:77] Container "54101bd0cf6766dc89ceca6b4607330b9b50b21076faaea0ccb4201c6a36ea7f" not found in pod's containers
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.458220 314 pod_container_deletor.go:77] Container "353dda02712e1a6273d9e46c7645a9f06733119511f6e80bb3e3d87e99fca4b4" not found in pod's containers
May 29 21:52:53 master kubelet[314]: W0529 21:52:53.458275 314 pod_container_deletor.go:77] Container "5b9f09df69dcfe0989f910016fcf87965a805685c903b16444e0d9f6d7a17d7c" not found in pod's containers
May 29 21:52:53 master kubelet[314]: I0529 21:52:53.480899 314 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/75b1b9ab0f3a37601ee6e0a6c14cc1a7-kubeconfig") pod "kube-scheduler-master" (UID: "75b1b9ab0f3a37601ee6e0a6c14cc1a7")
May 29 21:52:53 master kubelet[314]: E0529 21:52:53.914201 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: E0529 21:52:53.915949 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:53 master kubelet[314]: E0529 21:52:53.918168 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:54 master kubelet[314]: E0529 21:52:54.915534 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:54 master kubelet[314]: E0529 21:52:54.917501 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:54 master kubelet[314]: E0529 21:52:54.919558 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:55 master kubelet[314]: I0529 21:52:55.796155 314 kuberuntime_manager.go:757] checking backoff for container "kube-scheduler" in pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)"
May 29 21:52:55 master kubelet[314]: E0529 21:52:55.918997 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:55 master kubelet[314]: E0529 21:52:55.919159 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:55 master kubelet[314]: E0529 21:52:55.921600 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.019484 314 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)"
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.047266 314 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-master_kube-system(e8bbd63fe665e75bedba1f9bfcf885b6)"
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.050502 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:56 master kubelet[314]: W0529 21:52:56.071232 314 pod_container_deletor.go:77] Container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a" not found in pod's containers
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.071295 314 kuberuntime_manager.go:757] checking backoff for container "kube-controller-manager" in pod "kube-controller-manager-master_kube-system(deff6225ab733c0edb8f1f173d199971)"
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.525740 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:56 master kubelet[314]: I0529 21:52:56.534191 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:52:56 master kubelet[314]: E0529 21:52:56.535225 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:56 master kubelet[314]: E0529 21:52:56.922475 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:56 master kubelet[314]: E0529 21:52:56.922510 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:56 master kubelet[314]: E0529 21:52:56.923906 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:57 master kubelet[314]: W0529 21:52:57.704548 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:52:57 master kubelet[314]: E0529 21:52:57.706195 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 21:52:57 master kubelet[314]: E0529 21:52:57.924584 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:57 master kubelet[314]: E0529 21:52:57.926621 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:57 master kubelet[314]: E0529 21:52:57.928056 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:58 master kubelet[314]: I0529 21:52:58.744493 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:58 master kubelet[314]: W0529 21:52:58.757315 314 pod_container_deletor.go:77] Container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7" not found in pod's containers
May 29 21:52:58 master kubelet[314]: E0529 21:52:58.926128 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:58 master kubelet[314]: E0529 21:52:58.928240 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:58 master kubelet[314]: E0529 21:52:58.929843 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:59 master kubelet[314]: I0529 21:52:59.378002 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:59 master kubelet[314]: W0529 21:52:59.402320 314 pod_container_deletor.go:77] Container "a6ea1fcb22a2e4720d2f5f2874e00d2befc02468ff79f241ead87ab020cfda03" not found in pod's containers
May 29 21:52:59 master kubelet[314]: E0529 21:52:59.927311 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:59 master kubelet[314]: E0529 21:52:59.930459 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:59 master kubelet[314]: E0529 21:52:59.931764 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:52:59 master kubelet[314]: I0529 21:52:59.950760 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:52:59 master kubelet[314]: W0529 21:52:59.962868 314 pod_container_deletor.go:77] Container "e7905cc7dc8e56d8a5ec1aeab9643bb67fdaf5ad73d9d63f533b91104206783e" not found in pod's containers
May 29 21:53:00 master kubelet[314]: E0529 21:53:00.930961 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:00 master kubelet[314]: E0529 21:53:00.932606 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:00 master kubelet[314]: E0529 21:53:00.934833 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: I0529 21:53:01.074561 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:01 master kubelet[314]: W0529 21:53:01.088232 314 status_manager.go:461] Failed to get status for pod "kube-controller-manager-master_kube-system(deff6225ab733c0edb8f1f173d199971)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: I0529 21:53:01.121173 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:01 master kubelet[314]: W0529 21:53:01.131446 314 status_manager.go:461] Failed to get status for pod "kube-apiserver-master_kube-system(e8bbd63fe665e75bedba1f9bfcf885b6)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: I0529 21:53:01.182305 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:01 master kubelet[314]: W0529 21:53:01.196185 314 status_manager.go:461] Failed to get status for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/etcd-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: I0529 21:53:01.228036 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:01 master kubelet[314]: W0529 21:53:01.237762 314 status_manager.go:461] Failed to get status for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: E0529 21:53:01.932370 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: E0529 21:53:01.934367 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:01 master kubelet[314]: E0529 21:53:01.941138 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:02 master kubelet[314]: I0529 21:53:02.293649 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:02 master kubelet[314]: I0529 21:53:02.363821 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.702356 314 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "master" not found
May 29 21:53:02 master kubelet[314]: W0529 21:53:02.710964 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.711864 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.934416 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:02 master kubelet[314]: I0529 21:53:02.935619 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.936531 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.943961 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:02 master kubelet[314]: I0529 21:53:02.949862 314 kubelet_node_status.go:82] Attempting to register node master
May 29 21:53:02 master kubelet[314]: E0529 21:53:02.951803 314 kubelet_node_status.go:106] Unable to register node "master" with API server: Post https://192.168.11.210:6443/api/v1/nodes: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:03 master kubelet[314]: E0529 21:53:03.359250 314 event.go:209] Unable to write event: 'Post https://192.168.11.210:6443/api/v1/namespaces/default/events: dial tcp 192.168.11.210:6443: getsockopt: connection refused' (may retry after sleeping)
May 29 21:53:03 master kubelet[314]: E0529 21:53:03.936454 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:03 master kubelet[314]: E0529 21:53:03.938626 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:03 master kubelet[314]: E0529 21:53:03.945997 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:04 master kubelet[314]: E0529 21:53:04.938562 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:04 master kubelet[314]: E0529 21:53:04.940582 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:04 master kubelet[314]: E0529 21:53:04.947975 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 21:53:05 master kubelet[314]: E0529 21:53:05.239902 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 21:53:05 master kubelet[314]: E0529 21:53:05.240442 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 21:53:05 master kubelet[314]: E0529 21:53:05.240575 314 kuberuntime_manager.go:646] createPodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 21:53:05 master kubelet[314]: E0529 21:53:05.241812 314 pod_workers.go:186] Error syncing pod d8b5a9db912cd817b3117cd5313ef6d6 ("etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)"), skipping: failed to "CreatePodSandbox" for "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1\" is already in use by container \"bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.269969 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.272148 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.273541 314 kuberuntime_manager.go:646] createPodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.275118 314 pod_workers.go:186] Error syncing pod 75b1b9ab0f3a37601ee6e0a6c14cc1a7 ("kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1\" is already in use by container \"cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.283942 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.288435 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:20 master kubelet[314]: E0529 22:00:20.292945 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.288706 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.11.210:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.306623 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.11.210:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.306623 314 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.11.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&limit=500&resourceVersion=0: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: I0529 22:00:21.375664 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 22:00:21 master kubelet[314]: W0529 22:00:21.388665 314 status_manager.go:461] Failed to get status for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/etcd-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: W0529 22:00:21.388726 314 pod_container_deletor.go:77] Container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a" not found in pod's containers
May 29 22:00:21 master kubelet[314]: I0529 22:00:21.409319 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 22:00:21 master kubelet[314]: W0529 22:00:21.416578 314 pod_container_deletor.go:77] Container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7" not found in pod's containers
May 29 22:00:21 master kubelet[314]: W0529 22:00:21.421159 314 status_manager.go:461] Failed to get status for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)": Get https://192.168.11.210:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-master: dial tcp 192.168.11.210:6443: getsockopt: connection refused
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.715159 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.715286 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.715341 314 kuberuntime_manager.go:646] createPodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.715563 314 pod_workers.go:186] Error syncing pod d8b5a9db912cd817b3117cd5313ef6d6 ("etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)"), skipping: failed to "CreatePodSandbox" for "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1\" is already in use by container \"bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.747421 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.747543 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.747596 314 kuberuntime_manager.go:646] createPodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:21 master kubelet[314]: E0529 22:00:21.747834 314 pod_workers.go:186] Error syncing pod 75b1b9ab0f3a37601ee6e0a6c14cc1a7 ("kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1\" is already in use by container \"cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:22 master kubelet[314]: W0529 22:00:22.060399 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.063912 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 22:00:22 master kubelet[314]: I0529 22:00:22.436455 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 22:00:22 master kubelet[314]: I0529 22:00:22.437595 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.779136 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.779345 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.779403 314 kuberuntime_manager.go:646] createPodSandbox for pod "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-scheduler-master": Error response from daemon: Conflict. The container name "/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1" is already in use by container "cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.779629 314 pod_workers.go:186] Error syncing pod 75b1b9ab0f3a37601ee6e0a6c14cc1a7 ("kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-scheduler-master_kube-system(75b1b9ab0f3a37601ee6e0a6c14cc1a7)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-scheduler-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_kube-scheduler-master_kube-system_75b1b9ab0f3a37601ee6e0a6c14cc1a7_1\" is already in use by container \"cfef92f8e951c1edfbba7f20ad73f66bbfc4655e7005bcc6e070c5e290b26ec7\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.791784 314 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.791903 314 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.791973 314 kuberuntime_manager.go:646] createPodSandbox for pod "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "etcd-master": Error response from daemon: Conflict. The container name "/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1" is already in use by container "bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a". You have to remove (or rename) that container to be able to reuse that name.
May 29 22:00:22 master kubelet[314]: E0529 22:00:22.792215 314 pod_workers.go:186] Error syncing pod d8b5a9db912cd817b3117cd5313ef6d6 ("etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)"), skipping: failed to "CreatePodSandbox" for "etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"etcd-master_kube-system(d8b5a9db912cd817b3117cd5313ef6d6)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"etcd-master\": Error response from daemon: Conflict. The container name \"/k8s_POD_etcd-master_kube-system_d8b5a9db912cd817b3117cd5313ef6d6_1\" is already in use by container \"bb2771c2486816f96585166e2732c2f9f2f28102c8967a02f814f41fb22da35a\". You have to remove (or rename) that container to be able to reuse that name."
May 29 22:00:24 master kubelet[314]: I0529 22:00:24.295379 314 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
May 29 22:00:24 master kubelet[314]: I0529 22:00:24.303127 314 kubelet_node_status.go:82] Attempting to register node master
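The repeated sandbox name conflicts above can be triaged by pulling the stale container IDs out of a saved copy of the kubelet journal. A minimal sketch, assuming the journal was captured to `kubelet.log` (the filename is an assumption, e.g. `journalctl -u kubelet > kubelet.log`):

```shell
# Extract the IDs of the containers named in the "already in use"
# Conflict errors, one per line, de-duplicated.
grep -o 'already in use by container "[a-f0-9]*"' kubelet.log 2>/dev/null \
  | grep -o '[a-f0-9]\{64\}' \
  | sort -u
```

Each ID can then be passed to `docker rm -f <id>` so the kubelet is able to recreate the pod sandbox under the same name.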
After some time:
May 29 22:14:10 master kubelet[314]: E0529 22:14:10.806165 314 remote_image.go:83] ImageStatus "weaveworks/weave-npc:2.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
May 29 22:14:10 master kubelet[314]: E0529 22:14:10.806601 314 kuberuntime_image.go:87] ImageStatus for image {"weaveworks/weave-npc:2.3.0"} failed: rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
May 29 22:14:10 master kubelet[314]: E0529 22:14:10.807760 314 kuberuntime_manager.go:733] container start failed: ImageInspectError: Failed to inspect image "weaveworks/weave-npc:2.3.0": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
May 29 22:14:10 master kubelet[314]: E0529 22:14:10.808048 314 pod_workers.go:186] Error syncing pod 808bf2e8-637a-11e8-be1d-b827ebcfd0f3 ("weave-net-9x7x8_kube-system(808bf2e8-637a-11e8-be1d-b827ebcfd0f3)"), skipping: [failed to "StartContainer" for "weave" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=weave pod=weave-net-9x7x8_kube-system(808bf2e8-637a-11e8-be1d-b827ebcfd0f3)"
May 29 22:14:10 master kubelet[314]: , failed to "StartContainer" for "weave-npc" with ImageInspectError: "Failed to inspect image \"weaveworks/weave-npc:2.3.0\": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument"
May 29 22:14:10 master kubelet[314]: ]
May 29 22:14:13 master kubelet[314]: W0529 22:14:13.119014 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 22:14:13 master kubelet[314]: E0529 22:14:13.120848 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 22:14:18 master kubelet[314]: W0529 22:14:18.126583 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 22:14:18 master kubelet[314]: E0529 22:14:18.127643 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 22:14:23 master kubelet[314]: W0529 22:14:23.130455 314 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
May 29 22:14:23 master kubelet[314]: E0529 22:14:23.131914 314 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
May 29 22:14:24 master kubelet[314]: I0529 22:14:24.804966 314 kuberuntime_manager.go:513] Container {Name:weave Image:weaveworks/weave-kube:2.3.0 Command:[/home/weave/launch.sh] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:weavedb ReadOnly:false MountPath:/weavedb SubPath: MountPropagation:<nil>} {Name:cni-bin ReadOnly:false MountPath:/host/opt SubPath: MountPropagation:<nil>} {Name:cni-bin2 ReadOnly:false MountPath:/host/home SubPath: MountPropagation:<nil>} {Name:cni-conf ReadOnly:false MountPath:/host/etc SubPath: MountPropagation:<nil>} {Name:dbus ReadOnly:false MountPath:/host/var/lib/dbus SubPath: MountPropagation:<nil>} {Name:lib-modules ReadOnly:false MountPath:/lib/modules SubPath: MountPropagation:<nil>} {Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:<nil>} {Name:weave-net-token-ld64k ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/status,Port:6784,Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 22:14:24 master kubelet[314]: I0529 22:14:24.805424 314 kuberuntime_manager.go:513] Container {Name:weave-npc Image:weaveworks/weave-npc:2.3.0 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[cpu:{i:{value:10 scale:-3} d:{Dec:<nil>} s:10m Format:DecimalSI}]} VolumeMounts:[{Name:xtables-lock ReadOnly:false MountPath:/run/xtables.lock SubPath: MountPropagation:<nil>} {Name:weave-net-token-ld64k ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 29 22:14:24 master kubelet[314]: I0529 22:14:24.806271 314 kuberuntime_manager.go:757] checking backoff for container "weave" in pod "weave-net-9x7x8_kube-system(808bf2e8-637a-11e8-be1d-b827ebcfd0f3)"
May 29 22:14:24 master kubelet[314]: I0529 22:14:24.807823 314 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=weave pod=weave-net-9x7x8_kube-system(808bf2e8-637a-11e8-be1d-b827ebcfd0f3)
May 29 22:14:24 master kubelet[314]: E0529 22:14:24.819189 314 remote_image.go:83] ImageStatus "weaveworks/weave-npc:2.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
May 29 22:14:24 master kubelet[314]: E0529 22:14:24.819486 314 kuberuntime_image.go:87] ImageStatus for image {"weaveworks/weave-npc:2.3.0"} failed: rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
May 29 22:14:24 master kubelet[314]: E0529 22:14:24.820136 314 kuberuntime_manager.go:733] container start failed: ImageInspectError: Failed to inspect image "weaveworks/weave-npc:2.3.0": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
I hope someone can find the cause in these logs.
I rebooted the nodes after "kubeadm reset" and after I removed the Docker and Kubernetes packages.
On Tue, May 29, 2018, 22:40, Josh Reichardt notifications@github.com wrote:
Probably not the kernel then. It looks like Weave is having problems? Are you rebooting the nodes after your kubeadm reset?
That's a bummer. I haven't run into that error yet, and I don't really have any other ideas off the top of my head, unless maybe Weave is the problem? You might try swapping it for Flannel.
There might be some good tidbits in this issue https://github.com/geerlingguy/raspberry-pi-dramble/issues/100#issuecomment-391393819
I like to test with a combination of versions that is known to be running successfully. So if anyone has a rak8s cluster that is still running fine, I would like to know the versions of:
Please run the commands below on your Ansible host and post the output in this issue.
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'sudo docker images'
node2 | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-arm v1.10.2 3fb95685d2d5 4 weeks ago 87.3MB
k8s.gcr.io/kube-proxy-arm v1.10.1 14751a827e8d 7 weeks ago 87.3MB
weaveworks/weave-npc 2.3.0 e214242c20cf 7 weeks ago 44.5MB
weaveworks/weave-kube 2.3.0 10ead2ac9c17 7 weeks ago 88.8MB
k8s.gcr.io/kube-proxy-arm v1.10.0 59acee744cc6 2 months ago 87.3MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
node1 | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-arm v1.10.2 3fb95685d2d5 4 weeks ago 87.3MB
k8s.gcr.io/kube-proxy-arm v1.10.1 14751a827e8d 7 weeks ago 87.3MB
weaveworks/weave-npc 2.3.0 e214242c20cf 7 weeks ago 44.5MB
weaveworks/weave-kube 2.3.0 10ead2ac9c17 7 weeks ago 88.8MB
k8s.gcr.io/kube-proxy-arm v1.10.0 59acee744cc6 2 months ago 87.3MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
master | SUCCESS | rc=0 >>
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy-arm v1.10.2 3fb95685d2d5 4 weeks ago 87.3MB
k8s.gcr.io/kube-scheduler-arm v1.10.2 816c40ff51c0 4 weeks ago 43.6MB
weaveworks/weave-npc 2.3.0 e214242c20cf 7 weeks ago 44.5MB
k8s.gcr.io/etcd-arm 3.1.12 88c32b5960ff 2 months ago 178MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm 1.14.8 18622f52ae14 4 months ago 37.5MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'dpkg-query -l kube* docker-ce'
master | SUCCESS | rc=0 >>
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-=======================-============-====================================================
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
node1 | SUCCESS | rc=0 >>
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-=======================-============-====================================================
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
node2 | SUCCESS | rc=0 >>
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-=======================-============-====================================================
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
pi@ansible-node ~/git/rak8s (testing) $ ansible all -m shell -a 'uname -a'
node1 | SUCCESS | rc=0 >>
Linux node1 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
master | SUCCESS | rc=0 >>
Linux master 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
node2 | SUCCESS | rc=0 >>
Linux node2 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
Thanks a lot!
@tedsluis
Master:
k8s.gcr.io/kube-controller-manager-arm v1.10.3 a71104d44337 11 days ago 129MB
k8s.gcr.io/kube-apiserver-arm v1.10.3 c02312021f68 11 days ago 206MB
k8s.gcr.io/kube-proxy-arm v1.10.3 b758647abd62 11 days ago 87.3MB
k8s.gcr.io/kube-scheduler-arm v1.10.3 5fb13ffe05ac 11 days ago 43.6MB
weaveworks/weave-npc 2.3.0 e214242c20cf 7 weeks ago 44.5MB
weaveworks/weave-kube 2.3.0 10ead2ac9c17 7 weeks ago 88.8MB
k8s.gcr.io/etcd-arm 3.1.12 88c32b5960ff 2 months ago 178MB
coredns/coredns 1.0.6 628dc9270a6f 3 months ago 29.6MB
k8s.gcr.io/k8s-dns-sidecar-arm 1.14.8 ca3b0c0df151 4 months ago 37.1MB
k8s.gcr.io/k8s-dns-kube-dns-arm 1.14.8 764a4d0d27e2 4 months ago 44.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm 1.14.8 18622f52ae14 4 months ago 37.5MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
Linux kube-master 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
Workers:
jessestuart/tiller v2.9.1 054e4e91a30d 2 days ago 36.5MB
k8s.gcr.io/kube-proxy-arm v1.10.3 b758647abd62 11 days ago 87.3MB
metallb/speaker v0.6.2 377f5a0b7d1f 4 weeks ago 38.6MB
metallb/controller v0.6.2 4f5ec82738ef 4 weeks ago 37.6MB
ubuntu latest 1dfc5e34223d 4 weeks ago 72.7MB
weaveworks/weave-npc 2.3.0 982e879f62a9 7 weeks ago 49.4MB
weaveworks/weave-kube 2.3.0 75770647069a 7 weeks ago 97.8MB
k8s.gcr.io/kubernetes-dashboard-arm v1.8.3 beeb0f2940ae 3 months ago 98.2MB
k8s.gcr.io/pause-arm64
ii docker-ce 18.04.0~ce~3-0~ubuntu arm64 Docker: the open-source application container engine
ii kubeadm 1.10.2-00 arm64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 arm64 Kubernetes Command Line Tool
ii kubelet 1.10.2-00 arm64 Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 arm64 Kubernetes CNI
Linux kube-node-X 4.4.77-rockchip-ayufan-136 #1 SMP Thu Oct 12 09:14:48 UTC 2017 aarch64 aarch64 aarch64 GNU/Linux
We are also working on building a basic support matrix over here if you want to help.
So I can confirm that it doesn't work on Debian Jessie.
For the last few weeks I did a lot of testing on this. I was not able to get a Kubernetes cluster of version 1.10.1, 1.10.2, or 1.10.3 running on my 3 Raspberry Pi 3B+ nodes in combination with docker-ce 18.04 or 18.03. It always failed on TASK [master : Initialize Master] with the kubelet error mentioned above.
Many others reported similar issues on Raspberry with Kubernetes 1.10.x at https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aissue+raspberry , so I decided to go back to a version that was reported to work okay.
When I tried Kubernetes 1.9.7 in combination with docker-ce 18.04, it successfully passed the TASK [master : Initialize Master], but then the script failed on the TASK [master : Install Weave (Networking)]. I had also read that other Kubernetes projects on Raspberry Pi had network issues with weave and moved to flannel, so I tried that. It worked immediately.
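For reference, the swap from Weave to Flannel can be sketched roughly as follows. This is a sketch, not the exact playbook change: the manifest URLs and the v0.10.0 tag are assumptions, and with flannel the cluster has to be initialized with --pod-network-cidr=10.244.0.0/16.

```shell
# Remove the Weave manifest (generated to match the running kubectl version).
kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Install flannel v0.10.0. Note: kubeadm init must have been run with
# --pod-network-cidr=10.244.0.0/16 for flannel's default config to work.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
```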
I have now added versioning to my rak8s cluster: the versions of docker-ce, kubeadm, kubectl, kubelet, the Kubernetes images, and flannel are all configurable in the inventory file, which makes it easy to test known-good version combinations.
My Inventory:
master ansible_host=192.168.11.210
node1 ansible_host=192.168.11.211
node2 ansible_host=192.168.11.212
[master]
master
[all:vars]
kubernetes_package_version="1.9.7-00"
# Available versions:
# 1.10.3-00
# 1.10.2-00
# 1.10.1-00
# 1.10.0-00
# 1.9.8-00
# 1.9.7-00
# 1.9.6-00
# 1.9.5-00
kubernetes_version="v1.9.7"
# Available versions:
# v1.10.3
# v1.10.2
# v1.10.1
# v1.10.0
# v1.9.8
# v1.9.7
# v1.9.6
# v1.9.5
docker_ce_version="18.04.0~ce~3-0~raspbian"
# Available versions:
# 18.05.0~ce~3-0~raspbian
# 18.04.0~ce~3-0~raspbian
# 18.03.1~ce-0~raspbian
# 18.03.0~ce-0~raspbian
# 18.02.0~ce-0~raspbian
# 18.01.0~ce-0~raspbian
flannel_version="v0.10.0"
# v0.10.0
# v0.9.1
# v0.9.0
# v0.8.0
# v0.7.1
# v0.7.0
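To keep apt from silently upgrading past the pinned versions between playbook runs, the same versions can also be pinned on each node. A sketch of /etc/apt/preferences.d/kubernetes (the filename is an assumption; the versions mirror the inventory above):

```
Package: kubeadm kubectl kubelet kubernetes-cni
Pin: version 1.9.7-00
Pin-Priority: 1001

Package: docker-ce
Pin: version 18.04.0~ce~3-0~raspbian
Pin-Priority: 1001
```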
Furthermore I have created a cleanup.yml playbook that removes the docker-ce, kubeadm, kubelet, and kubectl packages, the Docker images, and the pod logs. It also performs a reboot. This way I try to simulate a fresh install.
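Done manually, the cleanup the playbook performs amounts to roughly the following on each node (a sketch; the exact task list lives in cleanup.yml, and the paths are assumptions):

```shell
# Tear down kubernetes state, purge the packages, remove leftover
# container/node data and pod logs, then reboot for a near-fresh node.
kubeadm reset
apt-get purge -y kubeadm kubectl kubelet kubernetes-cni docker-ce
rm -rf /var/lib/docker /var/lib/kubelet /etc/kubernetes /var/log/pods
reboot
```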
I will do a pull request, so everyone can enjoy a working cluster again.
@tedsluis Very nice.
Just a heads up on the TASK [master : Initialize Master] issue on newer versions of Kubernetes (1.10.3+): there seems to be an upstream issue with etcd crypto on ARM devices that causes the newer versions to hang during install.
Unfortunately I have no idea how to fix it.
@jmreicha: yeah, I read about the etcd crypto issue on ARM. In one of my earlier posts I asked for a combination of versions that still works. You shared your working setup (thanks):
Quote jmreicha:
Master:
k8s.gcr.io/kube-controller-manager-arm v1.10.3 a71104d44337 11 days ago 129MB
k8s.gcr.io/kube-apiserver-arm v1.10.3 c02312021f68 11 days ago 206MB
k8s.gcr.io/kube-proxy-arm v1.10.3 b758647abd62 11 days ago 87.3MB
k8s.gcr.io/kube-scheduler-arm v1.10.3 5fb13ffe05ac 11 days ago 43.6MB
weaveworks/weave-npc 2.3.0 e214242c20cf 7 weeks ago 44.5MB
weaveworks/weave-kube 2.3.0 10ead2ac9c17 7 weeks ago 88.8MB
k8s.gcr.io/etcd-arm 3.1.12 88c32b5960ff 2 months ago 178MB
coredns/coredns 1.0.6 628dc9270a6f 3 months ago 29.6MB
k8s.gcr.io/k8s-dns-sidecar-arm 1.14.8 ca3b0c0df151 4 months ago 37.1MB
k8s.gcr.io/k8s-dns-kube-dns-arm 1.14.8 764a4d0d27e2 4 months ago 44.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-arm 1.14.8 18622f52ae14 4 months ago 37.5MB
k8s.gcr.io/pause-arm 3.1 e11a8cbeda86 5 months ago 374kB
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
Linux kube-master 4.14.34-v7+ #1110 SMP Mon Apr 16 15:18:51 BST 2018 armv7l GNU/Linux
I was not able to get this combination up and running. Can you reproduce the installation from a fresh Raspbian image and share your installation steps and logs, please? That would be very useful.
Error response from daemon: readlink /var/lib/docker/overlay2: invalid argument
This appears to be an error from Docker - it cannot navigate its own internal structures.
In similar situations I have been reduced to deleting the entire Docker directory and starting again.
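That recovery amounts to something like the sketch below. It is destructive: it deletes every image, container, and volume on the node, so only run it on a node you are rebuilding anyway.

```shell
# Stop the consumers of /var/lib/docker first, then wipe Docker's state
# and let the daemon recreate a clean overlay2 store on start.
systemctl stop kubelet docker
rm -rf /var/lib/docker
systemctl start docker kubelet
```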
@bboreham: Thanks for your remarks! I will keep that in mind.
@tedsluis I have taken a similar approach in pinning different versions. I basically forked the repo so that I could manage RPi and Rock64 boards, and ended up changing a bunch of other stuff. It's not on GitHub yet but should be soon 😄
Just rebuilt the cluster this morning with Docker 18.04 and Kubernetes 1.10.2, and it has been stable. The main difference in my setup is that the workers are Rock64 boards instead of RPis.
Master (RPi)
ii docker-ce 18.04.0~ce~3-0~raspbian armhf Docker: the open-source application container engine
ii kubeadm 1.10.2-00 armhf Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 armhf Kubernetes Command Line Tool
ii kubelet 1.10.2-00 armhf Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 armhf Kubernetes CNI
---
k8s.gcr.io/kube-proxy-arm v1.10.2 3fb95685d2d5 6 weeks ago 87.3MB
k8s.gcr.io/kube-apiserver-arm v1.10.2 c68f5521f86b 6 weeks ago 206MB
k8s.gcr.io/kube-scheduler-arm v1.10.2 816c40ff51c0 6 weeks ago 43.6MB
k8s.gcr.io/kube-controller-manager-arm v1.10.2 f67c023adb1b 6 weeks ago 129MB
Workers (Rock64)
ii docker-ce 18.04.0~ce~3-0~ubuntu arm64 Docker: the open-source application container engine
ii kubeadm 1.10.2-00 arm64 Kubernetes Cluster Bootstrapping Tool
ii kubectl 1.10.2-00 arm64 Kubernetes Command Line Tool
ii kubelet 1.10.2-00 arm64 Kubernetes Node Agent
ii kubernetes-cni 0.6.0-00 arm64 Kubernetes CNI
The only thing I haven't been able to get working yet is Weave fastdp (haven't tried Flannel). Apparently fastdp needs a kernel module I don't have and I haven't been brave enough to try a different kernel version yet.
(A Weave Net maintainer writes:) If you don't have the vxlan module, Weave Net should fall back to pcap. If you have some sort of error message or other interesting logs, please file an issue in the Weave Net repo.
@bboreham Yep it is working well, just no vxlan yet.
@jmreicha: Thanks for your update. I am curious which version of kubeadm you had at the time you deployed the cluster. Could that be 1.9.x? Could you redeploy your cluster on a fresh Raspbian image with kubeadm 1.10.x and 1.9.7 and then test whether it still runs?
I am able to deploy a cluster with kubeadm and kubelet both on version 1.9.7 in combination with Kubernetes images of version 1.10.1 and 1.10.2. Higher versions of kubeadm and kubelet cause issues. Of course, if I upgrade kubeadm to 1.10.x after the cluster deployment, the cluster keeps running.
@bboreham: Thanks for your note. I will keep that in mind when I return back to flannel.
I merged @tedsluis's changes from #34, but that still leaves TASK [master : Join Kubernetes Cluster] hanging. Not sure what the fix was supposed to be at this point.
@chris-short For what it's worth, I merged in @tedsluis's changes and I was able to stand up a cluster using kubeadm/kubelet version 1.9.7, Docker version 18.04, and flannel version 0.10.0.
If the merged changes work I'm okay to close 😀
FYI for anyone that's interested: a kernel update has been released that should get vxlan working with the RPi 3B+ (ARMv7+). You can get it by running rpi-update.
Details over at: https://github.com/raspberrypi/linux/issues/2580
OS running on Ansible host: Ubuntu 16.04
Ansible Version (ansible --version): 2.5.3
Uploaded logs showing errors (rak8s/.log/ansible.log)
Raspberry Pi Hardware Version: RPi 3B+
Raspberry Pi OS & Version (cat /etc/os-release): Raspbian GNU/Linux 9 (stretch)
Detailed description of the issue:
Receive the above logs on a fresh install on the master. I haven't played around with it yet but figured I would let you know.