Closed: itsecforu closed this issue 4 years ago.
I have the same issue; have you fixed this problem? @itsecforu
@majulong Hey! Which Docker version do you use?
The default one that Kubespray installs:
docker version
Client: Docker Engine - Community
 Version:           19.03.2
 API version:       1.39 (downgraded from 1.40)
 Go version:        go1.12.8
 Git commit:        6a30dfc
 Built:             Thu Aug 29 05:28:55 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:26:28 2019
  OS/Arch:          linux/amd64
  Experimental:     false
I manually installed Docker 18.06 (docker-ce-18.06.2.ce), as described here:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
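For anyone wanting to do the same, a minimal sketch of the manual install on a CentOS/RHEL node (this assumes the docker-ce yum repo is already configured; on Debian/Ubuntu the pinned package name is different):

# Install the pinned 18.06.2 package instead of the default 18.09/19.03 one
yum install -y docker-ce-18.06.2.ce
systemctl enable docker
systemctl start docker
docker version   # confirm the daemon now reports 18.06.2-ce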
Did it resolve the issue when using Docker 18.06?
Yep.
Hello everybody, please help me. I want to run the Kubernetes dashboard, but it shows this error: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3b35756d8ca381a6392a1134bd866ea7e668c85139166076248ccb03502dde21" network for pod "dashboard-metrics-scraper-76585494d8-7z5r8": networkPlugin cni failed to set up pod "dashboard-metrics-scraper-76585494d8-7z5r8_kubernetes-dashboard" network: stat /etc/kubernetes/ssl/kubecfg-kube-node.yaml: no such file or directory
I may have found a fix for this specific issue. There seem to be some issues with how /etc/systemd/system/docker.service.d/docker-options.conf is being created. It should be setting --exec-opt native.cgroupdriver=systemd
in there for any system that uses systemd as its init system. This is why the Kubernetes join fails: Docker is using the cgroupfs driver instead of systemd, and kubelet doesn't approve of that configuration.
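What that fix amounts to, roughly, is a drop-in along these lines. This is an illustrative sketch only: Kubespray normally generates this file itself, the exact content it templates out may differ, and the DOCKER_OPTS variable only takes effect if the docker.service unit in use references it in its ExecStart (as the Kubespray-managed unit is expected to do).

# Sketch of the drop-in with the systemd cgroup driver set (not the actual Kubespray template)
cat <<'EOF' > /etc/systemd/system/docker.service.d/docker-options.conf
[Service]
Environment="DOCKER_OPTS=--exec-opt native.cgroupdriver=systemd"
EOF
systemctl daemon-reload
systemctl restart docker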
I'm going to try to submit a PR soon to show how I was able to get past this error. I've tested it on Ubuntu 18.04. I just need to read the docs on submitting PRs for the first time, as I've not worked with this repo before.
I am seeing failures and this message as well:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
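(For anyone hitting the same warning, a quick way to confirm which driver the Docker daemon is actually using:)

# Prints "cgroupfs" before the fix and "systemd" once the drop-in above is applied and docker restarted
docker info --format '{{.CgroupDriver}}'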
I was able to make the message go away by changing two params, and this did not require a PR. @ServerNinja, I looked at your PR; do you think these two variables cover the changes you are looking for?
In inventory.ini:
[all:vars]
kubelet_cgroup_driver="systemd"
In group_vars/all/docker.yml (note the space at the end of the line):
docker_options: >-
--exec-opt native.cgroupdriver=systemd
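A change like this gets picked up by re-running the cluster playbook against the same inventory, for example (the inventory path here is illustrative; use your own):

# Re-apply the cluster configuration so docker and kubelet pick up the new cgroup driver
ansible-playbook -i inventory/mycluster/inventory.ini -b cluster.yml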
I am still having some other networking issue even after the change:
Feb 28 03:48:44 master1 kubelet[22331]: W0228 03:48:44.051330 22331 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 28 03:48:44 master1 kubelet[22331]: E0228 03:48:44.115156 22331 kubelet.go:2267] node "master-dummy" not found
Feb 28 03:48:44 master1 kubelet[22331]: E0228 03:48:44.198037 22331 eviction_manager.go:246] eviction manager: failed to get summary stats: failed to get node info: node "master-dummy" not found
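(The "no networks found in /etc/cni/net.d" warning above usually just means the CNI plugin has not written its config yet, typically because the network plugin pods are not running. A few quick checks, assuming a Calico or Flannel deployment via Kubespray:)

# Is there any CNI config on the node yet?
ls -l /etc/cni/net.d/
# Are the network plugin pods (calico/flannel) actually running?
kubectl -n kube-system get pods -o wide
# Any more detail from kubelet about the CNI failure?
journalctl -u kubelet --since "10 minutes ago" | grep -i cni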
Versions
Kubespray tag: v2.12.1
Kubernetes: v1.15.7
Ubuntu: 18.04.4 LTS
cloud_provider: aws

Client: Docker Engine - Community
 Version:           19.03.6
 API version:       1.39 (downgraded from 1.40)
 Go version:        go1.12.16
 Git commit:        369ce74a3c
 Built:             Thu Feb 13 01:27:49 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       2d0083d
  Built:            Thu Jun 27 17:23:02 2019
  OS/Arch:          linux/amd64
  Experimental:     false
I tried the instructions above and it didn't work for me. Still getting the same error
Edit: I'm dumb, I didn't turn the firewall off on my nodes, so this happened. After turning the firewall off / adding the correct firewall allow rules, it worked :+1:
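For reference, these are the kind of allow rules that matter on a control-plane node. The ufw syntax is just an example; the port list follows the upstream Kubernetes "ports and protocols" documentation for releases of this era, and your CNI may need additional ports (e.g. VXLAN or BGP).

ufw allow 6443/tcp        # kube-apiserver
ufw allow 2379:2380/tcp   # etcd server client API
ufw allow 10250/tcp       # kubelet API
ufw allow 10251/tcp       # kube-scheduler (older releases such as v1.15)
ufw allow 10252/tcp       # kube-controller-manager (older releases such as v1.15)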
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.