SteveBisnett opened this issue 3 years ago
@SteveBisnett do you mind sharing the output of
minikube logs
Alternatively, I am curious if this flag helps you: minikube start --force-systemd
Is this running inside a VM or inside another container?
If this is running inside a container, one option would be using the none driver.
The original error comes from kubeadm init:
[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
: unexpected end of JSON input
Another thing to try would be the containerd runtime. Would this help?
minikube delete --all
minikube start --container-runtime=containerd
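For context: kubeadm's SystemVerification preflight runs docker info and parses the result as JSON, so an empty reply is exactly what produces "unexpected end of JSON input". A quick way to see what that check sees (a sketch; the exact format string kubeadm uses may differ):

$ docker info --format '{{json .}}'
$ echo $?

A healthy daemon prints one large JSON object and exits 0; empty output together with exit code 0 would reproduce the error above.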
So I have attempted to start with '--driver=none', since this is a VM, and I get the same results. It is as though Docker is not running, despite being able to get a status and run "Hello World".
Here is the output of the --container-runtime=containerd command:
[root@control-plane ~]# minikube start --container-runtime=containerd
X Exiting due to PROVIDER_DOCKER_NOT_RUNNING: expected version string format is "-". but got
Can you post the output of docker version --format "{{.Server.Os}}-{{.Server.Version}}"?
[root@control-plane ~]# sudo docker version
Client: Docker Engine - Community
 Version: 19.03.15
 API version: 1.40
 Go version: go1.13.15
 Git commit: 99e3ed8919
 Built: Sat Jan 30 03:16:44 2021
 OS/Arch: linux/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 19.03.15
  API version: 1.40 (minimum version 1.12)
  Go version: go1.13.15
  Git commit: 99e3ed8919
  Built: Sat Jan 30 03:15:19 2021
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.4.4
  GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version: 1.0.0-rc93
  GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683
Without the sudo.
Something like:
$ docker version --format "{{.Server.Os}}-{{.Server.Version}}"
linux-20.10.6
I can't. Despite following the instructions on "Manage Docker as a non-root user" found here (https://docs.docker.com/engine/install/linux-postinstall/), it will only respond when I use sudo.
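For reference, the sequence from that page is:

sudo groupadd docker            # may report the group already exists
sudo usermod -aG docker $USER
newgrp docker                   # or log out and back in
docker run hello-world          # should now work without sudo

The group change only takes effect after a fresh login (or newgrp), which is an easy step to miss.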
minikube is supposed to be able to detect the docker error, so for some reason we get an "OK" exit code - but no output?
Possibly we need to look out for empty ("") results from docker version and docker info, but I don't think that has been seen before.
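A minimal sketch of that check, assuming the goal is to treat "exit 0 but empty output" as a failure:

out=$(docker version --format "{{.Server.Os}}-{{.Server.Version}}")
rc=$?
if [ "$rc" -eq 0 ] && [ -z "$out" ]; then
  echo "docker exited 0 but produced no output"
fi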
Here is the output of 'docker info'... of course, with sudo:
[root@control-plane ~]# sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 9
  Running: 0
  Paused: 0
  Stopped: 9
 Images: 8
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-240.22.1.el8_3.x86_64
 Operating System: CentOS Linux 8
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 15.46GiB
 Name: control-plane.minikube.internal
 ID: EW3X:QRSM:A5XC:2HFJ:CNQP:2H3K:2TE4:7CJL:XUZJ:E37A:3LMN:35TR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
WARNING: API is accessible on http://127.0.0.1:2375 without encryption. Access to the remote API is equivalent to root access on the host. Refer to the 'Docker daemon attack surface' section in the documentation for more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
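That warning means dockerd was started with an extra unencrypted TCP listener, which is not a default setting. It is worth checking how the daemon is configured and whether the client is being pointed at that endpoint (a sketch, assuming a systemd-managed install):

systemctl cat docker | grep ExecStart
cat /etc/docker/daemon.json
env | grep -i docker_host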
We don't use sudo for docker, only for running podman...
It is kinda arbitrary, and some people prefer using "sudo docker" over adding their user to a root-equivalent group.
But it is a common setup: https://docs.docker.com/engine/install/linux-postinstall/ (sudo usermod -aG docker $USER)
What is the output and exit code of running docker without sudo?
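One way to capture both, separating stdout from stderr, since an empty stdout and a permission error on stderr are different failures:

docker info > /tmp/docker-info.out 2> /tmp/docker-info.err
echo "exit code: $?"
wc -c /tmp/docker-info.out /tmp/docker-info.err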
Anyway, can't reproduce this.
Here is what I get, after downgrading Docker from 20.10 to 19.03:
[admin@localhost ~]$ more /etc/redhat-release
CentOS Linux release 8.3.2011
[admin@localhost ~]$ docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:15:19 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.9
GitCommit: ea765aba0d05254012b0b9e595e995c09186427f
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
https://docs.docker.com/engine/install/centos/
yum install docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io-1.3.9
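The available pinned versions can be listed first (also from that install page):

yum list docker-ce --showduplicates | sort -r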
Here is the expected output, from a non-admin (unprivileged) user:
[luser@localhost ~]$ docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8919
Built: Sat Jan 30 03:16:44 2021
OS/Arch: linux/amd64
Experimental: false
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
[luser@localhost ~]$ echo $?
1
Running docker requires* the user to have admin/docker/root privileges.
* except for rootless, which isn't yet supported in minikube
So, I already executed that command (sudo usermod -aG docker $USER), but when running 'docker info' without sudo, it shows this:
[root@control-plane ~]# sudo usermod -aG docker $USER
[root@control-plane ~]# docker info
[root@control-plane ~]#
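Given the earlier daemon warning about tcp://127.0.0.1:2375, one thing worth ruling out is the client and daemon disagreeing on the endpoint; a sketch that queries each socket explicitly:

docker -H unix:///var/run/docker.sock info
echo $?
docker -H tcp://127.0.0.1:2375 info
echo $?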
So you get these issues inside, when you run the commands with minikube ssh on the node? And not outside on the host, as part of the verification before running the minikube start command?
As you are running as root (and not "docker" $USER) here, it should not be about permissions.
Still trying to reproduce. Why is it running as "root", and where did the "control-plane" host come from?
I get these when accessing the console directly and logging in as root.
Based upon your last posts, I reinstalled Docker and, after rebooting the system, used sudo -i and attempted to start minikube with the following command: minikube start --driver=none. This time I received a different response, but the cluster still did not start up...
[root@control-plane ~]# minikube start --driver=none
Preparing Kubernetes v1.20.2 on Docker 19.03.15 ...
Booting up control plane ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [control-plane.minikube.internal localhost] and IPs [172.30.228.212 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [control-plane.minikube.internal localhost] and IPs [172.30.228.212 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
stderr:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
#########################################################
Minikube attempted 3 times to access the kubelet, but was never successful. It errored out with the following:
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
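As an aside, the [WARNING IsDockerSystemdCheck] line above is the mismatch that minikube start --force-systemd was suggested for earlier. The equivalent manual fix on the Docker side is to switch the daemon's cgroup driver to systemd (a sketch, assuming there is no existing /etc/docker/daemon.json to merge with):

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
docker info --format '{{.CgroupDriver}}'   # should now print: systemd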
The none driver is very different from the docker driver.
For instance, you need to remember to disable SELinux and Firewalld.
https://minikube.sigs.k8s.io/docs/drivers/none/
It also doesn't see much testing in CI on Fedora or CentOS, #3552
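On CentOS, the checks that page asks for boil down to something like the following; note that setenforce only lasts until reboot, while /etc/selinux/config controls the persistent mode:

sestatus                                  # check the current SELinux state
sudo setenforce 0                         # permissive for this boot
sudo systemctl disable --now firewalld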
FirewallD is offline and disabled.
This is running in a VM, and it was recommended to run it with the none driver. Starting with Docker, I am still getting the same errors as before.
Sure, either should work. It just can be a bit hard to follow when mixing drivers... I still don't know what configuration would lead to docker outputting "empty"?
But this part is a bit strange; it makes you wonder what else was modified:
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
If I enable SELinux again (setenforce 1), then I get the same kind of timeout.
This is why it is a suspect. Enabling firewalld did get a proper warning message.
But at least I could reproduce the bug where the none driver sets the hostname...
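The rename can be confirmed and undone with hostnamectl; the name below is a placeholder for whatever the host was called before:

hostnamectl
sudo hostnamectl set-hostname <original-hostname>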
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Steps to reproduce the issue:
Minikube version: 1.18.1 (need to use this version as AWX has a bug related to 1.19)
Docker version: 19.03.15, build 99e3ed8919
Full output of failed command:
[ansible@control-plane ~]$ minikube start
stderr:
[WARNING IsDockerSystemdCheck]: detected "" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-socat]: socat not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR SystemVerification]: could not unmarshal the JSON output of 'docker info':
: unexpected end of JSON input
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

I have verified that docker is running:
[ansible@control-plane ~]$ sudo docker version
Client: Docker Engine - Community
 Version: 19.03.15
 API version: 1.40
 Go version: go1.13.15
 Git commit: 99e3ed8919
 Built: Sat Jan 30 03:16:44 2021
 OS/Arch: linux/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 19.03.15
  API version: 1.40 (minimum version 1.12)
  Go version: go1.13.15
  Git commit: 99e3ed8919
  Built: Sat Jan 30 03:15:19 2021
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.4.4
  GitCommit: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version: 1.0.0-rc93
  GitCommit: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683

[ansible@control-plane ~]$ sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 8
  Running: 0
  Paused: 0
  Stopped: 8
 Images: 8
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.18.0-240.22.1.el8_3.x86_64
 Operating System: CentOS Linux 8
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 15.46GiB
 Name: control-plane.minikube.internal
 ID: EW3X:QRSM:A5XC:2HFJ:CNQP:2H3K:2TE4:7CJL:XUZJ:E37A:3LMN:35TR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
[ansible@control-plane ~]$ minikube version
minikube version: v1.18.1
commit: 09ee84d530de4a92f00f1c5dbc34cead092b95bc