kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube start --driver=docker fails with multiple issues #14640

Closed jean-christophe-manciot closed 1 year ago

jean-christophe-manciot commented 2 years ago

What Happened?

Ubuntu 22.04
docker-ce 5:20.10.17~3-0~ubuntu-jammy
docker-ce-rootless-extras 5:20.10.17~3-0~ubuntu-jammy
minikube version: v1.26.0

With a fresh docker installation (no prior docker container exists), everything seems to work fine up to:

stderr:
Error: No such network: minikube

I0726 15:46:42.504226 1137474 network_create.go:277] output of [docker network inspect minikube]:
-- stdout --
[]
-- /stdout --
stderr:
Error: No such network: minikube
/stderr

I0726 15:46:42.504287 1137474 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"

stderr:
docker: Error response from daemon: Address already in use.
W0726 15:46:43.590117 1137474 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0726 15:46:43.590151 1137474 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0726 15:46:43.590207 1137474 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0726 15:46:43.648718 1137474 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8192mb -e container=docker --expose 8443 --volume=/home/actionmystique/src/Ansible/git-awx:/awx_devel --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95
I0726 15:46:44.048755 1137474 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0726 15:46:44.068643 1137474 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0726 15:46:44.088557 1137474 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables

Also:

$ docker ps --all
CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS         PORTS                                                                                                                                  NAMES
cbba2b27d6ea   gcr.io/k8s-minikube/kicbase:v0.0.32   "/usr/local/bin/entr…"   2 minutes ago   Up 2 minutes   127.0.0.1:49157->22/tcp, 127.0.0.1:49156->2376/tcp, 127.0.0.1:49155->5000/tcp, 127.0.0.1:49154->8443/tcp, 127.0.0.1:49153->32443/tcp   minikube
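For context (a diagnostic sketch, not part of the original report): the "Address already in use" error from the network creation step usually means the subnet minikube tries to reserve is already claimed by another Docker network or a host route. Which subnets are already allocated can be checked with standard Docker commands:

# List existing Docker networks
docker network ls

# Print the subnet(s) occupied by each network
docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'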

Attach the log file

โŒ Exiting due to DRV_CP_ENDPOINT: failed to lookup ip for "" ๐Ÿ’ก Suggestion:

Recreate the cluster by running:
minikube delete
minikube start

😿  If the above advice does not help, please let us know:
👉  https://github.com/kubernetes/minikube/issues/new/choose

Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.
Please also attach the following file to the GitHub issue:
- /tmp/minikube_logs_2e02dc5c5c2e1474337841f988877822b051a88f_0.log
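The DRV_CP_ENDPOINT "failed to lookup ip for \"\"" error indicates that minikube could not read an IP address for the control-plane container. As a quick check (a diagnostic sketch using the standard Docker CLI), the container's network attachments and assigned IPs can be printed with:

# Show which networks the minikube container is attached to and its IP on each
docker container inspect minikube --format '{{json .NetworkSettings.Networks}}'

If the minikube network entry is missing or has an empty IPAddress, the failed network creation shown above is the likely cause.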

Operating System

Ubuntu

Driver

Docker

jean-christophe-manciot commented 2 years ago

After:

The exact same issues occur when not mounting anything (except the one linked to the mount):

stderr:
Error: No such network: minikube

I0726 16:53:57.016423 507627 network_create.go:277] output of [docker network inspect minikube]:
-- stdout --
[]
-- /stdout --
stderr:
Error: No such network: minikube
/stderr

I0726 16:53:57.016488 507627 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0726 16:53:57.034074 507627 network_create.go:84] failed to get mtu information from the docker's default network "bridge": parse subnet for bridge: invalid CIDR address: 172.18.0.0/16fe80::/64
I0726 16:53:57.034821 507627 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000738500] misses:0}
I0726 16:53:57.035064 507627 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0726 16:53:57.035081 507627 network_create.go:115] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0726 16:53:57.035135 507627 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube

stderr:
docker: Error response from daemon: Address already in use.
W0726 16:53:58.285169 507627 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0726 16:53:58.285196 507627 oci.go:240] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
I0726 16:53:58.285243 507627 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0726 16:53:58.335031 507627 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=8192mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95
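The "invalid CIDR address: 172.18.0.0/16fe80::/64" warning above appears to come from the --format template used in the bridge inspect command: {{range .IPAM.Config}}{{.Subnet}}{{end}} concatenates the bridge network's IPv4 and IPv6 subnets into a single string when IPv6 is enabled on the default bridge. As a diagnostic sketch (standard Docker CLI, not part of the original report), the raw IPAM configuration can be viewed with:

# Show the IPv4/IPv6 subnets configured on the default bridge network
docker network inspect bridge --format '{{json .IPAM.Config}}'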

RA489 commented 2 years ago

/kind support

spowelljr commented 2 years ago

Hi @jean-christophe-manciot, I see that you removed ~/.minikube; I'm assuming you ran rm -rf? If you want to delete the minikube home directory, you should run minikube delete --all --purge, as that makes sure the clusters are actually removed. If you manually delete the directory instead, resources are left hanging around, which could be causing the "Address already in use" error.

I'd recommend you run minikube delete --all --purge, then docker system prune -a --volumes -f to prune any dangling volumes, and then try starting minikube again.
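Spelled out, that cleanup sequence looks like this (a sketch of the commands named above; all flags are standard minikube/Docker options):

# Delete all minikube clusters and purge the ~/.minikube directory
minikube delete --all --purge

# Remove unused containers, images, networks and volumes
docker system prune -a --volumes -f

# Start a fresh cluster with the docker driver
minikube start --driver=docker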

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/14640#issuecomment-1407480221):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.