kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube v1.32.0 not starting on Ubuntu 22.04 #17784

Closed: simeonjackman closed this issue 4 months ago

simeonjackman commented 9 months ago

What Happened?

I'm trying to run minikube on Ubuntu 22.04 with Docker (version 24.0.6, build ed223bc) and get the following output:

$ minikube start
😄  minikube v1.32.0 on Ubuntu 22.04 (amd64)
✨  Automatically selected the docker driver. Other choices: none, ssh
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.28.3 preload ...
    > preloaded-images-k8s-v18-v1...:  403.35 MiB / 403.35 MiB  100.00% 41.24 M
🔥  Creating docker container (CPUs=2, Memory=16000MB) ...
✋  Stopping node "minikube"  ...
🔥  Deleting "minikube" in docker ...
🤦  StartHost failed, but will try again: creating host: create: provisioning: get ssh host-port: unable to inspect a not running container to get SSH port
🔥  Creating docker container (CPUs=2, Memory=16000MB) ...
😿  Failed to start docker container. Running "minikube delete" may fix it: creating host: create: provisioning: get ssh host-port: unable to inspect a not running container to get SSH port

❌  Exiting due to GUEST_PROVISION_CONTAINER_EXITED: Docker container exited prematurely after it was created, consider investigating Docker's performance/health.

Looking at the logs of the minikube container, I get the following output:

+ fix_cgroup
+ [[ -f /sys/fs/cgroup/cgroup.controllers ]]
+ echo 'INFO: detected cgroup v1'
INFO: detected cgroup v1
+ local current_cgroup
++ grep -E '^[^:]*:([^:]*,)?cpu(,[^,:]*)?:.*' /proc/self/cgroup
++ cut -d: -f3
+ current_cgroup=
+ '[' '' = / ']'
+ echo 'WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.'
WARN: cgroupns not enabled! Please use cgroup v2, or cgroup v1 with cgroupns enabled.
+ echo 'INFO: fix cgroup mounts for all subsystems'
INFO: fix cgroup mounts for all subsystems
+ local cgroup_subsystems
++ findmnt -lun -o source,target -t cgroup
++ grep -F ''
++ awk '{print $2}'
+ cgroup_subsystems=
+ local unsupported_cgroups
++ findmnt -lun -o source,target -t cgroup
++ grep_allow_nomatch -v -F ''
++ grep -v -F ''
++ awk '{print $2}'
++ [[ 1 == 1 ]]
+ unsupported_cgroups=
+ '[' -n '' ']'
+ local cgroup_mounts
++ grep -E -o '/[[:alnum:]].* /sys/fs/cgroup.*.*cgroup' /proc/self/mountinfo
++ true
+ cgroup_mounts=
+ [[ -n '' ]]
+ mount --make-rprivate /sys/fs/cgroup
mount: /sys/fs/cgroup: not mount point or bad option.
+ echo ''
+ IFS=
+ read -r subsystem
+ mount_kubelet_cgroup_root /kubelet ''
+ local cgroup_root=/kubelet
+ local subsystem=
+ '[' -z /kubelet ']'
+ mkdir -p //kubelet
+ '[' '' == /sys/fs/cgroup/cpuset ']'
+ mount --bind //kubelet //kubelet
+ mount_kubelet_cgroup_root /kubelet.slice ''
+ local cgroup_root=/kubelet.slice
+ local subsystem=
+ '[' -z /kubelet.slice ']'
+ mkdir -p //kubelet.slice
+ '[' '' == /sys/fs/cgroup/cpuset ']'
+ mount --bind //kubelet.slice //kubelet.slice
+ IFS=
+ read -r subsystem
+ [[ ! '' = */sys/fs/cgroup/systemd* ]]
+ mount_kubelet_cgroup_root /kubelet.slice /sys/fs/cgroup/systemd
+ local cgroup_root=/kubelet.slice
+ local subsystem=/sys/fs/cgroup/systemd
+ '[' -z /kubelet.slice ']'
+ mkdir -p /sys/fs/cgroup/systemd//kubelet.slice
mkdir: cannot create directory '/sys/fs/cgroup/systemd': No such file or directory
+ '[' /sys/fs/cgroup/systemd == /sys/fs/cgroup/cpuset ']'
+ mount --bind /sys/fs/cgroup/systemd//kubelet.slice /sys/fs/cgroup/systemd//kubelet.slice
mount: /sys/fs/cgroup/systemd//kubelet.slice: mount point does not exist.
+ echo 'fix_cgroup failed with exit code 32 (retry 7)'
fix_cgroup failed with exit code 32 (retry 7)
+ echo 'fix_cgroup diagnostics information below:'
fix_cgroup diagnostics information below:
+ mount
overlay on / type overlay (rw,relatime,lowerdir=/workspace/.docker-root/overlay2/l/RE2OAZLP3TBYI542NMKCYRO6AQ:/workspace/.docker-root/overlay2/l/5YKHXK5CWJA4NAHYV7QJ4KRCUL,upperdir=/workspace/.docker-root/overlay2/605b12f08eb3146c2d50a4f2e5da6d85882b8d8d6ea92abde383081fdb235185/diff,workdir=/workspace/.docker-root/overlay2/605b12f08eb3146c2d50a4f2e5da6d85882b8d8d6ea92abde383081fdb235185/work,userxattr)
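The trace above comes from the kicbase container's entrypoint: it detects cgroup v1 without cgroupns, then fails while trying to rebuild the legacy /sys/fs/cgroup hierarchy (exit code 32 is mount(8)'s generic mount-failure status). The overlay root under /workspace/.docker-root also suggests Docker is running with a non-default data-root. As a hedged sketch rather than a confirmed fix, the host's cgroup mode and Docker's cgroupns handling can be checked along these lines:

$ stat -fc %T /sys/fs/cgroup
# cgroup2fs => cgroup v2 (unified); tmpfs => cgroup v1 (legacy/hybrid)

$ docker info --format 'driver={{.CgroupDriver}} cgroup-version={{.CgroupVersion}}'

# On cgroup v1, per the WARN in the log, containers need a private cgroup
# namespace. One assumed workaround: make that the dockerd default and retry.
$ cat /etc/docker/daemon.json
{ "default-cgroupns-mode": "private" }
$ sudo systemctl restart docker
$ minikube delete && minikube start

The other route the warning suggests is booting the host with cgroup v2 (kernel parameter systemd.unified_cgroup_hierarchy=1), which is the Ubuntu 22.04 default, so a host reporting cgroup v1 here is itself worth investigating.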

Attach the log file

Operating System

Ubuntu

Driver

Docker

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 4 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/17784#issuecomment-2105939297):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
>
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
>
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.