kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube addons enable ingress fails #12293

Closed dctwan closed 3 years ago

dctwan commented 3 years ago

Command required to reproduce the issue: minikube addons enable ingress

Full output of the failed command:

▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...

❌ Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]

😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose

Please attach the following file to the GitHub issue:
- /tmp/minikube_addons_bf1031aaca0ed6f7b83a08d9bf0cd116bd9110a7_0.log

Output of the minikube logs command:

Log file created at: 2021/08/17 11:04:44
Running on machine: dctwan-VirtualBox
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0817 11:04:44.590039 48432 out.go:286] Setting OutFile to fd 1 ...
I0817 11:04:44.590160 48432 out.go:338] isatty.IsTerminal(1) = true
I0817 11:04:44.590163 48432 out.go:299] Setting ErrFile to fd 2...
I0817 11:04:44.590166 48432 out.go:338] isatty.IsTerminal(2) = true
I0817 11:04:44.590260 48432 root.go:312] Updating PATH: /home/dctwan/.minikube/bin
W0817 11:04:44.590344 48432 root.go:291] Error reading config file at /home/dctwan/.minikube/config/config.json: open /home/dctwan/.minikube/config/config.json: no such file or directory
I0817 11:04:44.590429 48432 mustload.go:65] Loading cluster: minikube
I0817 11:04:44.591272 48432 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0817 11:04:44.629627 48432 host.go:66] Checking if "minikube" exists ...
I0817 11:04:44.629925 48432 api_server.go:164] Checking apiserver status ...
I0817 11:04:44.629957 48432 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.
I0817 11:04:44.630020 48432 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0817 11:04:44.673025 48432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/dctwan/.minikube/machines/minikube/id_rsa Username:docker}
I0817 11:04:44.780361 48432 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup
I0817 11:04:44.792953 48432 api_server.go:180] apiserver freezer: "5:freezer:/docker/0a572bde9770b8ccc3b3a67d14b83228934120e85a884e4e8fe3c0877da6a930/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/e4a0725e082e1649f34519af62d30864fb15dd2d2db90c06cfb718ac2a413fb0"

Operating system version used: Ubuntu 20.04
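Editor's note: the MK_ADDON_ENABLE timeout above generally means the ingress-nginx pods never became Ready before minikube gave up waiting. A minimal diagnostic sketch (assuming kubectl is already pointed at the minikube cluster; pod names and namespaces may differ across minikube versions):

```shell
# List the ingress-nginx pods and their current state
kubectl get pods -n ingress-nginx

# Show events for the controller pods, selected by the same label
# minikube waits on (app.kubernetes.io/name=ingress-nginx)
kubectl describe pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
```

Image-pull failures typically surface here as ErrImagePull or ImagePullBackOff in the pod events, which distinguishes a registry-access problem from a genuinely crashing controller.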

andriyDev commented 3 years ago

Could you provide the full set of logs using minikube logs -f file.txt? We have a main issue here https://github.com/kubernetes/minikube/issues/10544. The current workaround is minikube delete --all --purge and restart, but I've seen several situations where this hasn't worked.
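The workaround mentioned here can be run as follows (a sketch; note that --all --purge deletes every profile and the ~/.minikube directory, so cached images and preloads are re-downloaded on the next start):

```shell
# Tear down all minikube profiles and wipe ~/.minikube entirely
minikube delete --all --purge

# Recreate the cluster and re-enable the addon
minikube start
minikube addons enable ingress
```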

Bytesu commented 3 years ago

Could you provide the full set of logs using minikube logs -f file.txt? We have a main issue here #10544. The current workaround is minikube delete --all --purge and restart, but I've seen several situations where this hasn't worked.

==> Audit <==
|---------|------|----------|------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile  | User | Version |          Start Time           |           End Time            |
|---------|------|----------|------|---------|-------------------------------|-------------------------------|
|  start  |      | minikube | k8s  | v1.22.0 | Sat, 21 Aug 2021 14:42:52 CST | Sat, 21 Aug 2021 14:48:44 CST |
|---------|------|----------|------|---------|-------------------------------|-------------------------------|

==> Last Start <==
Log file created at: 2021/08/21 14:42:52
Running on machine: iZ8vb1cgnytsf2bnokinqcZ
Binary: Built with gc go1.16.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0821 14:42:52.714001 3693658 out.go:286] Setting OutFile to fd 1 ...
I0821 14:42:52.714102 3693658 out.go:338] isatty.IsTerminal(1) = true
I0821 14:42:52.714105 3693658 out.go:299] Setting ErrFile to fd 2...
I0821 14:42:52.714110 3693658 out.go:338] isatty.IsTerminal(2) = true
I0821 14:42:52.714252 3693658 root.go:312] Updating PATH: /home/k8s/.minikube/bin
W0821 14:42:52.714384 3693658 root.go:291] Error reading config file at /home/k8s/.minikube/config/config.json: open /home/k8s/.minikube/config/config.json: no such file or directory
I0821 14:42:52.714604 3693658 out.go:293] Setting JSON to false
I0821 14:42:52.721184 3693658 start.go:111] hostinfo: {"hostname":"iZ8vb1cgnytsf2bnokinqcZ","uptime":1376177,"bootTime":1628151995,"procs":137,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.6.1810","kernelVersion":"3.10.0-957.5.1.el7.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"guest","hostId":"20190215-1721-0859-0907-433256076310"}
I0821 14:42:52.721283 3693658 start.go:121] virtualization: guest
I0821 14:42:52.723904 3693658 out.go:165] 😄 minikube v1.22.0 on Centos 7.6.1810 (amd64)
I0821 14:42:52.724092 3693658 notify.go:169] Checking for updates...
I0821 14:42:52.724139 3693658 driver.go:335] Setting default libvirt URI to qemu:///system I0821 14:42:52.724164 3693658 global.go:111] Querying for installed drivers using PATH=/home/k8s/.minikube/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin I0821 14:42:52.724191 3693658 global.go:119] podman default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/} I0821 14:42:52.724229 3693658 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0821 14:42:52.724271 3693658 global.go:119] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/} I0821 14:42:52.724293 3693658 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/} I0821 14:42:52.776248 3693658 docker.go:132] docker version: linux-20.10.8 I0821 14:42:52.776432 3693658 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0821 14:42:52.889808 3693658 info.go:263] docker info: {ID:4WMR:UP4H:5ASK:OCLJ:XVR5:2O55:DG4Z:QK4S:TS32:VSBH:KQBS:QIDJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd 
gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-21 14:42:52.81152754 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-957.5.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8201084928 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:iZ8vb1cgnytsf2bnokinqcZ Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. 
Version:v0.6.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}} I0821 14:42:52.890000 3693658 docker.go:244] overlay module found I0821 14:42:52.890012 3693658 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0821 14:42:52.890064 3693658 global.go:119] kvm2 default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/} I0821 14:42:52.898106 3693658 global.go:119] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:} I0821 14:42:52.898154 3693658 driver.go:270] not recommending "ssh" due to default: false I0821 14:42:52.898173 3693658 driver.go:305] Picked: docker I0821 14:42:52.898179 3693658 driver.go:306] Alternatives: [ssh] I0821 14:42:52.898183 3693658 driver.go:307] Rejects: [podman virtualbox vmware kvm2 none] I0821 14:42:52.900160 3693658 out.go:165] ✨ Automatically selected the docker driver I0821 14:42:52.900195 3693658 start.go:278] selected driver: docker I0821 14:42:52.900203 3693658 start.go:751] validating driver "docker" against I0821 14:42:52.900220 3693658 start.go:762] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0821 14:42:52.900344 3693658 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0821 14:42:53.010071 3693658 info.go:263] docker info: {ID:4WMR:UP4H:5ASK:OCLJ:XVR5:2O55:DG4Z:QK4S:TS32:VSBH:KQBS:QIDJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 
DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2021-08-21 14:42:52.936143098 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-957.5.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8201084928 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:iZ8vb1cgnytsf2bnokinqcZ Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b63 Expected:v1.0.1-0-g4144b63} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings: ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. 
Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.8.0]] Warnings:}} I0821 14:42:53.010195 3693658 start_flags.go:261] no existing cluster config was found, will generate one from the flags I0821 14:42:53.011062 3693658 start_flags.go:342] Using suggested 2200MB memory alloc based on sys=7821MB, container=7821MB I0821 14:42:53.011205 3693658 start_flags.go:669] Wait components to verify : map[apiserver:true system_pods:true] I0821 14:42:53.011224 3693658 cni.go:93] Creating CNI manager for "" I0821 14:42:53.011235 3693658 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0821 14:42:53.011266 3693658 start_flags.go:275] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: 
LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0821 14:42:53.013368 3693658 out.go:165] 👍 Starting control plane node minikube in cluster minikube I0821 14:42:53.013419 3693658 cache.go:117] Beginning downloading kic base image for docker with docker I0821 14:42:53.014713 3693658 out.go:165] 🚜 Pulling base image ... I0821 14:42:53.014765 3693658 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker I0821 14:42:53.014865 3693658 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon I0821 14:42:53.053662 3693658 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local cache I0821 14:42:53.053890 3693658 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local cache directory I0821 14:42:53.054009 3693658 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local cache I0821 14:42:53.902545 3693658 preload.go:120] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 I0821 14:42:53.902618 3693658 cache.go:56] Caching tarball of preloaded images I0821 14:42:53.902817 3693658 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker I0821 14:42:53.905216 3693658 out.go:165] 💾 Downloading Kubernetes v1.21.2 preload ... 
I0821 14:42:53.905251 3693658 preload.go:238] getting checksum for preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 ... I0821 14:42:54.761356 3693658 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4?checksum=md5:343dc44f3ff6208ce0a6fd9c912bdb3d -> /home/k8s/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 I0821 14:45:27.272496 3693658 cache.go:159] Loading gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 from local cache I0821 14:45:27.272579 3693658 cache.go:169] failed to load gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79, will try remote image if available: tarball: open /home/k8s/.minikube/cache/kic/kicbase_v0.0.25@sha256_6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79.tar: no such file or directory I0821 14:45:27.272612 3693658 cache.go:171] Downloading gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local daemon I0821 14:45:27.272748 3693658 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon I0821 14:45:27.316026 3693658 image.go:238] Writing gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 to local daemon I0821 14:46:36.004715 3693658 preload.go:248] saving checksum for preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 ... I0821 14:46:36.004788 3693658 preload.go:255] verifying checksumm of /home/k8s/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4 ... 
I0821 14:46:37.629946 3693658 cache.go:59] Finished verifying existence of preloaded tar for v1.21.2 on docker I0821 14:46:37.630292 3693658 profile.go:148] Saving config to /home/k8s/.minikube/profiles/minikube/config.json ... I0821 14:46:37.630316 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/config.json: {Name:mkd421f13d2ed922ed665a184af4f9f3cf4403c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0821 14:48:01.398118 3693658 cache.go:179] failed to download gcr.io/k8s-minikube/kicbase:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79, will try fallback image if available: getting remote image: Get "https://gcr.io/v2/": dial tcp 142.250.157.82:443: i/o timeout I0821 14:48:01.398138 3693658 image.go:75] Checking for kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon I0821 14:48:01.440351 3693658 image.go:79] Found kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 in local docker daemon, skipping pull I0821 14:48:01.440376 3693658 cache.go:139] kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 exists in daemon, skipping load W0821 14:48:01.440416 3693658 out.go:230] ❗ minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.25, but successfully downloaded kicbase/stable:v0.0.25 as a fallback image I0821 14:48:01.440449 3693658 cache.go:205] Successfully downloaded all kic artifacts I0821 14:48:01.440509 3693658 start.go:313] acquiring machines lock for minikube: {Name:mk807cdad4b68a9a3dfcf988ede65b66724de54a Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0821 14:48:01.440623 3693658 start.go:317] acquired machines lock for "minikube" in 92.148µs I0821 14:48:01.440657 3693658 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: 
KicBaseImage:kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true} I0821 14:48:01.440755 3693658 start.go:126] createHost starting for "" (driver="docker") I0821 14:48:01.444398 3693658 out.go:192] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... 
I0821 14:48:01.444649 3693658 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0821 14:48:01.444685 3693658 client.go:168] LocalClient.Create starting I0821 14:48:01.444771 3693658 main.go:130] libmachine: Creating CA: /home/k8s/.minikube/certs/ca.pem I0821 14:48:01.769148 3693658 main.go:130] libmachine: Creating client certificate: /home/k8s/.minikube/certs/cert.pem I0821 14:48:01.841518 3693658 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0821 14:48:01.882024 3693658 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0821 14:48:01.882093 3693658 network_create.go:255] running [docker network inspect minikube] to gather additional debugging logs... I0821 14:48:01.882112 3693658 cli_runner.go:115] Run: docker network inspect minikube W0821 14:48:01.919991 3693658 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I0821 14:48:01.920023 3693658 network_create.go:258] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: []

stderr:
Error: No such network: minikube

I0821 14:48:01.920036 3693658 network_create.go:260] output of [docker network inspect minikube]:
-- stdout --
[]

-- /stdout --
stderr
Error: No such network: minikube

/stderr
I0821 14:48:01.920089 3693658 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0821 14:48:01.957568 3693658 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0001c22c8] misses:0}
I0821 14:48:01.957605 3693658 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0821 14:48:01.957625 3693658 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0821 14:48:01.957671 3693658 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0821 14:48:02.034689 3693658 network_create.go:90] docker network minikube 192.168.49.0/24 created I0821 14:48:02.034719 3693658 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container I0821 14:48:02.034789 3693658 cli_runner.go:115] Run: docker ps -a --format {{.Names}} I0821 14:48:02.073358 3693658 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0821 14:48:02.111105 3693658 oci.go:102] Successfully created a docker volume minikube I0821 14:48:02.111170 3693658 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -d /var/lib I0821 14:48:02.633510 3693658 oci.go:106] Successfully prepared a docker volume minikube W0821 14:48:02.633570 3693658 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0821 14:48:02.633584 3693658 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. I0821 14:48:02.633652 3693658 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0821 14:48:02.633776 3693658 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker I0821 14:48:02.633799 3693658 kic.go:179] Starting extracting preloaded images to volume ... 
I0821 14:48:02.633850 3693658 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/k8s/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir I0821 14:48:02.784613 3693658 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 I0821 14:48:03.308138 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}} I0821 14:48:03.368103 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:03.415844 3693658 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0821 14:48:03.525892 3693658 oci.go:278] the created container "minikube" has a running status. I0821 14:48:03.525917 3693658 kic.go:210] Creating ssh key for kic: /home/k8s/.minikube/machines/minikube/id_rsa... 
I0821 14:48:03.859062 3693658 kic_runner.go:188] docker (temp): /home/k8s/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0821 14:48:04.066187 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:04.120101 3693658 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0821 14:48:04.120118 3693658 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0821 14:48:09.953763 3693658 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/k8s/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 -I lz4 -xf /preloaded.tar -C /extractDir: (7.319871432s) I0821 14:48:09.953790 3693658 kic.go:188] duration metric: took 7.319993 seconds to extract preloaded images to volume I0821 14:48:09.953861 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:10.005386 3693658 machine.go:88] provisioning docker machine ... I0821 14:48:10.005437 3693658 ubuntu.go:169] provisioning hostname "minikube" I0821 14:48:10.005503 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0821 14:48:10.045254 3693658 main.go:130] libmachine: Using SSH client type: native I0821 14:48:10.045451 3693658 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x8022e0] 0x8022a0 [] 0s} 127.0.0.1 49242 } I0821 14:48:10.045461 3693658 main.go:130] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0821 14:48:10.173425 3693658 main.go:130] libmachine: SSH cmd err, output: : minikube

I0821 14:48:10.173491 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0821 14:48:10.214897 3693658 main.go:130] libmachine: Using SSH client type: native I0821 14:48:10.215162 3693658 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x8022e0] 0x8022a0 [] 0s} 127.0.0.1 49242 } I0821 14:48:10.215182 3693658 main.go:130] libmachine: About to run SSH command:

            if ! grep -xq '.*\sminikube' /etc/hosts; then
                    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                    else 
                            echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
                    fi
            fi

I0821 14:48:10.332798 3693658 main.go:130] libmachine: SSH cmd err, output: : 
I0821 14:48:10.332824 3693658 ubuntu.go:175] set auth options {CertDir:/home/k8s/.minikube CaCertPath:/home/k8s/.minikube/certs/ca.pem CaPrivateKeyPath:/home/k8s/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/k8s/.minikube/machines/server.pem ServerKeyPath:/home/k8s/.minikube/machines/server-key.pem ClientKeyPath:/home/k8s/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/k8s/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/k8s/.minikube}
I0821 14:48:10.332842 3693658 ubuntu.go:177] setting up certificates
I0821 14:48:10.332854 3693658 provision.go:83] configureAuth start
I0821 14:48:10.332905 3693658 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 14:48:10.373224 3693658 provision.go:137] copyHostCerts
I0821 14:48:10.373287 3693658 exec_runner.go:152] cp: /home/k8s/.minikube/certs/ca.pem --> /home/k8s/.minikube/ca.pem (1070 bytes)
I0821 14:48:10.373385 3693658 exec_runner.go:152] cp: /home/k8s/.minikube/certs/cert.pem --> /home/k8s/.minikube/cert.pem (1111 bytes)
I0821 14:48:10.373438 3693658 exec_runner.go:152] cp: /home/k8s/.minikube/certs/key.pem --> /home/k8s/.minikube/key.pem (1675 bytes)
I0821 14:48:10.373475 3693658 provision.go:111] generating server cert: /home/k8s/.minikube/machines/server.pem ca-key=/home/k8s/.minikube/certs/ca.pem private-key=/home/k8s/.minikube/certs/ca-key.pem org=k8s.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0821 14:48:10.639148 3693658 provision.go:171] copyRemoteCerts
I0821 14:48:10.639202 3693658 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0821 14:48:10.639257 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:10.679975 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker}
I0821 14:48:10.765504 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0821 14:48:10.784817 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0821 14:48:10.803500 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I0821 14:48:10.822056 3693658 provision.go:86] duration metric: configureAuth took 489.184788ms
I0821 14:48:10.822073 3693658 ubuntu.go:193] setting minikube options for container-runtime
I0821 14:48:10.822292 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:10.862421 3693658 main.go:130] libmachine: Using SSH client type: native
I0821 14:48:10.862617 3693658 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x8022e0] 0x8022a0 [] 0s} 127.0.0.1 49242 }
I0821 14:48:10.862626 3693658 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0821 14:48:10.979535 3693658 main.go:130] libmachine: SSH cmd err, output: : overlay

I0821 14:48:10.979554 3693658 ubuntu.go:71] root file system type: overlay
I0821 14:48:10.979741 3693658 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
I0821 14:48:10.979798 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:11.021169 3693658 main.go:130] libmachine: Using SSH client type: native
I0821 14:48:11.021341 3693658 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x8022e0] 0x8022a0 [] 0s} 127.0.0.1 49242 }
I0821 14:48:11.021428 3693658 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0821 14:48:11.147588 3693658 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0821 14:48:11.147655 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:11.188262 3693658 main.go:130] libmachine: Using SSH client type: native
I0821 14:48:11.188448 3693658 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x8022e0] 0x8022a0 [] 0s} 127.0.0.1 49242 }
I0821 14:48:11.188463 3693658 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0821 14:48:12.028069 3693658 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2021-08-21 06:48:11.145337816 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0821 14:48:12.028112 3693658 machine.go:91] provisioned docker machine in 2.022710435s
I0821 14:48:12.028125 3693658 client.go:171] LocalClient.Create took 10.583434327s
I0821 14:48:12.028138 3693658 start.go:168] duration metric: libmachine.API.Create for "minikube" took 10.583488049s
I0821 14:48:12.028154 3693658 start.go:267] post-start starting for "minikube" (driver="docker")
I0821 14:48:12.028161 3693658 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0821 14:48:12.028229 3693658 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0821 14:48:12.028286 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:12.070372 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker}
I0821 14:48:12.155629 3693658 ssh_runner.go:149] Run: cat /etc/os-release
I0821 14:48:12.158859 3693658 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0821 14:48:12.158873 3693658 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0821 14:48:12.158882 3693658 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0821 14:48:12.158905 3693658 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0821 14:48:12.158918 3693658 filesync.go:126] Scanning /home/k8s/.minikube/addons for local assets ...
I0821 14:48:12.158986 3693658 filesync.go:126] Scanning /home/k8s/.minikube/files for local assets ...
I0821 14:48:12.159008 3693658 start.go:270] post-start completed in 130.847812ms
I0821 14:48:12.159282 3693658 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 14:48:12.199153 3693658 profile.go:148] Saving config to /home/k8s/.minikube/profiles/minikube/config.json ...
I0821 14:48:12.199395 3693658 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0821 14:48:12.199434 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:12.241619 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker}
I0821 14:48:12.323106 3693658 start.go:129] duration metric: createHost completed in 10.882332931s
I0821 14:48:12.323123 3693658 start.go:80] releasing machines lock for "minikube", held for 10.882493029s
I0821 14:48:12.323205 3693658 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0821 14:48:12.363515 3693658 ssh_runner.go:149] Run: systemctl --version
I0821 14:48:12.363539 3693658 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0821 14:48:12.363565 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:12.363594 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0821 14:48:12.407945 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker}
I0821 14:48:12.408282 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker}
I0821 14:48:14.501768 3693658 ssh_runner.go:189] Completed: systemctl --version: (2.138226419s)
I0821 14:48:14.501841 3693658 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.138286827s)
I0821 14:48:14.501853 3693658 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
W0821 14:48:14.501869 3693658 start.go:655] [curl -sS -m 2 https://k8s.gcr.io/] failed: curl -sS -m 2 https://k8s.gcr.io/: Process exited with status 28
stdout:

stderr:
curl: (28) Connection timed out after 2000 milliseconds
W0821 14:48:14.502083 3693658 out.go:230] ❗ This container is having trouble accessing https://k8s.gcr.io
W0821 14:48:14.502159 3693658 out.go:230] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0821 14:48:14.513538 3693658 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0821 14:48:14.524209 3693658 cruntime.go:249] skipping containerd shutdown because we are bound to it
I0821 14:48:14.524257 3693658 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0821 14:48:14.534107 3693658 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0821 14:48:14.551899 3693658 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
I0821 14:48:14.626612 3693658 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
I0821 14:48:14.698380 3693658 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0821 14:48:14.708483 3693658 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0821 14:48:14.776286 3693658 ssh_runner.go:149] Run: sudo systemctl start docker
I0821 14:48:14.786443 3693658 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0821 14:48:14.846170 3693658 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0821 14:48:14.906887 3693658 out.go:192] 🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
I0821 14:48:14.906982 3693658 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0821 14:48:14.945761 3693658 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0821 14:48:14.949681 3693658 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0821 14:48:14.960138 3693658 preload.go:134] Checking if preload exists for k8s version v1.21.2 and runtime docker
I0821 14:48:14.960183 3693658 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0821 14:48:15.010825 3693658 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4

-- /stdout --
I0821 14:48:15.010843 3693658 docker.go:466] Images already preloaded, skipping extraction
I0821 14:48:15.010889 3693658 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0821 14:48:15.060991 3693658 docker.go:535] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.21.2
k8s.gcr.io/kube-scheduler:v1.21.2
k8s.gcr.io/kube-proxy:v1.21.2
k8s.gcr.io/kube-controller-manager:v1.21.2
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/pause:3.4.1
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/coredns/coredns:v1.8.0
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/metrics-scraper:v1.0.4

-- /stdout --
I0821 14:48:15.061022 3693658 cache_images.go:74] Images are preloaded, skipping loading
I0821 14:48:15.061076 3693658 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0821 14:48:15.165551 3693658 cni.go:93] Creating CNI manager for ""
I0821 14:48:15.165565 3693658 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0821 14:48:15.165575 3693658 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0821 14:48:15.165591 3693658 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.21.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0821 14:48:15.165738 3693658 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:

I0821 14:48:15.165827 3693658 kubeadm.go:909] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.21.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0821 14:48:15.165880 3693658 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.2
I0821 14:48:15.174204 3693658 binaries.go:44] Found k8s binaries, skipping transfer
I0821 14:48:15.174261 3693658 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0821 14:48:15.182364 3693658 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0821 14:48:15.196199 3693658 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0821 14:48:15.209784 3693658 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1867 bytes)
I0821 14:48:15.223088 3693658 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0821 14:48:15.226107 3693658 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0821 14:48:15.235541 3693658 certs.go:52] Setting up /home/k8s/.minikube/profiles/minikube for IP: 192.168.49.2
I0821 14:48:15.235569 3693658 certs.go:183] generating minikubeCA CA: /home/k8s/.minikube/ca.key
I0821 14:48:15.397127 3693658 crypto.go:157] Writing cert to /home/k8s/.minikube/ca.crt ...
I0821 14:48:15.397144 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/ca.crt: {Name:mk78049a0e4179336a027921a9db7ade040e6a7b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.397364 3693658 crypto.go:165] Writing key to /home/k8s/.minikube/ca.key ...
I0821 14:48:15.397372 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/ca.key: {Name:mk642bae8fe1f0a9896f7431d55b571067bca5df Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.397450 3693658 certs.go:183] generating proxyClientCA CA: /home/k8s/.minikube/proxy-client-ca.key
I0821 14:48:15.636546 3693658 crypto.go:157] Writing cert to /home/k8s/.minikube/proxy-client-ca.crt ...
I0821 14:48:15.636562 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/proxy-client-ca.crt: {Name:mk64c0db0f962dee545c12983e5c9bb65bdd1aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.636803 3693658 crypto.go:165] Writing key to /home/k8s/.minikube/proxy-client-ca.key ...
I0821 14:48:15.636810 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/proxy-client-ca.key: {Name:mk4675dfe6b292a1ea6601050dec7cbc727e492d Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.636946 3693658 certs.go:294] generating minikube-user signed cert: /home/k8s/.minikube/profiles/minikube/client.key
I0821 14:48:15.636960 3693658 crypto.go:69] Generating cert /home/k8s/.minikube/profiles/minikube/client.crt with IP's: []
I0821 14:48:15.799638 3693658 crypto.go:157] Writing cert to /home/k8s/.minikube/profiles/minikube/client.crt ...
I0821 14:48:15.799655 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/client.crt: {Name:mkd5f35936d649326a6826a51f781e9d9f35032b Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.799826 3693658 crypto.go:165] Writing key to /home/k8s/.minikube/profiles/minikube/client.key ...
I0821 14:48:15.799832 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/client.key: {Name:mkbb81396d16ee1588ac69d5f90e2fb2312d0031 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.799909 3693658 certs.go:294] generating minikube signed cert: /home/k8s/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0821 14:48:15.799913 3693658 crypto.go:69] Generating cert /home/k8s/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0821 14:48:15.878645 3693658 crypto.go:157] Writing cert to /home/k8s/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0821 14:48:15.878660 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk387c1d769045fc6f12340b0c2e8725e492ad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.878836 3693658 crypto.go:165] Writing key to /home/k8s/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0821 14:48:15.878847 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk611c37c38c2670d76158d473d6b1171303a589 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.878918 3693658 certs.go:305] copying /home/k8s/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/k8s/.minikube/profiles/minikube/apiserver.crt
I0821 14:48:15.879003 3693658 certs.go:309] copying /home/k8s/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/k8s/.minikube/profiles/minikube/apiserver.key
I0821 14:48:15.879052 3693658 certs.go:294] generating aggregator signed cert: /home/k8s/.minikube/profiles/minikube/proxy-client.key
I0821 14:48:15.879057 3693658 crypto.go:69] Generating cert /home/k8s/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0821 14:48:15.973731 3693658 crypto.go:157] Writing cert to /home/k8s/.minikube/profiles/minikube/proxy-client.crt ...
I0821 14:48:15.973746 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/proxy-client.crt: {Name:mk6898ce31362211f2088303d2f9c3bf160d66f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.973935 3693658 crypto.go:165] Writing key to /home/k8s/.minikube/profiles/minikube/proxy-client.key ...
I0821 14:48:15.973941 3693658 lock.go:36] WriteFile acquiring /home/k8s/.minikube/profiles/minikube/proxy-client.key: {Name:mk441129c250e02c6456a98f351a14ec076efcb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0821 14:48:15.974138 3693658 certs.go:369] found cert: /home/k8s/.minikube/certs/home/k8s/.minikube/certs/ca-key.pem (1675 bytes)
I0821 14:48:15.974173 3693658 certs.go:369] found cert: /home/k8s/.minikube/certs/home/k8s/.minikube/certs/ca.pem (1070 bytes)
I0821 14:48:15.974195 3693658 certs.go:369] found cert: /home/k8s/.minikube/certs/home/k8s/.minikube/certs/cert.pem (1111 bytes)
I0821 14:48:15.974215 3693658 certs.go:369] found cert: /home/k8s/.minikube/certs/home/k8s/.minikube/certs/key.pem (1675 bytes)
I0821 14:48:15.975469 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0821 14:48:15.996196 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0821 14:48:16.015052 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0821 14:48:16.033851 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0821 14:48:16.052541 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0821 14:48:16.071571 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0821 14:48:16.089771 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0821 14:48:16.107908 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0821 14:48:16.126389 3693658 ssh_runner.go:316] scp /home/k8s/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0821 14:48:16.144717 3693658 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0821 14:48:16.158040 3693658 ssh_runner.go:149] Run: openssl version
I0821 14:48:16.163747 3693658 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0821 14:48:16.172265 3693658 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0821 14:48:16.175554 3693658 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Aug 21 06:48 /usr/share/ca-certificates/minikubeCA.pem
I0821 14:48:16.175587 3693658 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0821 14:48:16.181591 3693658 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0821 14:48:16.189933 3693658 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:kicbase/stable:v0.0.25@sha256:6f936e3443b95cd918d77623bf7b595653bb382766e280290a02b4a349e88b79 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0821 14:48:16.190035 3693658 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s.*(kube-system) --format={{.ID}}
I0821 14:48:16.239042 3693658 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0821 14:48:16.247433 3693658 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0821 14:48:16.254972 3693658 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0821 14:48:16.255007 3693658 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0821 14:48:16.262493 3693658 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0821 14:48:16.262523 3693658 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0821 14:48:17.192597 3693658 out.go:192] ▪ Generating certificates and keys ... I0821 14:48:20.427081 3693658 out.go:192] ▪ Booting up control plane ... I0821 14:48:36.490053 3693658 out.go:192] ▪ Configuring RBAC rules ... 
I0821 14:48:36.907229 3693658 cni.go:93] Creating CNI manager for "" I0821 14:48:36.907244 3693658 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0821 14:48:36.907287 3693658 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0821 14:48:36.907423 3693658 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0821 14:48:36.907489 3693658 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.2/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=a03fbcf166e6f74ef224d4a63be4277d017bb62e minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_08_21T14_48_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0821 14:48:37.086282 3693658 ops.go:34] apiserver oom_adj: -16 I0821 14:48:37.086313 3693658 kubeadm.go:985] duration metric: took 178.937976ms to wait for elevateKubeSystemPrivileges. I0821 14:48:37.278793 3693658 kubeadm.go:392] StartCluster complete in 21.088866951s I0821 14:48:37.278830 3693658 settings.go:142] acquiring lock: {Name:mkdae8f3141afc32430f0ccedd7ee5abbd89dc87 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0821 14:48:37.278953 3693658 settings.go:150] Updating kubeconfig: /home/k8s/.kube/config I0821 14:48:37.280146 3693658 lock.go:36] WriteFile acquiring /home/k8s/.kube/config: {Name:mka686a246188495feda47c953e652776a25d51f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0821 14:48:37.797974 3693658 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0821 14:48:37.798025 3693658 start.go:220] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.21.2 ControlPlane:true Worker:true} I0821 14:48:37.799584 3693658 out.go:165] 🔎 Verifying Kubernetes components... 
I0821 14:48:37.798066 3693658 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0821 14:48:37.799659 3693658 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0821 14:48:37.798099 3693658 addons.go:342] enableAddons start: toEnable=map[], additional=[] I0821 14:48:37.799730 3693658 addons.go:59] Setting storage-provisioner=true in profile "minikube" I0821 14:48:37.799750 3693658 addons.go:135] Setting addon storage-provisioner=true in "minikube" W0821 14:48:37.799756 3693658 addons.go:147] addon storage-provisioner should already be in state true I0821 14:48:37.799779 3693658 host.go:66] Checking if "minikube" exists ... I0821 14:48:37.799801 3693658 addons.go:59] Setting default-storageclass=true in profile "minikube" I0821 14:48:37.799821 3693658 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0821 14:48:37.800123 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:37.800241 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:37.866019 3693658 out.go:165] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0821 14:48:37.866181 3693658 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml I0821 14:48:37.866194 3693658 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0821 14:48:37.866268 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0821 14:48:37.884793 3693658 addons.go:135] Setting addon default-storageclass=true in "minikube" W0821 14:48:37.884807 3693658 addons.go:147] addon default-storageclass should already be in state true I0821 14:48:37.884838 3693658 host.go:66] Checking if "minikube" exists 
... I0821 14:48:37.885381 3693658 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} I0821 14:48:37.920355 3693658 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf./i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.21.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0821 14:48:37.922135 3693658 api_server.go:50] waiting for apiserver process to appear ... I0821 14:48:37.922167 3693658 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.minikube.* I0821 14:48:37.931472 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker} I0821 14:48:37.967070 3693658 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml I0821 14:48:37.967086 3693658 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0821 14:48:37.967145 3693658 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0821 14:48:38.021934 3693658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49242 SSHKeyPath:/home/k8s/.minikube/machines/minikube/id_rsa Username:docker} I0821 14:48:38.080799 3693658 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0821 14:48:38.179843 3693658 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0821 14:48:38.481502 3693658 start.go:730] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS I0821 14:48:38.481576 3693658 api_server.go:70] duration metric: took 
683.520266ms to wait for apiserver process to appear ... I0821 14:48:38.481591 3693658 api_server.go:86] waiting for apiserver healthz status ... I0821 14:48:38.481607 3693658 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ... I0821 14:48:38.487108 3693658 api_server.go:265] https://192.168.49.2:8443/healthz returned 200: ok I0821 14:48:38.488105 3693658 api_server.go:139] control plane version: v1.21.2 I0821 14:48:38.488117 3693658 api_server.go:129] duration metric: took 6.520539ms to wait for apiserver health ... I0821 14:48:38.488132 3693658 system_pods.go:43] waiting for kube-system pods to appear ... I0821 14:48:38.496134 3693658 system_pods.go:59] 0 kube-system pods found I0821 14:48:38.496173 3693658 retry.go:31] will retry after 199.621189ms: only 0 pod(s) have shown up I0821 14:48:38.672950 3693658 out.go:165] 🌟 Enabled addons: storage-provisioner, default-storageclass I0821 14:48:38.672981 3693658 addons.go:344] enableAddons completed in 874.897666ms I0821 14:48:38.700740 3693658 system_pods.go:59] 1 kube-system pods found I0821 14:48:38.700762 3693658 system_pods.go:61] "storage-provisioner" [6ba50127-4536-462f-af9b-b097490a584e] Pending I0821 14:48:38.700810 3693658 retry.go:31] will retry after 281.392478ms: only 1 pod(s) have shown up I0821 14:48:38.985707 3693658 system_pods.go:59] 1 kube-system pods found I0821 14:48:38.985730 3693658 system_pods.go:61] "storage-provisioner" [6ba50127-4536-462f-af9b-b097490a584e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) 
I0821 14:48:38.985743 3693658 retry.go:31] will retry after 393.596217ms: only 1 pod(s) have shown up I0821 14:48:39.383309 3693658 system_pods.go:59] 1 kube-system pods found I0821 14:48:39.383328 3693658 system_pods.go:61] "storage-provisioner" [6ba50127-4536-462f-af9b-b097490a584e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0821 14:48:39.383340 3693658 retry.go:31] will retry after 564.853506ms: only 1 pod(s) have shown up I0821 14:48:43.875519 3693658 system_pods.go:59] 1 kube-system pods found I0821 14:48:43.875540 3693658 system_pods.go:61] "storage-provisioner" [6ba50127-4536-462f-af9b-b097490a584e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0821 14:48:43.875552 3693658 retry.go:31] will retry after 523.151816ms: only 1 pod(s) have shown up I0821 14:48:44.403778 3693658 system_pods.go:59] 5 kube-system pods found I0821 14:48:44.403797 3693658 system_pods.go:61] "etcd-minikube" [22b4990e-6c6a-4669-b507-6e2d1d092545] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0821 14:48:44.403801 3693658 system_pods.go:61] "kube-apiserver-minikube" [5e5abf95-dec5-4002-a2bc-05f837e24c01] Pending I0821 14:48:44.403806 3693658 system_pods.go:61] "kube-controller-manager-minikube" [dc3cf66d-7a52-4767-839e-5dab48ee82c2] Pending I0821 14:48:44.403812 3693658 system_pods.go:61] "kube-scheduler-minikube" [129f9e45-862b-45f1-a121-35cd9baebb53] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0821 14:48:44.403817 3693658 system_pods.go:61] "storage-provisioner" [6ba50127-4536-462f-af9b-b097490a584e] Pending: PodScheduled:Unschedulable (0/1 nodes 
are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0821 14:48:44.403823 3693658 system_pods.go:74] duration metric: took 5.91568607s to wait for pod list to return data ... I0821 14:48:44.403832 3693658 kubeadm.go:547] duration metric: took 6.605778954s to wait for : map[apiserver:true system_pods:true] ... I0821 14:48:44.403851 3693658 node_conditions.go:102] verifying NodePressure condition ... I0821 14:48:44.407175 3693658 node_conditions.go:122] node storage ephemeral capacity is 103078876Ki I0821 14:48:44.407191 3693658 node_conditions.go:123] node cpu capacity is 4 I0821 14:48:44.407207 3693658 node_conditions.go:105] duration metric: took 3.353247ms to run NodePressure ... I0821 14:48:44.407216 3693658 start.go:225] waiting for startup goroutines ... I0821 14:48:44.449419 3693658 start.go:462] kubectl: 1.22.0, cluster: 1.21.2 (minor skew: 1) I0821 14:48:44.451286 3693658 out.go:165] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

[Aug 5 08:26] ACPI: RSDP 00000000000f57e0 00014 (v00 BOCHS ) [ +0.000000] ACPI: RSDT 00000000bffe1b30 00034 (v01 BOCHS BXPCRSDT 00000001 BXPC 00000001) [ +0.000000] ACPI: FACP 00000000bffe0bfe 00074 (v01 BOCHS BXPCFACP 00000001 BXPC 00000001) [ +0.000000] ACPI: DSDT 00000000bffe0040 00BBE (v01 BOCHS BXPCDSDT 00000001 BXPC 00000001) [ +0.000000] ACPI: FACS 00000000bffe0000 00040 [ +0.000000] ACPI: SSDT 00000000bffe0c72 00D46 (v01 BOCHS BXPCSSDT 00000001 BXPC 00000001) [ +0.000000] ACPI: APIC 00000000bffe19b8 00090 (v01 BOCHS BXPCAPIC 00000001 BXPC 00000001) [ +0.000000] ACPI: SRAT 00000000bffe1a48 000E8 (v01 BOCHS BXPCSRAT 00000001 BXPC 00000001) [ +0.000000] Zone ranges: [ +0.000000] DMA [mem 0x00001000-0x00ffffff] [ +0.000000] DMA32 [mem 0x01000000-0xffffffff] [ +0.000000] Normal [mem 0x100000000-0x23fffffff] [ +0.000000] Movable zone start for each node [ +0.000000] Early memory node ranges [ +0.000000] node 0: [mem 0x00001000-0x0009efff] [ +0.000000] node 0: [mem 0x00100000-0xbffdffff] [ +0.000000] node 0: [mem 0x100000000-0x23fffffff] [ +0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 2064233 [ +0.000000] Policy zone: Normal [ +0.000000] ACPI: All ACPI Tables successfully acquired [ +0.086531] ACPI: Executed 2 blocks of module-level executable AML code [ +0.000714] ACPI: Enabled 16 GPEs in block 00 to 0F [ +0.002332] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. [ +0.106394] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11 [ +0.301595] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ +0.385110] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10 [ +0.091306] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10 [ +0.078787] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 11 [Aug 5 08:29] AliSecGuard: loading out-of-tree module taints kernel. [Aug 5 08:36] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). 
[ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +1.502272] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000004] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +3.798976] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +0.244127] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000004] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +36.009946] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000004] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +0.337480] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [Aug 5 08:37] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000004] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +1.534397] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000006] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +3.291819] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +0.139214] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [Aug 5 08:39] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000006] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +0.202369] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [Aug 5 08:43] TECH PREVIEW: Overlay filesystem may not be fully supported. Please review provided documentation for limitations. 
[Aug 7 11:55] atkbd serio0: Unknown key pressed (translated set 2, code 0x0 on isa0060/serio0). [ +0.000016] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [ +0.297537] atkbd serio0: Unknown key released (translated set 2, code 0x0 on isa0060/serio0). [ +0.000005] atkbd serio0: Use 'setkeycodes 00 ' to make it known. [Aug 7 12:19] TECH PREVIEW: nf_tables may not be fully supported. Please review provided documentation for limitations. -- Logs begin at Sat 2021-08-21 06:48:03 UTC. -- Aug 21 06:58:11 minikube kubelet[2346]: E0821 06:58:11.263685 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:58:25 minikube kubelet[2346]: E0821 06:58:25.264324 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:58:38 minikube kubelet[2346]: E0821 06:58:38.263263 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:58:51 minikube kubelet[2346]: E0821 06:58:51.263740 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:59:04 minikube kubelet[2346]: E0821 06:59:04.264347 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:59:16 minikube kubelet[2346]: E0821 06:59:16.263447 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:59:27 minikube kubelet[2346]: E0821 06:59:27.264769 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:59:41 minikube kubelet[2346]: E0821 06:59:41.264477 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" 
pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 06:59:56 minikube kubelet[2346]: E0821 06:59:56.264159 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:00:11 minikube kubelet[2346]: E0821 07:00:11.263991 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad -- No entries -- CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 359cb521bb0e1 jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 10 minutes ago Exited patch 0 0e7855c825a97 d45d96f2d92c3 jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 10 minutes ago Exited create 0 dc29314ebca10 41e68dea282d1 .:53 6e38f40d628db [INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031 CoreDNS-1.8.0 linux/amd64, go1.15.3, 054c9ae 11 minutes ago Running storage-provisioner 1 2cc8ac18b8a5b 488b50c7c3275 296a6d5035e2d 11 minutes ago Running coredns 0 57a9a734a4ef1 842a15b27b7d7 a6ebd1c1ad981 11 minutes ago Running kube-proxy 0 b9ff0d42e1d8f 3b2a630ee3be6 6e38f40d628db 11 minutes ago Exited storage-provisioner 0 2cc8ac18b8a5b a1992f15aae18 ae24db9aa2cc0 11 minutes ago Running kube-controller-manager 0 a70fe4e82c6fb 6efbd9bc7a4b5 0369cf4303ffd 11 minutes ago Running etcd 0 
55a6825a2ba42 3cf10f688c120 106ff58d43082 11 minutes ago Running kube-apiserver 0 4587f088ad519 9a1bc2be57670 f917b8c8f55b7 11 minutes ago Running kube-scheduler 0 15614647205b5 Name: minikube Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=a03fbcf166e6f74ef224d4a63be4277d017bb62e minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_08_21T14_48_36_0700 minikube.k8s.io/version=v1.22.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 21 Aug 2021 06:48:33 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 21 Aug 2021 07:00:17 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Sat, 21 Aug 2021 07:00:18 +0000 Sat, 21 Aug 2021 06:48:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 21 Aug 2021 07:00:18 +0000 Sat, 21 Aug 2021 06:48:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 21 Aug 2021 07:00:18 +0000 Sat, 21 Aug 2021 06:48:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 21 Aug 2021 07:00:18 +0000 Sat, 21 Aug 2021 06:48:33 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 4 ephemeral-storage: 103078876Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8008872Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 103078876Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8008872Ki pods: 110 System Info: Machine ID: 760e67beb8554645829f2357c8eb4ae7 System UUID: 1a44498b-0c8a-42d5-8b32-2fff4c89e1a7 Boot ID: b7219046-486f-42e7-a904-4b212a079609 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Ubuntu 20.04.2 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.7 Kubelet Version: v1.21.2 Kube-Proxy Version: v1.21.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age


ingress-nginx ingress-nginx-controller-59b45fb494-2tcfb 100m (2%) 0 (0%) 90Mi (1%) 0 (0%) 11m kube-system coredns-558bd4d5db-hh5hv 100m (2%) 0 (0%) 70Mi (0%) 170Mi (2%) 11m kube-system etcd-minikube 100m (2%) 0 (0%) 100Mi (1%) 0 (0%) 11m kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-proxy-9jpc8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 11m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits


cpu 850m (21%) 0 (0%) memory 260Mi (3%) 170Mi (2%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message


Normal NodeHasSufficientMemory 11m (x5 over 11m) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 11m (x5 over 11m) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 11m (x4 over 11m) kubelet Node minikube status is now: NodeHasSufficientPID Normal Starting 11m kubelet Starting kubelet. Normal NodeHasSufficientMemory 11m kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 11m kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 11m kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 11m kubelet Updated Node Allocatable limit across pods Normal Starting 11m kube-proxy Starting kube-proxy. Aug 21 07:00:23 minikube kubelet[2346]: E0821 07:00:23.264860 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:00:38 minikube kubelet[2346]: E0821 07:00:38.264157 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:00:49 minikube kubelet[2346]: E0821 07:00:49.264324 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image 
\\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:01:00 minikube kubelet[2346]: E0821 07:01:00.264173 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:01:14 minikube kubelet[2346]: E0821 07:01:14.263454 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:01:25 minikube kubelet[2346]: E0821 07:01:25.263352 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:01:36 minikube kubelet[2346]: E0821 07:01:36.264560 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" 
podUID=b658f80d-553d-4ceb-b612-02052ef45fad Aug 21 07:01:50 minikube kubelet[2346]: E0821 07:01:50.264303 2346 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with ImagePullBackOff: \"Back-off pulling image \\"k8s.gcr.io/ingress-nginx/controller:v0.44.0@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a\\"\"" pod="ingress-nginx/ingress-nginx-controller-59b45fb494-2tcfb" podUID=b658f80d-553d-4ceb-b612-02052ef45fad

jackhenhao commented 3 years ago

The exact command to reproduce the issue: minikube addons enable ingress

The full output of the failed command:

The output of the minikube logs command:

Operating system version: Ubuntu 20.04

  1. Upgrade minikube to the latest version.
  2. Check the pod events: the ingress-nginx controller image must be pulled successfully (the kubelet log above shows repeated ImagePullBackOff for `k8s.gcr.io/ingress-nginx/controller:v0.44.0`).
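A minimal sketch of those two checks (assumes a Linux x86_64 host and the default `ingress-nginx` namespace that the addon deploys into; this is an ops transcript, not a script to run blindly):

```shell
# 1. Upgrade minikube to the latest release.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# 2. Inspect the ingress controller pod and its events to confirm
#    the image pull succeeds (look for ImagePullBackOff / ErrImagePull).
kubectl -n ingress-nginx get pods
kubectl -n ingress-nginx describe pods -l app.kubernetes.io/name=ingress-nginx

# If k8s.gcr.io is unreachable from your network, recreating the cluster
# with a regional image mirror can help:
minikube delete
minikube start --image-mirror-country=cn
minikube addons enable ingress
```

The `--image-mirror-country=cn` flag tells minikube to pull system images from a mirror registry instead of `k8s.gcr.io`, which is the usual cause of this timeout when the registry is blocked.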