kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube v1.18.1 not finishing on Fedora 33 using Docker-CE #10754

Closed: FilBot3 closed this issue 2 years ago

FilBot3 commented 3 years ago


Steps to reproduce the issue (a sketch of the equivalent shell commands follows the list):

  1. Uninstall any existing traces of Docker.
  2. Install Docker-CE.
  3. Add the user to the docker group, then log out and back in.
  4. Stop the CRI-O socket.
  5. Start the Docker socket.
  6. Attempt to start minikube using the Docker driver.
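For readers following along, the steps above map roughly onto the commands below. This is a sketch only: the package set, the Docker repo URL, and the runtime unit names (notably `cri-o.socket`) are assumptions based on a stock Fedora 33 setup, not commands taken from this report.

```sh
# Steps 1-2: remove old Docker bits, then install Docker-CE from Docker's Fedora repo
sudo dnf remove -y docker docker-common docker-engine podman-docker
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# Step 3: add the user to the docker group, then log out and back in
sudo usermod -aG docker "$USER"

# Steps 4-5: swap the active container runtime sockets (exact unit names assumed)
sudo systemctl stop cri-o.socket crio.service
sudo systemctl enable --now docker.socket docker.service

# Step 6: attempt the start that never finishes
minikube start --driver=docker
```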

Full output of failed command:

The full log output was too large to attach here, so it was moved to a Gist.
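The capture itself appears to use a `tee` pipeline like the one visible at the top of the log below. A minimal sketch of producing and sharing such a log (the `gh gist create` step is an assumption for illustration; the report does not say how the Gist was created):

```sh
# Re-run the failing start with verbose output and keep a copy on disk
minikube start --driver=docker --alsologtostderr -v=7 2>&1 | tee minikube_start.log

# Share the oversized log as a Gist instead of pasting it inline (requires the GitHub CLI)
gh gist create minikube_start.log -d "minikube v1.18.1 start failure on Fedora 33"
```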

Optional: Full output of minikube logs command:

```
➜ ~ minikube logs 2>&1 | tee minikube_2021-03-08_Fedora-33_on_docker_log.log

==> Docker <==
-- Logs begin at Mon 2021-03-08 15:26:09 UTC, end at Mon 2021-03-08 15:40:51 UTC. --
Mar 08 15:26:10 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.303529236Z" level=info msg="Starting up"
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.304744093Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.304771773Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.304797624Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.304818666Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.306096022Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.306117433Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.306146319Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.306158117Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.341343472Z" level=info msg="Loading containers: start."
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.648226691Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.876665346Z" level=info msg="Loading containers: done."
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.904113788Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=btrfs version=20.10.3
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.904288353Z" level=info msg="Daemon has completed initialization"
Mar 08 15:26:10 minikube systemd[1]: Started Docker Application Container Engine.
Mar 08 15:26:10 minikube dockerd[148]: time="2021-03-08T15:26:10.952005907Z" level=info msg="API listen on /run/docker.sock"
Mar 08 15:26:11 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Mar 08 15:26:12 minikube systemd[1]: Stopping Docker Application Container Engine...
Mar 08 15:26:12 minikube dockerd[148]: time="2021-03-08T15:26:12.059972081Z" level=info msg="Processing signal 'terminated'"
Mar 08 15:26:12 minikube dockerd[148]: time="2021-03-08T15:26:12.060814429Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Mar 08 15:26:12 minikube dockerd[148]: time="2021-03-08T15:26:12.061317167Z" level=info msg="Daemon shutdown complete"
Mar 08 15:26:12 minikube systemd[1]: docker.service: Succeeded.
Mar 08 15:26:12 minikube systemd[1]: Stopped Docker Application Container Engine.
Mar 08 15:26:12 minikube systemd[1]: Starting Docker Application Container Engine...
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.118346358Z" level=info msg="Starting up"
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.120090429Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.120116330Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.120153423Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.120170287Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.121190926Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.121214961Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.121235963Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.121256749Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.149613638Z" level=info msg="Loading containers: start."
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.754084571Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 08 15:26:12 minikube dockerd[398]: time="2021-03-08T15:26:12.988673146Z" level=info msg="Loading containers: done."
Mar 08 15:26:13 minikube dockerd[398]: time="2021-03-08T15:26:13.010916550Z" level=info msg="Docker daemon" commit=46229ca graphdriver(s)=btrfs version=20.10.3
Mar 08 15:26:13 minikube dockerd[398]: time="2021-03-08T15:26:13.010986456Z" level=info msg="Daemon has completed initialization"
Mar 08 15:26:13 minikube systemd[1]: Started Docker Application Container Engine.
Mar 08 15:26:13 minikube dockerd[398]: time="2021-03-08T15:26:13.051781938Z" level=info msg="API listen on [::]:2376"
Mar 08 15:26:13 minikube dockerd[398]: time="2021-03-08T15:26:13.059294443Z" level=info msg="API listen on /var/run/docker.sock"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

==> describe nodes <==

==> dmesg <==
[Mar 8 15:12] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[ +0.344409] i2c_hid i2c-PNP0C50:00: supply vdd not found, using dummy regulator
[ +0.000043] i2c_hid i2c-PNP0C50:00: supply vddl not found, using dummy regulator
[ +4.090742] psmouse serio1: synaptics: Unable to query device: -5
[ +5.808006] psmouse serio1: Failed to enable mouse on isa0060/serio1
[ +0.885538] kauditd_printk_skb: 18 callbacks suppressed
[ +1.302707] systemd-sysv-generator[988]: SysV service '/etc/rc.d/init.d/livesys' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.000049] systemd-sysv-generator[988]: SysV service '/etc/rc.d/init.d/livesys-late' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
[ +0.095507] systemd[1]: /usr/lib/systemd/system/plymouth-start.service:15: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed.
[ +0.457583] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +0.040173] system76: loading out-of-tree module taints kernel.
[ +0.031877] iwlwifi 0000:00:14.3: api flags index 2 larger than supported by driver
[ +0.491417] thermal thermal_zone2: failed to read out thermal zone (-61)

==> kernel <==
15:40:52 up 28 min, 0 users, load average: 0.76, 1.39, 1.30
Linux minikube 5.10.19-200.fc33.x86_64 #1 SMP Fri Feb 26 16:21:30 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kubelet <==
-- Logs begin at Mon 2021-03-08 15:26:09 UTC, end at Mon 2021-03-08 15:40:52 UTC. --
Mar 08 15:40:50 minikube kubelet[120659]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:357 +0xd4
Mar 08 15:40:50 minikube kubelet[120659]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*manager).Start
Mar 08 15:40:50 minikube kubelet[120659]: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/manager.go:323 +0x608
Mar 08 15:40:51 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 412.
Mar 08 15:40:51 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Mar 08 15:40:51 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.812971 121003 server.go:416] Version: v1.20.2
Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.813255 121003 server.go:837] Client rotation is on, will bootstrap in background
Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.815285 121003 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 08 15:40:51 minikube kubelet[121003]: W0308 15:40:51.816315 121003 manager.go:159] Cannot detect current cgroup on cgroup v2
Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.816321 121003 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
Mar 08 15:40:51 minikube kubelet[121003]: W0308 15:40:51.876446 121003 fs.go:208] stat failed on /dev/mapper/luks-7b9b5355-def8-4c5d-a023-c3e66a3ab38d with error: no such file or directory
Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902462 121003 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to / Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902620 121003 container_manager_linux.go:274] container manager verified user specified cgroup-root exists: [] Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902641 121003 container_manager_linux.go:279] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902687 121003 topology_manager.go:120] [topologymanager] Creating topology manager with none policy per container scope Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902696 121003 container_manager_linux.go:310] [topologymanager] Initializing Topology Manager with none policy and container-level scope Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902702 121003 container_manager_linux.go:315] Creating device plugin manager: true Mar 08 15:40:51 minikube kubelet[121003]: W0308 15:40:51.902751 121003 kubelet.go:297] Using dockershim is deprecated, please consider using a full-fledged CRI implementation Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902773 121003 client.go:77] Connecting to docker on unix:///var/run/docker.sock Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.902783 121003 client.go:94] Start docker client with request timeout=2m0s Mar 08 15:40:51 minikube kubelet[121003]: W0308 15:40:51.912400 121003 docker_service.go:559] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.912438 121003 docker_service.go:240] Hairpin mode set to "hairpin-veth" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.918432 121003 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.928436 121003 docker_service.go:260] Docker Info: &{ID:SAPR:EYGW:PAHM:MS2I:7BPE:EROB:FTUB:IGPE:QO6K:WN5J:DTDG:YQIJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:btrfs DriverStatus:[[Build Version Btrfs v5.4.1 ] [Library Version 102]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:35 SystemTime:2021-03-08T15:40:51.918876195Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.10.19-200.fc33.x86_64 OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000a43880 NCPU:16 MemTotal:33530384384 GenericResources:[] 
DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:20.10.3 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[]} io.containerd.runtime.v1.linux:{Path:runc Args:[]} runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:[WARNING: No kernel memory TCP limit support WARNING: No oom kill disable support WARNING: Support for cgroup v2 is experimental]} Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.928525 121003 docker_service.go:273] Setting cgroupDriver to systemd Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939375 121003 remote_runtime.go:62] parsed scheme: "" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939398 121003 remote_runtime.go:62] scheme "" not registered, fallback to default scheme Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939431 121003 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939441 121003 clientconn.go:948] ClientConn switching balancer to "pick_first" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939485 121003 remote_image.go:50] parsed scheme: "" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939494 121003 remote_image.go:50] scheme "" not registered, fallback to default scheme Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939504 121003 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939511 121003 clientconn.go:948] ClientConn switching balancer to "pick_first" Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939542 121003 kubelet.go:262] Adding pod path: /etc/kubernetes/manifests Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.939570 121003 kubelet.go:273] Watching apiserver Mar 08 15:40:51 minikube kubelet[121003]: E0308 15:40:51.940513 121003 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused Mar 08 15:40:51 minikube kubelet[121003]: E0308 15:40:51.940533 121003 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused Mar 08 15:40:51 minikube kubelet[121003]: E0308 15:40:51.941504 121003 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 192.168.49.2:8443: connect: connection refused Mar 08 15:40:51 minikube kubelet[121003]: I0308 15:40:51.950587 121003 kuberuntime_manager.go:216] Container runtime docker initialized, version: 20.10.3, apiVersion: 1.41.0 Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.254312 121003 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. Mar 08 15:40:52 minikube kubelet[121003]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.254845 121003 server.go:1176] Started kubelet Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.255095 121003 server.go:148] Starting to listen on 0.0.0.0:10250 Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.255676 121003 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"minikube.166a6842bf72673d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"minikube", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc009af410f303f3d, ext:520973451, loc:(*time.Location)(0x70d1080)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc009af410f303f3d, ext:520973451, loc:(*time.Location)(0x70d1080)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events": dial tcp 192.168.49.2:8443: connect: connection refused'(may retry after sleeping) Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.256180 121003 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.256246 121003 volume_manager.go:271] Starting Kubelet Volume Manager Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.256351 121003 desired_state_of_world_populator.go:142] Desired state populator starts to run Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.256624 121003 server.go:410] Adding debug handlers to kubelet server. 
Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.256883 121003 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused
Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.257154 121003 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.268446 121003 client.go:86] parsed scheme: "unix"
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.268468 121003 client.go:86] scheme "unix" not registered, fallback to default scheme
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.268524 121003 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.268540 121003 clientconn.go:948] ClientConn switching balancer to "pick_first"
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.273531 121003 kubelet_network_linux.go:56] Initialized IPv4 iptables rules.
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.273572 121003 status_manager.go:158] Starting to sync pod status with apiserver
Mar 08 15:40:52 minikube kubelet[121003]: I0308 15:40:52.273610 121003 kubelet.go:1802] Starting kubelet main sync loop.
Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.273650 121003 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Mar 08 15:40:52 minikube kubelet[121003]: E0308 15:40:52.274305 121003 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused

==> Audit <==
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|
| config | set driver podman | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:47:58 CST | Sat, 06 Mar 2021 13:47:58 CST |
| stop | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:59:33 CST | Sat, 06 Mar 2021 13:59:34 CST |
| delete | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 13:59:39 CST | Sat, 06 Mar 2021 13:59:42 CST |
| stop | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:12:02 CST | Sat, 06 Mar 2021 14:12:04 CST |
| delete | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:12:10 CST | Sat, 06 Mar 2021 14:12:12 CST |
| stop | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:20:54 CST | Sat, 06 Mar 2021 14:20:55 CST |
| delete | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:20:58 CST | Sat, 06 Mar 2021 14:21:00 CST |
| stop | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:28:59 CST | Sat, 06 Mar 2021 14:29:00 CST |
| delete | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:29:03 CST | Sat, 06 Mar 2021 14:29:05 CST |
| stop | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:54:30 CST | Sat, 06 Mar 2021 14:54:31 CST |
| delete | | minikube | filbot | v1.18.1 | Sat, 06 Mar 2021 14:54:35 CST | Sat, 06 Mar 2021 14:54:37 CST |
| config | set driver docker | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:13:40 CST | Mon, 08 Mar 2021 09:13:40 CST |
| delete | | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:13:50 CST | Mon, 08 Mar 2021 09:13:51 CST |
| stop | | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:19:00 CST | Mon, 08 Mar 2021 09:19:01 CST |
| delete | | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:19:04 CST | Mon, 08 Mar 2021 09:19:08 CST |
| stop | | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:25:33 CST | Mon, 08 Mar 2021 09:25:34 CST |
| delete | | minikube | filbot | v1.18.1 | Mon, 08 Mar 2021 09:25:39 CST | Mon, 08 Mar 2021 09:25:43 CST |
|---------|-------------------|----------|--------|---------|-------------------------------|-------------------------------|

==> Last Start <==
Log file created at: 2021/03/08 09:26:05
Running on machine: oryx-fedora
Binary: Built with gc go1.16 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0308 09:26:05.752331 82739 out.go:239] Setting OutFile to fd 1 ...
I0308 09:26:05.752426 82739 out.go:291] isatty.IsTerminal(1) = false
I0308 09:26:05.752432 82739 out.go:252] Setting ErrFile to fd 2...
I0308 09:26:05.752436 82739 out.go:291] isatty.IsTerminal(2) = false
I0308 09:26:05.752611 82739 root.go:308] Updating PATH: /home/filbot/.minikube/bin
I0308 09:26:05.752937 82739 out.go:246] Setting JSON to false
I0308 09:26:05.770476 82739 start.go:108] hostinfo: {"hostname":"oryx-fedora","uptime":813,"bootTime":1615216353,"procs":437,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.10.19-200.fc33.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7872287a-f4bf-4f92-80bf-61cfbfe48c7e"}
I0308 09:26:05.771274 82739 start.go:118] virtualization: kvm host
I0308 09:26:05.813088 82739 out.go:129] * minikube v1.18.1 on Fedora 33
I0308 09:26:05.813419 82739 notify.go:126] Checking for updates...
I0308 09:26:05.813728 82739 driver.go:323] Setting default libvirt URI to qemu:///system I0308 09:26:05.877206 82739 docker.go:118] docker version: linux-20.10.5 I0308 09:26:05.877518 82739 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0308 09:26:05.970972 82739 info.go:253] docker info: {ID:ND34:PDT5:B6C4:ZGSH:JMA2:3XZ4:FO7E:T2JP:MBCW:YLKW:DPXM:J5CJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v5.10 ] [Library Version 102]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:37 SystemTime:2021-03-08 09:26:05.91677521 -0600 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.10.19-200.fc33.x86_64 OperatingSystem:Fedora 33 (KDE Plasma) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33530384384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:oryx-fedora Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:[WARNING: Support for cgroup v2 is experimental] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. 
Version:v0.5.1-docker]] Warnings:}} I0308 09:26:05.971097 82739 docker.go:215] overlay module found I0308 09:26:05.993637 82739 out.go:129] * Using the docker driver based on user configuration I0308 09:26:05.993658 82739 start.go:276] selected driver: docker I0308 09:26:05.993664 82739 start.go:718] validating driver "docker" against I0308 09:26:05.993680 82739 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:} I0308 09:26:05.993768 82739 cli_runner.go:115] Run: docker system info --format "{{json .}}" I0308 09:26:06.084552 82739 info.go:253] docker info: {ID:ND34:PDT5:B6C4:ZGSH:JMA2:3XZ4:FO7E:T2JP:MBCW:YLKW:DPXM:J5CJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v5.10 ] [Library Version 102]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:37 SystemTime:2021-03-08 09:26:06.032734704 -0600 CST LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:5.10.19-200.fc33.x86_64 OperatingSystem:Fedora 33 (KDE Plasma) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33530384384 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:oryx-fedora Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:[WARNING: Support for cgroup v2 is experimental] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:}} W0308 09:26:06.084841 82739 out.go:191] ! 
docker is currently using the btrfs storage driver, consider switching to overlay2 for better performance I0308 09:26:06.084891 82739 start_flags.go:251] no existing cluster config was found, will generate one from the flags I0308 09:26:06.086716 82739 start_flags.go:269] Using suggested 7900MB memory alloc based on sys=31977MB, container=31977MB I0308 09:26:06.086922 82739 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true] I0308 09:26:06.086963 82739 cni.go:74] Creating CNI manager for "" I0308 09:26:06.086977 82739 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0308 09:26:06.086985 82739 start_flags.go:395] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} I0308 09:26:06.088309 82739 out.go:129] * Starting control plane node minikube in cluster minikube I0308 09:26:06.138228 82739 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e in local docker daemon, skipping pull I0308 09:26:06.138270 82739 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e exists in daemon, skipping pull I0308 09:26:06.138298 82739 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker I0308 09:26:06.138588 82739 cache.go:93] acquiring lock: {Name:mk3e3cb89839d816d4ed3f3ad285f88172cacde7 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138607 82739 cache.go:93] acquiring lock: {Name:mk22ac7850c019aae954e4b3c1757bb11775f699 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138625 82739 cache.go:93] acquiring lock: {Name:mk699bd9b56c50ade7b78389b3195e550b388ea5 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138669 82739 cache.go:93] acquiring lock: {Name:mke1ef5d5464e6a01af7c1a1e4009cca95c07385 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138719 82739 cache.go:93] acquiring lock: {Name:mk7769c2b96f67751ed0c01024d022bbfa1c88d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138734 82739 cache.go:93] acquiring lock: {Name:mkc2d2e0e913010cc505295b6921550a14bb266d Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138753 
82739 cache.go:93] acquiring lock: {Name:mkbf330765f362acab3fa882c78d5075e9314b40 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138765 82739 cache.go:93] acquiring lock: {Name:mk9921f3da45f08f335670b3bc57da14409792ee Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138788 82739 cache.go:93] acquiring lock: {Name:mk4e94b10ace2d5ba7640e9532fafd5a14b2c0d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138805 82739 cache.go:93] acquiring lock: {Name:mk7f1d7be485774304598c8cf50c3345157c6ba0 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.138869 82739 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ... I0308 09:26:06.138903 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/config.json: {Name:mkde552573cc4fe111badcbccdf8dc701af1839b Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0308 09:26:06.138961 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2 exists I0308 09:26:06.138961 82739 cache.go:101] /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists I0308 09:26:06.138986 82739 cache.go:101] /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists I0308 09:26:06.138988 82739 cache.go:82] cache image "k8s.gcr.io/kube-apiserver:v1.20.2" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2" took 278.982µs I0308 09:26:06.138993 82739 cache.go:82] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 410.908µs I0308 09:26:06.139004 82739 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.20.2 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2 succeeded I0308 09:26:06.139012 82739 cache.go:66] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded I0308 09:26:06.139011 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists I0308 09:26:06.139010 82739 cache.go:82] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 276.407µs I0308 09:26:06.139026 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2 exists I0308 09:26:06.139035 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists I0308 09:26:06.139038 82739 cache.go:66] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded I0308 09:26:06.139042 82739 cache.go:82] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 290.403µs I0308 09:26:06.139057 82739 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded I0308 09:26:06.139060 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists I0308 09:26:06.139051 82739 cache.go:82] cache image "k8s.gcr.io/kube-scheduler:v1.20.2" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2" took 436.482µs I0308 09:26:06.139071 82739 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.20.2 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2 succeeded I0308 09:26:06.139066 82739 
cache.go:82] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 464.592µs I0308 09:26:06.139070 82739 cache.go:185] Successfully downloaded all kic artifacts I0308 09:26:06.139078 82739 cache.go:66] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded I0308 09:26:06.139031 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2 exists I0308 09:26:06.139084 82739 cache.go:101] /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 exists I0308 09:26:06.139080 82739 cache.go:82] cache image "k8s.gcr.io/pause:3.2" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 389.308µs I0308 09:26:06.139095 82739 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded I0308 09:26:06.139097 82739 cache.go:101] /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2 exists I0308 09:26:06.139094 82739 cache.go:82] cache image "k8s.gcr.io/kube-controller-manager:v1.20.2" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2" took 456.216µs I0308 09:26:06.139096 82739 start.go:313] acquiring machines lock for minikube: {Name:mk74d8ec9998731abbf55af957eeb22e3767741c Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0308 09:26:06.139105 82739 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.2 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2 succeeded I0308 09:26:06.139103 82739 cache.go:82] cache image "gcr.io/k8s-minikube/storage-provisioner:v4" -> "/home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4" took 408.562µs I0308 09:26:06.139117 82739 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v4 -> /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 succeeded I0308 09:26:06.139116 82739 cache.go:82] cache image "k8s.gcr.io/kube-proxy:v1.20.2" -> "/home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2" took 417.674µs I0308 09:26:06.139135 82739 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.20.2 -> /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2 succeeded I0308 09:26:06.139142 82739 cache.go:73] Successfully saved all images to host disk. 
I0308 09:26:06.139161 82739 start.go:317] acquired machines lock for "minikube" in 50.405µs I0308 09:26:06.139196 82739 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0308 09:26:06.139266 82739 start.go:126] createHost starting for "" (driver="docker") I0308 09:26:06.157149 82739 out.go:150] * Creating docker container (CPUs=2, Memory=7900MB) ... I0308 09:26:06.157439 82739 start.go:160] libmachine.API.Create for "minikube" (driver="docker") I0308 09:26:06.157464 82739 client.go:168] LocalClient.Create starting I0308 09:26:06.157588 82739 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/ca.pem I0308 09:26:06.157624 82739 main.go:121] libmachine: Decoding PEM data... I0308 09:26:06.157646 82739 main.go:121] libmachine: Parsing certificate... I0308 09:26:06.157780 82739 main.go:121] libmachine: Reading certificate data from /home/filbot/.minikube/certs/cert.pem I0308 09:26:06.157803 82739 main.go:121] libmachine: Decoding PEM data... I0308 09:26:06.157816 82739 main.go:121] libmachine: Parsing certificate... 
I0308 09:26:06.158244 82739 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0308 09:26:06.201088 82739 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0308 09:26:06.201168 82739 network_create.go:240] running [docker network inspect minikube] to gather additional debugging logs... I0308 09:26:06.201241 82739 cli_runner.go:115] Run: docker network inspect minikube W0308 09:26:06.241459 82739 cli_runner.go:162] docker network inspect minikube returned with exit code 1 I0308 09:26:06.241518 82739 network_create.go:243] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I0308 09:26:06.241530 82739 network_create.go:245] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I0308 09:26:06.241603 82739 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0308 09:26:06.281318 82739 network.go:193] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0308 09:26:06.281352 82739 network_create.go:91] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ... 
I0308 09:26:06.281393 82739 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube I0308 09:26:06.865374 82739 kic.go:101] calculated static IP "192.168.49.2" for the "minikube" container I0308 09:26:06.865784 82739 cli_runner.go:115] Run: docker ps -a --format I0308 09:26:06.924228 82739 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I0308 09:26:06.980256 82739 oci.go:102] Successfully created a docker volume minikube I0308 09:26:06.980313 82739 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib I0308 09:26:08.046841 82739 cli_runner.go:168] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e -d /var/lib: (1.066477315s) I0308 09:26:08.046864 82739 oci.go:106] Successfully prepared a docker volume minikube I0308 09:26:08.046925 82739 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker W0308 09:26:08.047026 82739 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted. W0308 09:26:08.047040 82739 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. W0308 09:26:08.047051 82739 oci.go:233] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted. 
I0308 09:26:08.047205 82739 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'" I0308 09:26:08.138903 82739 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e I0308 09:26:09.599371 82739 cli_runner.go:168] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: (1.460413259s) I0308 09:26:09.599594 82739 cli_runner.go:115] Run: docker container inspect minikube --format= I0308 09:26:09.646877 82739 cli_runner.go:115] Run: docker container inspect minikube --format= I0308 09:26:09.690873 82739 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I0308 09:26:09.784241 82739 oci.go:278] the created container "minikube" has a running status. I0308 09:26:09.784264 82739 kic.go:199] Creating ssh key for kic: /home/filbot/.minikube/machines/minikube/id_rsa... I0308 09:26:09.914659 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys I0308 09:26:09.914700 82739 kic_runner.go:188] docker (temp): /home/filbot/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0308 09:26:10.029834 82739 cli_runner.go:115] Run: docker container inspect minikube --format= I0308 09:26:10.077300 82739 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0308 09:26:10.077341 82739 kic_runner.go:115] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I0308 09:26:10.161764 82739 cli_runner.go:115] Run: docker container inspect minikube --format= I0308 09:26:10.209479 82739 machine.go:88] provisioning docker machine ... 
I0308 09:26:10.209510 82739 ubuntu.go:169] provisioning hostname "minikube" I0308 09:26:10.209558 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0308 09:26:10.256816 82739 main.go:121] libmachine: Using SSH client type: native I0308 09:26:10.257072 82739 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x7fb7a0] 0x7fb760 [] 0s} 127.0.0.1 49157 } I0308 09:26:10.257092 82739 main.go:121] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0308 09:26:10.401498 82739 main.go:121] libmachine: SSH cmd err, output: : minikube I0308 09:26:10.401637 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0308 09:26:10.458826 82739 main.go:121] libmachine: Using SSH client type: native I0308 09:26:10.459027 82739 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x7fb7a0] 0x7fb760 [] 0s} 127.0.0.1 49157 } I0308 09:26:10.459050 82739 main.go:121] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0308 09:26:10.584952 82739 main.go:121] libmachine: SSH cmd err, output: : I0308 09:26:10.585001 82739 ubuntu.go:175] set auth options {CertDir:/home/filbot/.minikube CaCertPath:/home/filbot/.minikube/certs/ca.pem CaPrivateKeyPath:/home/filbot/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/filbot/.minikube/machines/server.pem ServerKeyPath:/home/filbot/.minikube/machines/server-key.pem ClientKeyPath:/home/filbot/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/filbot/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/filbot/.minikube} I0308 09:26:10.585037 82739 ubuntu.go:177] setting up certificates I0308 09:26:10.585053 82739 provision.go:83] configureAuth start I0308 09:26:10.585183 82739 cli_runner.go:115] Run: docker container inspect -f "" minikube I0308 09:26:10.640265 82739 provision.go:137] copyHostCerts I0308 09:26:10.640294 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/key.pem -> /home/filbot/.minikube/key.pem I0308 09:26:10.640317 82739 exec_runner.go:145] found /home/filbot/.minikube/key.pem, removing ... I0308 09:26:10.640322 82739 exec_runner.go:190] rm: /home/filbot/.minikube/key.pem I0308 09:26:10.640505 82739 exec_runner.go:152] cp: /home/filbot/.minikube/certs/key.pem --> /home/filbot/.minikube/key.pem (1679 bytes) I0308 09:26:10.640616 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /home/filbot/.minikube/ca.pem I0308 09:26:10.640637 82739 exec_runner.go:145] found /home/filbot/.minikube/ca.pem, removing ... I0308 09:26:10.640642 82739 exec_runner.go:190] rm: /home/filbot/.minikube/ca.pem I0308 09:26:10.640705 82739 exec_runner.go:152] cp: /home/filbot/.minikube/certs/ca.pem --> /home/filbot/.minikube/ca.pem (1078 bytes) I0308 09:26:10.640773 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/cert.pem -> /home/filbot/.minikube/cert.pem I0308 09:26:10.640792 82739 exec_runner.go:145] found /home/filbot/.minikube/cert.pem, removing ... 
I0308 09:26:10.640796 82739 exec_runner.go:190] rm: /home/filbot/.minikube/cert.pem I0308 09:26:10.640834 82739 exec_runner.go:152] cp: /home/filbot/.minikube/certs/cert.pem --> /home/filbot/.minikube/cert.pem (1123 bytes) I0308 09:26:10.640901 82739 provision.go:111] generating server cert: /home/filbot/.minikube/machines/server.pem ca-key=/home/filbot/.minikube/certs/ca.pem private-key=/home/filbot/.minikube/certs/ca-key.pem org=filbot.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube] I0308 09:26:10.799979 82739 provision.go:165] copyRemoteCerts I0308 09:26:10.800164 82739 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0308 09:26:10.800219 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0308 09:26:10.845743 82739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker} I0308 09:26:10.932467 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/certs/ca.pem -> /etc/docker/ca.pem I0308 09:26:10.932522 82739 ssh_runner.go:316] scp /home/filbot/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I0308 09:26:10.957456 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server.pem -> /etc/docker/server.pem I0308 09:26:10.957505 82739 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes) I0308 09:26:10.980419 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I0308 09:26:10.980475 82739 ssh_runner.go:316] scp /home/filbot/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0308 09:26:11.003195 82739 provision.go:86] duration metric: configureAuth took 418.125569ms I0308 09:26:11.003219 82739 ubuntu.go:193] setting minikube options for container-runtime I0308 09:26:11.003442 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I0308 09:26:11.048752 82739 main.go:121] libmachine: Using SSH client type: native I0308 09:26:11.048917 82739 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x7fb7a0] 0x7fb760 [] 0s} 127.0.0.1 49157 } I0308 09:26:11.048935 82739 main.go:121] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0308 09:26:11.205818 82739 main.go:121] libmachine: SSH cmd err, output: : btrfs I0308 09:26:11.205858 82739 ubuntu.go:71] root file system type: btrfs I0308 09:26:11.206367 82739 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... 
I0308 09:26:11.206485 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:11.268274 82739 main.go:121] libmachine: Using SSH client type: native
I0308 09:26:11.268473 82739 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x7fb7a0] 0x7fb760 [] 0s} 127.0.0.1 49157 }
I0308 09:26:11.268585 82739 main.go:121] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0308 09:26:11.412769 82739 main.go:121] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0308 09:26:11.412869 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:11.455825 82739 main.go:121] libmachine: Using SSH client type: native
I0308 09:26:11.456031 82739 main.go:121] libmachine: &{{{ 0 [] [] []} docker [0x7fb7a0] 0x7fb760 [] 0s} 127.0.0.1 49157 }
I0308 09:26:11.456057 82739 main.go:121] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0308 09:26:13.044315 82739 main.go:121] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-01-29 14:31:32.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2021-03-08 15:26:11.410039354 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
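# The update idiom above, verbatim from the log: only swap in the rendered unit
# and restart Docker when it actually differs from the installed one (this runs
# inside the node container, not on the Fedora host).
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
}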
Executing: /lib/systemd/systemd-sysv-install enable docker
I0308 09:26:13.044361 82739 machine.go:91] provisioned docker machine in 2.834862035s
I0308 09:26:13.044376 82739 client.go:171] LocalClient.Create took 6.886905677s
I0308 09:26:13.044400 82739 start.go:168] duration metric: libmachine.API.Create for "minikube" took 6.886959537s
I0308 09:26:13.044417 82739 start.go:267] post-start starting for "minikube" (driver="docker")
I0308 09:26:13.044427 82739 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0308 09:26:13.044499 82739 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0308 09:26:13.044690 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:13.094574 82739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0308 09:26:13.191762 82739 ssh_runner.go:149] Run: cat /etc/os-release
I0308 09:26:13.198859 82739 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0308 09:26:13.198913 82739 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0308 09:26:13.198942 82739 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0308 09:26:13.198957 82739 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0308 09:26:13.198975 82739 filesync.go:118] Scanning /home/filbot/.minikube/addons for local assets ...
I0308 09:26:13.199073 82739 filesync.go:118] Scanning /home/filbot/.minikube/files for local assets ...
I0308 09:26:13.199168 82739 start.go:270] post-start completed in 154.737559ms
I0308 09:26:13.199938 82739 cli_runner.go:115] Run: docker container inspect -f "" minikube
I0308 09:26:13.263504 82739 profile.go:148] Saving config to /home/filbot/.minikube/profiles/minikube/config.json ...
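# The os-release probe above can be repeated by hand to confirm the base image:
# the node runs Ubuntu 20.04 (the kicbase image) regardless of the Fedora host.
minikube ssh -- cat /etc/os-release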
I0308 09:26:13.263934 82739 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0308 09:26:13.263975 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:13.306251 82739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0308 09:26:13.395199 82739 start.go:129] duration metric: createHost completed in 7.255915912s
I0308 09:26:13.395237 82739 start.go:80] releasing machines lock for "minikube", held for 7.256063683s
I0308 09:26:13.395398 82739 cli_runner.go:115] Run: docker container inspect -f "" minikube
I0308 09:26:13.453648 82739 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0308 09:26:13.453649 82739 ssh_runner.go:149] Run: systemctl --version
I0308 09:26:13.453716 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:13.453725 82739 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 09:26:13.497946 82739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0308 09:26:13.498082 82739 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49157 SSHKeyPath:/home/filbot/.minikube/machines/minikube/id_rsa Username:docker}
I0308 09:26:13.833441 82739 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0308 09:26:13.847385 82739 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0308 09:26:13.860227 82739 cruntime.go:206] skipping containerd shutdown because we are bound to it
I0308 09:26:13.860281 82739 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0308 09:26:13.876885 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0308 09:26:13.899021 82739 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0308 09:26:13.914835 82739 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0308 09:26:14.025904 82739 ssh_runner.go:149] Run: sudo systemctl start docker
I0308 09:26:14.044447 82739 ssh_runner.go:149] Run: docker version --format
I0308 09:26:14.250330 82739 out.go:150] * Preparing Kubernetes v1.20.2 on Docker 20.10.3 ...
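# The crictl.yaml written above points crictl at the dockershim socket. A quick
# check of what actually landed on the node (a sketch, default profile assumed):
minikube ssh -- cat /etc/crictl.yaml
# expected contents, per the log:
#   runtime-endpoint: unix:///var/run/dockershim.sock
#   image-endpoint: unix:///var/run/dockershim.sock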
I0308 09:26:14.250461 82739 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0308 09:26:14.325968 82739 ssh_runner.go:149] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0308 09:26:14.329343 82739 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0308 09:26:14.338011 82739 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0308 09:26:14.338069 82739 ssh_runner.go:149] Run: docker images --format :
I0308 09:26:14.378384 82739 docker.go:423] Got preloaded images:
I0308 09:26:14.378402 82739 docker.go:429] k8s.gcr.io/kube-proxy:v1.20.2 wasn't preloaded
I0308 09:26:14.378408 82739 cache_images.go:76] LoadImages start: [k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/coredns:1.7.0 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v4 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
I0308 09:26:14.380160 82739 image.go:168] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v4
I0308 09:26:14.380160 82739 image.go:168] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.2
I0308 09:26:14.380186 82739 image.go:168] retrieving image: k8s.gcr.io/coredns:1.7.0
I0308 09:26:14.380231 82739 image.go:168] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
I0308 09:26:14.380260 82739 image.go:168] retrieving image: k8s.gcr.io/pause:3.2
I0308 09:26:14.380276 82739 image.go:168] retrieving image: k8s.gcr.io/etcd:3.4.13-0
I0308 09:26:14.380347 82739 image.go:168] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.2
I0308 09:26:14.380358 82739 image.go:168] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
I0308 09:26:14.380369 82739 image.go:168] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.2
I0308 09:26:14.380469 82739 image.go:168] retrieving image: k8s.gcr.io/kube-proxy:v1.20.2
I0308 09:26:14.380667 82739 image.go:176] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v4: Error response from daemon: reference does not exist
I0308 09:26:14.380745 82739 image.go:176] daemon lookup for docker.io/kubernetesui/dashboard:v2.1.0: Error response from daemon: reference does not exist
I0308 09:26:14.380769 82739 image.go:176] daemon lookup for k8s.gcr.io/coredns:1.7.0: Error response from daemon: reference does not exist
I0308 09:26:14.380797 82739 image.go:176] daemon lookup for k8s.gcr.io/pause:3.2: Error response from daemon: reference does not exist
I0308 09:26:14.380858 82739 image.go:176] daemon lookup for k8s.gcr.io/etcd:3.4.13-0: Error response from daemon: reference does not exist
I0308 09:26:14.380934 82739 image.go:176] daemon lookup for docker.io/kubernetesui/metrics-scraper:v1.0.4: Error response from daemon: reference does not exist
I0308 09:26:14.380938 82739 image.go:176] daemon lookup for k8s.gcr.io/kube-apiserver:v1.20.2: Error response from daemon: reference does not exist
I0308 09:26:14.380964 82739 image.go:176] daemon lookup for k8s.gcr.io/kube-scheduler:v1.20.2: Error response from daemon: reference does not exist
I0308 09:26:14.381007 82739 image.go:176] daemon lookup for k8s.gcr.io/kube-proxy:v1.20.2: Error response from daemon: reference does not exist
I0308 09:26:14.381081 82739 image.go:176] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.20.2: Error response from daemon: reference does not exist
I0308 09:26:14.702581 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/kube-controller-manager:v1.20.2
I0308 09:26:14.726709 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/kube-apiserver:v1.20.2
I0308 09:26:14.727191 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/pause:3.2
I0308 09:26:14.748195 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/etcd:3.4.13-0
I0308 09:26:14.748322 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/coredns:1.7.0
I0308 09:26:14.769951 82739 cache_images.go:104] "k8s.gcr.io/kube-controller-manager:v1.20.2" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.2" does not exist at hash "a27166429d98e07152ca71420931142127609f715925b1607acee6ea6f0e3696" in container runtime
I0308 09:26:14.769977 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2
I0308 09:26:14.770000 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2 -> /var/lib/minikube/images/kube-controller-manager_v1.20.2
I0308 09:26:14.770065 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.2
I0308 09:26:14.778013 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/kube-scheduler:v1.20.2
I0308 09:26:14.778013 82739 ssh_runner.go:149] Run: docker image inspect --format k8s.gcr.io/kube-proxy:v1.20.2
I0308 09:26:14.781816 82739 cache_images.go:104] "k8s.gcr.io/kube-apiserver:v1.20.2" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.2" does not exist at hash "a8c2fdb8bf76e3b014d14ce69a6a2d11044cb13b4ec3185015c582b8ad69a820" in container runtime
I0308 09:26:14.781845 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2
I0308 09:26:14.781846 82739 cache_images.go:104] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0308 09:26:14.781860 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0308 09:26:14.781871 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2 -> /var/lib/minikube/images/kube-apiserver_v1.20.2
I0308 09:26:14.781876 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2 -> /var/lib/minikube/images/pause_3.2
I0308 09:26:14.781946 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
I0308 09:26:14.781950 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.2
I0308 09:26:14.785624 82739 ssh_runner.go:149] Run: docker image inspect --format gcr.io/k8s-minikube/storage-provisioner:v4
I0308 09:26:14.797066 82739 cache_images.go:104] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
I0308 09:26:14.797089 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
I0308 09:26:14.797109 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 -> /var/lib/minikube/images/coredns_1.7.0
I0308 09:26:14.797117 82739 cache_images.go:104] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
I0308 09:26:14.797142 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
I0308 09:26:14.797158 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 -> /var/lib/minikube/images/etcd_3.4.13-0
I0308 09:26:14.797196 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0
I0308 09:26:14.797209 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.2: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.2: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.2': No such file or directory
I0308 09:26:14.797222 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0
I0308 09:26:14.797232 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2 --> /var/lib/minikube/images/kube-controller-manager_v1.20.2 (29365248 bytes)
I0308 09:26:14.827211 82739 cache_images.go:104] "k8s.gcr.io/kube-scheduler:v1.20.2" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.2" does not exist at hash "ed2c44fbdd78b69a0981ab3c57ebce2798e4a4b2b5dda2fabc720f9957d4869f" in container runtime
I0308 09:26:14.827242 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2
I0308 09:26:14.827271 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2 -> /var/lib/minikube/images/kube-scheduler_v1.20.2
I0308 09:26:14.827372 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.2
I0308 09:26:14.832716 82739 cache_images.go:104] "k8s.gcr.io/kube-proxy:v1.20.2" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.2" does not exist at hash "43154ddb57a83de3068fe603e9c7393e7d2b77cb18d9e0daf869f74b1b4079c0" in container runtime
I0308 09:26:14.832750 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2
I0308 09:26:14.832780 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2 -> /var/lib/minikube/images/kube-proxy_v1.20.2
I0308 09:26:14.832791 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
I0308 09:26:14.832811 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
I0308 09:26:14.832832 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.2: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.2: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.2': No such file or directory
I0308 09:26:14.832861 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2 --> /var/lib/minikube/images/kube-apiserver_v1.20.2 (30414336 bytes)
I0308 09:26:14.832871 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.2
I0308 09:26:14.839678 82739 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v4" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v4" does not exist at hash "85069258b98ac4e9f9fbd51dfba3b4212d8cd1d79df7d2ecff44b1319ed641cb" in container runtime
I0308 09:26:14.839706 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4
I0308 09:26:14.839719 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
I0308 09:26:14.839730 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 -> /var/lib/minikube/images/storage-provisioner_v4
I0308 09:26:14.839744 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (86745600 bytes)
I0308 09:26:14.839802 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
I0308 09:26:14.839812 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4
I0308 09:26:14.839829 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (13984256 bytes)
I0308 09:26:14.839870 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.2: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.2: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.2': No such file or directory
I0308 09:26:14.839893 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2 --> /var/lib/minikube/images/kube-scheduler_v1.20.2 (14016000 bytes)
I0308 09:26:14.840667 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.2: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.2: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.2': No such file or directory
I0308 09:26:14.840692 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2 --> /var/lib/minikube/images/kube-proxy_v1.20.2 (49544704 bytes)
I0308 09:26:14.863377 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v4: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v4': No such file or directory
I0308 09:26:14.863482 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 --> /var/lib/minikube/images/storage-provisioner_v4 (8882688 bytes)
I0308 09:26:14.865186 82739 docker.go:167] Loading image: /var/lib/minikube/images/pause_3.2
I0308 09:26:14.865238 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/pause_3.2
I0308 09:26:15.118261 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
I0308 09:26:15.118282 82739 docker.go:167] Loading image: /var/lib/minikube/images/storage-provisioner_v4
I0308 09:26:15.118315 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/storage-provisioner_v4
I0308 09:26:15.210060 82739 ssh_runner.go:149] Run: docker image inspect --format docker.io/kubernetesui/metrics-scraper:v1.0.4
I0308 09:26:15.226783 82739 ssh_runner.go:149] Run: docker image inspect --format docker.io/kubernetesui/dashboard:v2.1.0
I0308 09:26:15.594068 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v4 from cache
I0308 09:26:15.594090 82739 docker.go:167] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.2
I0308 09:26:15.594175 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-scheduler_v1.20.2
I0308 09:26:15.594216 82739 cache_images.go:104] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
I0308 09:26:15.594280 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
I0308 09:26:15.594281 82739 cache_images.go:104] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
I0308 09:26:15.594315 82739 cache_images.go:237] Loading image from cache: /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
I0308 09:26:15.594319 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 -> /var/lib/minikube/images/metrics-scraper_v1.0.4
I0308 09:26:15.594354 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 -> /var/lib/minikube/images/dashboard_v2.1.0
I0308 09:26:15.594399 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
I0308 09:26:15.594414 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
I0308 09:26:15.598089 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
I0308 09:26:15.598117 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (16022528 bytes)
I0308 09:26:16.459654 82739 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
I0308 09:26:16.459699 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.2 from cache
I0308 09:26:16.459708 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (67993600 bytes)
I0308 09:26:16.459719 82739 docker.go:167] Loading image: /var/lib/minikube/images/coredns_1.7.0
I0308 09:26:16.459748 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/coredns_1.7.0
I0308 09:26:17.100992 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache
I0308 09:26:17.101016 82739 docker.go:167] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.2
I0308 09:26:17.101048 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.2
I0308 09:26:18.163248 82739 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-controller-manager_v1.20.2: (1.062169086s)
I0308 09:26:18.163293 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.2 from cache
I0308 09:26:18.163316 82739 docker.go:167] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.2
I0308 09:26:18.163372 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-apiserver_v1.20.2
I0308 09:26:19.255541 82739 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-apiserver_v1.20.2: (1.092150337s)
I0308 09:26:19.255560 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.2 from cache
I0308 09:26:19.255571 82739 docker.go:167] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.2
I0308 09:26:19.255598 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.2
I0308 09:26:21.185916 82739 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/kube-proxy_v1.20.2: (1.930300106s)
I0308 09:26:21.185936 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.2 from cache
I0308 09:26:21.185948 82739 docker.go:167] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
I0308 09:26:21.185975 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/etcd_3.4.13-0
I0308 09:26:24.133285 82739 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/etcd_3.4.13-0: (2.94729547s)
I0308 09:26:24.133306 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
I0308 09:26:24.133316 82739 docker.go:167] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
I0308 09:26:24.133344 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/metrics-scraper_v1.0.4
I0308 09:26:24.791895 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
I0308 09:26:24.791940 82739 docker.go:167] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
I0308 09:26:24.792001 82739 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/dashboard_v2.1.0
I0308 09:26:27.314171 82739 ssh_runner.go:189] Completed: docker load -i /var/lib/minikube/images/dashboard_v2.1.0: (2.52215163s)
I0308 09:26:27.314193 82739 cache_images.go:259] Transferred and loaded /home/filbot/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 from cache
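# After the cache transfer above, every image should be visible inside the node.
# A hedged way to verify (the grep pattern is illustrative, not from the log):
minikube ssh -- "docker images | grep -E 'k8s.gcr.io|kubernetesui|storage-provisioner'"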
I0308 09:26:27.314204 82739 cache_images.go:111] Successfully loaded all cached images
I0308 09:26:27.314211 82739 cache_images.go:80] LoadImages completed in 12.935790606s
I0308 09:26:27.314263 82739 ssh_runner.go:149] Run: docker info --format
I0308 09:26:27.608835 82739 cni.go:74] Creating CNI manager for ""
I0308 09:26:27.608853 82739 cni.go:140] CNI unnecessary in this configuration, recommending no CNI
I0308 09:26:27.608863 82739 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0308 09:26:27.608878 82739 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0308 09:26:27.609054 82739 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
I0308 09:26:27.609191 82739 kubeadm.go:919] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0308 09:26:27.609349 82739 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0308 09:26:27.619693 82739 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.20.2: Process exited with status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.20.2': No such file or directory

Initiating transfer...
I0308 09:26:27.619738 82739 ssh_runner.go:149] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.20.2
I0308 09:26:27.628901 82739 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubectl.sha256
I0308 09:26:27.628917 82739 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubelet.sha256
I0308 09:26:27.628919 82739 binary.go:56] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.20.2/bin/linux/amd64/kubeadm.sha256
I0308 09:26:27.628921 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/linux/v1.20.2/kubectl -> /var/lib/minikube/binaries/v1.20.2/kubectl
I0308 09:26:27.628942 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/linux/v1.20.2/kubeadm -> /var/lib/minikube/binaries/v1.20.2/kubeadm
I0308 09:26:27.628951 82739 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0308 09:26:27.628997 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubectl
I0308 09:26:27.629007 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubeadm
I0308 09:26:27.633178 82739 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubectl: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.20.2/kubectl': No such file or directory
I0308 09:26:27.633208 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/linux/v1.20.2/kubectl --> /var/lib/minikube/binaries/v1.20.2/kubectl (40230912 bytes)
I0308 09:26:27.642371 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/cache/linux/v1.20.2/kubelet -> /var/lib/minikube/binaries/v1.20.2/kubelet
I0308 09:26:27.642443 82739 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubelet
I0308 09:26:27.642445 82739 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubeadm: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.20.2/kubeadm': No such file or directory
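# Both files rendered above can be inspected on the node to rule out a bad
# config (paths taken from the log; a sketch, not part of the original session).
# Note the kubelet config above sets cgroupDriver: systemd.
minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
minikube ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf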
I0308 09:26:27.633208 continues; scp of kubeadm follows the failed existence check above.
I0308 09:26:27.642474 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/linux/v1.20.2/kubeadm --> /var/lib/minikube/binaries/v1.20.2/kubeadm (39219200 bytes)
I0308 09:26:27.661996 82739 ssh_runner.go:306] existence check for /var/lib/minikube/binaries/v1.20.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.20.2/kubelet: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/var/lib/minikube/binaries/v1.20.2/kubelet': No such file or directory
I0308 09:26:27.662029 82739 ssh_runner.go:316] scp /home/filbot/.minikube/cache/linux/v1.20.2/kubelet --> /var/lib/minikube/binaries/v1.20.2/kubelet (114015176 bytes)
I0308 09:26:28.071613 82739 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0308 09:26:28.078528 82739 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0308 09:26:28.091684 82739 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0308 09:26:28.106905 82739 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1839 bytes)
I0308 09:26:28.123507 82739 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0308 09:26:28.127418 82739 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0308 09:26:28.139410 82739 certs.go:52] Setting up /home/filbot/.minikube/profiles/minikube for IP: 192.168.49.2
I0308 09:26:28.139472 82739 certs.go:171] skipping minikubeCA CA generation: /home/filbot/.minikube/ca.key
I0308 09:26:28.139493 82739 certs.go:171] skipping proxyClientCA CA generation: /home/filbot/.minikube/proxy-client-ca.key
I0308 09:26:28.139556 82739 certs.go:279] generating minikube-user signed cert: /home/filbot/.minikube/profiles/minikube/client.key
I0308 09:26:28.139567 82739 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/client.crt with IP's: []
I0308 09:26:28.230386 82739 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/client.crt ...
I0308 09:26:28.230400 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.crt: {Name:mk38f2e53a26a660a8ca42427d273aa5beb3ccab Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.230836 82739 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/client.key ...
I0308 09:26:28.230844 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/client.key: {Name:mkdf6546b3ad86d5f6c8bcb5a998110a20e341d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.231192 82739 certs.go:279] generating minikube signed cert: /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0308 09:26:28.231200 82739 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0308 09:26:28.348209 82739 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0308 09:26:28.348225 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkb9a4037bd1b0390d311aa3c944d4ed10024f42 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.348532 82739 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I0308 09:26:28.348557 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk90feeb569bacbb36c9355394dc89da215946ee Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.348765 82739 certs.go:290] copying /home/filbot/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.crt
I0308 09:26:28.348962 82739 certs.go:294] copying /home/filbot/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/filbot/.minikube/profiles/minikube/apiserver.key
I0308 09:26:28.349260 82739 certs.go:279] generating aggregator signed cert: /home/filbot/.minikube/profiles/minikube/proxy-client.key
I0308 09:26:28.349282 82739 crypto.go:69] Generating cert /home/filbot/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0308 09:26:28.508980 82739 crypto.go:157] Writing cert to /home/filbot/.minikube/profiles/minikube/proxy-client.crt ...
I0308 09:26:28.509015 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.crt: {Name:mka6499079057c8e41a72c98ff24037a0c7ae319 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.509296 82739 crypto.go:165] Writing key to /home/filbot/.minikube/profiles/minikube/proxy-client.key ...
I0308 09:26:28.509307 82739 lock.go:36] WriteFile acquiring /home/filbot/.minikube/profiles/minikube/proxy-client.key: {Name:mk2a89d42af381d951e18297cc690ed8fd50bb29 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0308 09:26:28.509422 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0308 09:26:28.509439 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0308 09:26:28.509449 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0308 09:26:28.509457 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0308 09:26:28.509484 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0308 09:26:28.509493 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0308 09:26:28.509530 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0308 09:26:28.509537 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0308 09:26:28.509639 82739 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca-key.pem (1679 bytes)
I0308 09:26:28.509714 82739 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/ca.pem (1078 bytes)
I0308 09:26:28.509736 82739 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/cert.pem (1123 bytes)
I0308 09:26:28.509805 82739 certs.go:354] found cert: /home/filbot/.minikube/certs/home/filbot/.minikube/certs/key.pem (1679 bytes)
I0308 09:26:28.509832 82739 vm_assets.go:96] NewFileAsset: /home/filbot/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0308 09:26:28.510726 82739 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0308 09:26:28.528255 82739 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0308 09:26:28.550678 82739 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0308 09:26:28.574952 82739 ssh_runner.go:316] scp /home/filbot/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0308 09:26:28.607164 82739 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0308 09:26:28.639066 82739 ssh_runner.go:316] scp /home/filbot/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0308 09:26:28.671863 82739 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0308 09:26:28.703685 82739 ssh_runner.go:316] scp /home/filbot/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0308 09:26:28.735743 82739 ssh_runner.go:316] scp /home/filbot/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0308 09:26:28.768078 82739 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0308 09:26:28.791398 82739 ssh_runner.go:149] Run: openssl version
I0308 09:26:28.803114 82739 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0308 09:26:28.818278 82739 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0308 09:26:28.824217 82739 certs.go:395] hashing: -rw-r--r--. 1 root root 1111 Mar 6 19:50 /usr/share/ca-certificates/minikubeCA.pem
I0308 09:26:28.824278 82739 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0308 09:26:28.833849 82739 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0308 09:26:28.847169 82739 kubeadm.go:385] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0308 09:26:28.847318 82739 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
I0308 09:26:28.892496 82739 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0308 09:26:28.900453 82739 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0308 09:26:28.908167 82739 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
I0308 09:26:28.908238 82739 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0308 09:26:28.915972 82739 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0308 09:26:28.915999 82739 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0308 09:30:32.214672 82739 out.go:150] - Generating certificates and keys ...
I0308 09:30:32.226454 82739 out.go:150] - Booting up control plane ...
W0308 09:30:32.229818 82739 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

stderr:
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
I0308 09:30:32.229895 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0308 09:30:32.699276 82739 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0308 09:30:32.711952 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
I0308 09:30:32.772039 82739 kubeadm.go:219] ignoring SystemVerification for kubeadm because of docker driver
I0308 09:30:32.772093 82739 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0308 09:30:32.780909 82739 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0308 09:30:32.780944 82739 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0308 09:32:29.362116 82739 out.go:150] - Generating certificates and keys ...
I0308 09:32:29.374361 82739 out.go:150] - Booting up control plane ...
I0308 09:32:29.381288 82739 kubeadm.go:387] StartCluster complete in 6m0.534122937s I0308 09:32:29.381425 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format= I0308 09:32:29.426258 82739 logs.go:255] 0 containers: [] W0308 09:32:29.426274 82739 logs.go:257] No container was found matching "kube-apiserver" I0308 09:32:29.426335 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format= I0308 09:32:29.465503 82739 logs.go:255] 0 containers: [] W0308 09:32:29.465521 82739 logs.go:257] No container was found matching "etcd" I0308 09:32:29.465579 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format= I0308 09:32:29.506958 82739 logs.go:255] 0 containers: [] W0308 09:32:29.506976 82739 logs.go:257] No container was found matching "coredns" I0308 09:32:29.507039 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format= I0308 09:32:29.545693 82739 logs.go:255] 0 containers: [] W0308 09:32:29.545735 82739 logs.go:257] No container was found matching "kube-scheduler" I0308 09:32:29.545776 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format= I0308 09:32:29.584665 82739 logs.go:255] 0 containers: [] W0308 09:32:29.584682 82739 logs.go:257] No container was found matching "kube-proxy" I0308 09:32:29.584746 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format= I0308 09:32:29.624948 82739 logs.go:255] 0 containers: [] W0308 09:32:29.624965 82739 logs.go:257] No container was found matching "kubernetes-dashboard" I0308 09:32:29.625023 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format= I0308 09:32:29.664784 82739 logs.go:255] 0 containers: [] W0308 09:32:29.664804 82739 logs.go:257] No container was found matching "storage-provisioner" I0308 09:32:29.664848 82739 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format= I0308 09:32:29.706616 82739 logs.go:255] 0 containers: [] W0308 09:32:29.706634 82739 logs.go:257] No container was found matching "kube-controller-manager" I0308 09:32:29.706644 82739 logs.go:122] Gathering logs for Docker ... I0308 09:32:29.706654 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400" I0308 09:32:29.724582 82739 logs.go:122] Gathering logs for container status ... I0308 09:32:29.724611 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" I0308 09:32:31.824910 82739 ssh_runner.go:189] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.100280507s) I0308 09:32:31.825137 82739 logs.go:122] Gathering logs for kubelet ... I0308 09:32:31.825154 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400" I0308 09:32:31.897244 82739 logs.go:122] Gathering logs for dmesg ... I0308 09:32:31.897264 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" I0308 09:32:31.911794 82739 logs.go:122] Gathering logs for describe nodes ... 
I0308 09:32:31.911817 82739 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" W0308 09:32:31.999366 82739 logs.go:129] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: The connection to the server localhost:8443 was refused - did you specify the right host or port? output: ** stderr ** The connection to the server localhost:8443 was refused - did you specify the right host or port? ** /stderr ** W0308 09:32:31.999395 82739 out.go:312] Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". 
This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. 
Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0308 09:32:31.999512 82739 out.go:191] * W0308 09:32:31.999795 82739 out.go:191] X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0308 09:32:31.999879 82739 out.go:191] * W0308 09:32:31.999940 82739 out.go:191] * minikube is exiting due to an error. 
If the above message is not useful, open an issue: W0308 09:32:31.999991 82739 out.go:191] - https://github.com/kubernetes/minikube/issues/new/choose I0308 09:32:32.027142 82739 out.go:129] W0308 09:32:32.027426 82739 out.go:191] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.20.2 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk [certs] Using existing apiserver-kubelet-client certificate and key on disk [certs] Using existing front-proxy-ca certificate authority [certs] Using existing front-proxy-client certificate and key on disk [certs] Using existing etcd/ca certificate authority [certs] Using existing etcd/server certificate and key on disk [certs] Using existing etcd/peer certificate and key on disk [certs] Using existing etcd/healthcheck-client certificate and key on disk [certs] Using existing apiserver-etcd-client certificate and key on disk [certs] Using the existing "sa" key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Starting the kubelet [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [kubelet-check] Initial timeout of 40s passed. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. 
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused. Unfortunately, an error has occurred: timed out waiting for the condition This error is likely caused by: - The kubelet is not running - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled) If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands: - 'systemctl status kubelet' - 'journalctl -xeu kubelet' Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker: - 'docker ps -a | grep kube | grep -v pause' Once you have found the failing container, you can inspect its logs with: - 'docker logs CONTAINERID' stderr: [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.3. Latest validated version: 19.03 [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster To see the stack trace of this error execute with --v=5 or higher W0308 09:32:32.027656 82739 out.go:191] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start W0308 09:32:32.027752 82739 out.go:191] * Related issue: https://github.com/kubernetes/minikube/issues/4172 ➜ ~ minikube stop ✋ Stopping node "minikube" ... 🛑 Powering off "minikube" via SSH ... 🛑 1 nodes stopped. ➜ ~ minikube delete 🔥 Deleting "minikube" in docker ... 🔥 Deleting container "minikube" ... 🔥 Removing /home/filbot/.minikube/machines/minikube ... 💀 Removed all traces of the "minikube" cluster.
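The Suggestion line at the end points at a cgroup-driver mismatch between Docker and the kubelet; the retry it proposes would look roughly like this (a sketch of the suggested flag, not a confirmed fix for this setup):

minikube start --driver=docker --extra-config=kubelet.cgroup-driver=systemd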
FilBot3 commented 3 years ago

I believe this is also related to: https://github.com/kubernetes/minikube/issues/10737

afbjorklund commented 3 years ago

This doesn't look like Docker at all, more like podman:


➜  ~ docker version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.8
Built:        Fri Feb 19 10:56:17 2021
OS/Arch:      linux/amd64

Try https://docs.docker.com/engine/install/fedora/
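For reference, a minimal sketch of those Fedora instructions (repo URL and package names as in Docker's docs at the time; removing podman-docker first is an assumption about what is shadowing the CLI here):

# Remove the podman shim so /usr/bin/docker no longer points at podman
sudo dnf remove -y podman-docker

# Add Docker's Fedora repository and install the real engine
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# Start the daemon and confirm the server really is Docker
sudo systemctl enable --now docker
docker version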

afbjorklund commented 3 years ago

We should probably issue some kind of warning when someone tries to use "podman-docker".

https://github.com/containers/podman/blob/v3.0.1/docker

#!/bin/sh
[ -f /etc/containers/nodocker ] || \
echo "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg." >&2
exec /usr/bin/podman "$@"
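So every docker invocation is transparently handed to podman; on a host with podman 3.0.1 that looks roughly like this (output approximate):

$ docker --version
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
podman version 3.0.1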

It is not supported in minikube; if you want to use podman, you should use --driver=podman

(Or, alternatively, you need to install Docker [or Moby] if you want to use --driver=docker, as sketched below.)
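In other words, pick the driver that matches the runtime actually installed on the host:

# with podman installed
minikube start --driver=podman

# with real Docker CE installed
minikube start --driver=docker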

afbjorklund commented 3 years ago

That little "echo" statement actually managed to segfault a part of Kubernetes that didn't expect it.

i.e. without running touch /etc/containers/nodocker, and before https://github.com/containers/podman/commit/50fea69fbc017fd0da160b083780d933aa5462b5
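On affected podman versions, the workaround is simply to create the marker file so the shim stays quiet:

sudo touch /etc/containers/nodocker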

afbjorklund commented 3 years ago

Looking at the rest of the log, it actually did run Docker: docker version: linux-20.10.5

The problem is the warning ! docker is currently using the btrfs storage driver (see #7923).
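One workaround discussed in the linked issue is to move Docker off the btrfs graph driver, assuming the kernel supports overlay2 on the backing filesystem (a sketch, not verified on this setup; note that switching drivers hides images and containers created under the old one):

# see which storage driver the daemon picked
docker info --format '{{.Driver}}'

# force overlay2 via the daemon config, then restart
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker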

FilBot3 commented 3 years ago

Ah, I did have alias docker=podman set up in my rc files. I removed that, and I've updated the Gist with the correct Docker version information. You're saying the issue is the btrfs filesystem that Fedora 33 uses, though, correct?

afbjorklund commented 3 years ago

You're saying the issue is the btrfs filesystem that Fedora 33 uses, though, correct?

I think so; we don't have any tests for Fedora, which I guess is part of the problem.

See #3552 and the comments in other issues, such as https://github.com/kubernetes/minikube/issues/10237#issuecomment-765892917
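To confirm whether the kic node inherited the btrfs driver, a quick check from inside the minikube container (assuming it is running):

minikube ssh "docker info --format '{{.Driver}}'"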

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

FilBot3 commented 2 years ago

I'm on Fedora 34 now and will try with the latest Podman. Hopefully, with the Docker changes, this will add more urgency to getting Podman compatibility working.