kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

how to change log-driver for minikube's inner docker? #10751

Closed: vineetbhardwaj80 closed this issue 3 years ago

vineetbhardwaj80 commented 3 years ago

Steps to reproduce the issue:

  1. Start minikube using command - minikube start --docker-env log-driver=fluentd --alsologtostderr
  2. Login to minikube VM using command - minikube ssh
  3. Run command - docker info

Full output of failed command:

docker@minikube:~$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 27
  Running: 14
  Paused: 0
  Stopped: 13
 Images: 10
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.0-48-generic
 Operating System: Ubuntu 20.04.1 LTS (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.665GiB
 Name: minikube
 ID: 63AM:Z53V:4ZSD:3TMN:NVW7:P34T:XJUL:Q6RE:YNO4:ZCRC:AHWL:2CYK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=docker
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
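
Note that even with --docker-env log-driver=fluentd, the output above still reports Logging Driver: json-file. The --docker-env flag only exports environment variables into the minikube VM, and dockerd does not read its log driver from an environment variable, so the default is left untouched. Two alternatives, as a sketch (the --docker-opt flag matches the DockerOpt:[] field visible in the config dump further down, but both approaches are assumptions to verify against minikube start --help for your version):

    # Option A (assumed flag): pass a real dockerd flag so it lands on the
    # daemon's ExecStart line as --log-driver=fluentd
    minikube start --docker-opt log-driver=fluentd

    # Option B: configure the daemon inside the VM, then restart Docker
    minikube ssh
    echo '{ "log-driver": "fluentd" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker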

Full output of minikube start command used, if not already included:

vineet@vineet-laptop:~$ minikube start --docker-env log-driver=fluentd --alsologtostderr
I0308 19:13:19.630051   25344 out.go:185] Setting OutFile to fd 1 ...
I0308 19:13:19.630636   25344 out.go:237] isatty.IsTerminal(1) = true
I0308 19:13:19.630660   25344 out.go:198] Setting ErrFile to fd 2...
I0308 19:13:19.630695   25344 out.go:237] isatty.IsTerminal(2) = true
I0308 19:13:19.630895   25344 root.go:279] Updating PATH: /home/vineet/.minikube/bin
W0308 19:13:19.631125   25344 root.go:254] Error reading config file at /home/vineet/.minikube/config/config.json: open /home/vineet/.minikube/config/config.json: no such file or directory
I0308 19:13:19.631580   25344 out.go:192] Setting JSON to false
I0308 19:13:19.654474   25344 start.go:103] hostinfo: {"hostname":"vineet-laptop","uptime":2446,"bootTime":1615208553,"procs":350,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-48-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"f0f9963a-7ca1-4389-bb94-cb9964c37709"}
I0308 19:13:19.656385   25344 start.go:113] virtualization: kvm host
I0308 19:13:19.660221   25344 out.go:110] 😄  minikube v1.15.1 on Ubuntu 20.04
😄  minikube v1.15.1 on Ubuntu 20.04
I0308 19:13:19.661463   25344 driver.go:302] Setting default libvirt URI to qemu:///system
I0308 19:13:19.757466   25344 docker.go:117] docker version: linux-19.03.12
I0308 19:13:19.757656   25344 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0308 19:13:19.914151   25344 info.go:253] docker info: {ID:QV3K:7GW4:7RTJ:UL6X:SAHK:7GYB:RFXL:EPDK:T23U:PJXO:6GLN:RJEQ Containers:32 ContainersRunning:0 ContainersPaused:0 ContainersStopped:32 Images:38 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-03-08 19:13:19.829634328 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-48-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8229982208 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vineet-laptop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0308 19:13:19.914432   25344 docker.go:147] overlay module found
I0308 19:13:19.919149   25344 out.go:110] ✨  Using the docker driver based on existing profile
✨  Using the docker driver based on existing profile
I0308 19:13:19.919211   25344 start.go:272] selected driver: docker
I0308 19:13:19.919266   25344 start.go:686] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0308 19:13:19.919583   25344 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0308 19:13:19.919823   25344 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0308 19:13:20.083616   25344 info.go:253] docker info: {ID:QV3K:7GW4:7RTJ:UL6X:SAHK:7GYB:RFXL:EPDK:T23U:PJXO:6GLN:RJEQ Containers:32 ContainersRunning:0 ContainersPaused:0 ContainersStopped:32 Images:38 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-03-08 19:13:19.989474566 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-48-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8229982208 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vineet-laptop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0308 19:13:20.085889   25344 start_flags.go:364] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0308 19:13:20.088974   25344 out.go:110] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0308 19:13:20.191195   25344 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull
I0308 19:13:20.191292   25344 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull
I0308 19:13:20.191347   25344 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I0308 19:13:20.191439   25344 preload.go:105] Found local preload: /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I0308 19:13:20.191463   25344 cache.go:54] Caching tarball of preloaded images
I0308 19:13:20.191520   25344 preload.go:131] Found /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0308 19:13:20.191544   25344 cache.go:57] Finished verifying existence of preloaded tar for  v1.19.4 on docker
I0308 19:13:20.191932   25344 profile.go:150] Saving config to /home/vineet/.minikube/profiles/minikube/config.json ...
I0308 19:13:20.192401   25344 cache.go:184] Successfully downloaded all kic artifacts
I0308 19:13:20.192466   25344 start.go:314] acquiring machines lock for minikube: {Name:mke3cc14a5201cd692abcd02aa17909c508fd99f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0308 19:13:20.192949   25344 start.go:318] acquired machines lock for "minikube" in 369.571µs
I0308 19:13:20.193040   25344 start.go:94] Skipping create...Using existing machine configuration
I0308 19:13:20.193070   25344 fix.go:54] fixHost starting: 
I0308 19:13:20.194052   25344 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0308 19:13:20.271142   25344 fix.go:107] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0308 19:13:20.271288   25344 fix.go:133] unexpected machine state, will restart: <nil>
I0308 19:13:20.274218   25344 out.go:110] 🔄  Restarting existing docker container for "minikube" ...
🔄  Restarting existing docker container for "minikube" ...
I0308 19:13:20.274471   25344 cli_runner.go:110] Run: docker start minikube
I0308 19:13:21.009835   25344 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0308 19:13:21.092271   25344 kic.go:356] container "minikube" state is running.
I0308 19:13:21.092979   25344 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0308 19:13:21.177077   25344 profile.go:150] Saving config to /home/vineet/.minikube/profiles/minikube/config.json ...
I0308 19:13:21.177546   25344 machine.go:88] provisioning docker machine ...
I0308 19:13:21.177599   25344 ubuntu.go:166] provisioning hostname "minikube"
I0308 19:13:21.177729   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:21.268559   25344 main.go:119] libmachine: Using SSH client type: native
I0308 19:13:21.269198   25344 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0308 19:13:21.269272   25344 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0308 19:13:21.270258   25344 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33420->127.0.0.1:32775: read: connection reset by peer
I0308 19:13:24.513476   25344 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0308 19:13:24.513630   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:24.596115   25344 main.go:119] libmachine: Using SSH client type: native
I0308 19:13:24.596816   25344 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0308 19:13:24.596965   25344 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0308 19:13:24.788479   25344 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0308 19:13:24.788601   25344 ubuntu.go:172] set auth options {CertDir:/home/vineet/.minikube CaCertPath:/home/vineet/.minikube/certs/ca.pem CaPrivateKeyPath:/home/vineet/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/vineet/.minikube/machines/server.pem ServerKeyPath:/home/vineet/.minikube/machines/server-key.pem ClientKeyPath:/home/vineet/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/vineet/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/vineet/.minikube}
I0308 19:13:24.788667   25344 ubuntu.go:174] setting up certificates
I0308 19:13:24.788703   25344 provision.go:82] configureAuth start
I0308 19:13:24.788844   25344 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0308 19:13:24.869216   25344 provision.go:131] copyHostCerts
I0308 19:13:24.869456   25344 exec_runner.go:91] found /home/vineet/.minikube/ca.pem, removing ...
I0308 19:13:24.869748   25344 exec_runner.go:98] cp: /home/vineet/.minikube/certs/ca.pem --> /home/vineet/.minikube/ca.pem (1078 bytes)
I0308 19:13:24.870204   25344 exec_runner.go:91] found /home/vineet/.minikube/cert.pem, removing ...
I0308 19:13:24.870405   25344 exec_runner.go:98] cp: /home/vineet/.minikube/certs/cert.pem --> /home/vineet/.minikube/cert.pem (1123 bytes)
I0308 19:13:24.870747   25344 exec_runner.go:91] found /home/vineet/.minikube/key.pem, removing ...
I0308 19:13:24.870885   25344 exec_runner.go:98] cp: /home/vineet/.minikube/certs/key.pem --> /home/vineet/.minikube/key.pem (1679 bytes)
I0308 19:13:24.871131   25344 provision.go:105] generating server cert: /home/vineet/.minikube/machines/server.pem ca-key=/home/vineet/.minikube/certs/ca.pem private-key=/home/vineet/.minikube/certs/ca-key.pem org=vineet.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I0308 19:13:25.333295   25344 provision.go:159] copyRemoteCerts
I0308 19:13:25.333394   25344 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0308 19:13:25.333488   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:25.412841   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:25.559967   25344 ssh_runner.go:215] scp /home/vineet/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0308 19:13:25.603995   25344 ssh_runner.go:215] scp /home/vineet/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0308 19:13:25.651936   25344 ssh_runner.go:215] scp /home/vineet/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0308 19:13:25.704013   25344 provision.go:85] duration metric: configureAuth took 915.249275ms
I0308 19:13:25.704084   25344 ubuntu.go:190] setting minikube options for container-runtime
I0308 19:13:25.704588   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:25.774363   25344 main.go:119] libmachine: Using SSH client type: native
I0308 19:13:25.774818   25344 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0308 19:13:25.774888   25344 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0308 19:13:25.972638   25344 main.go:119] libmachine: SSH cmd err, output: <nil>: overlay

I0308 19:13:25.972744   25344 ubuntu.go:71] root file system type: overlay
I0308 19:13:25.973427   25344 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0308 19:13:25.973613   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:26.053122   25344 main.go:119] libmachine: Using SSH client type: native
I0308 19:13:26.053706   25344 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0308 19:13:26.054098   25344 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

Environment="log-driver=fluentd"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0308 19:13:26.277458   25344 main.go:119] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

Environment=log-driver=fluentd

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
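
This generated unit shows where the --docker-env value ends up: under [Service] it becomes Environment=log-driver=fluentd, i.e. an environment variable of the dockerd process, while the ExecStart line carries no --log-driver flag, which is why docker info keeps reporting json-file. A sketch of a manual override inside the VM, assuming a hypothetical drop-in path (the ExecStart line is copied from the unit above, with the flag appended):

    # /etc/systemd/system/docker.service.d/10-log-driver.conf (hypothetical path)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 --log-driver=fluentd

    # then, inside the VM:
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker info --format '{{.LoggingDriver}}'   # should print: fluentd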

I0308 19:13:26.277876   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:26.359903   25344 main.go:119] libmachine: Using SSH client type: native
I0308 19:13:26.360433   25344 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32775 <nil> <nil>}
I0308 19:13:26.360534   25344 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0308 19:13:26.568042   25344 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0308 19:13:26.568146   25344 machine.go:91] provisioned docker machine in 5.39056512s
I0308 19:13:26.568186   25344 start.go:268] post-start starting for "minikube" (driver="docker")
I0308 19:13:26.568218   25344 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0308 19:13:26.568401   25344 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0308 19:13:26.568536   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:26.641550   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:26.771845   25344 ssh_runner.go:148] Run: cat /etc/os-release
I0308 19:13:26.780057   25344 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0308 19:13:26.780214   25344 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0308 19:13:26.780276   25344 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0308 19:13:26.780336   25344 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0308 19:13:26.780373   25344 filesync.go:118] Scanning /home/vineet/.minikube/addons for local assets ...
I0308 19:13:26.780579   25344 filesync.go:118] Scanning /home/vineet/.minikube/files for local assets ...
I0308 19:13:26.780721   25344 start.go:271] post-start completed in 212.487671ms
I0308 19:13:26.780868   25344 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0308 19:13:26.781019   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:26.858304   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:26.986057   25344 fix.go:56] fixHost completed within 6.792981518s
I0308 19:13:26.986118   25344 start.go:81] releasing machines lock for "minikube", held for 6.793113065s
I0308 19:13:26.986285   25344 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0308 19:13:27.060882   25344 ssh_runner.go:148] Run: systemctl --version
I0308 19:13:27.060912   25344 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0308 19:13:27.061020   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:27.061061   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:27.142043   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:27.142513   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:27.656331   25344 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0308 19:13:27.683225   25344 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0308 19:13:27.705022   25344 cruntime.go:193] skipping containerd shutdown because we are bound to it
I0308 19:13:27.705219   25344 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0308 19:13:27.729132   25344 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0308 19:13:27.759131   25344 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0308 19:13:27.923702   25344 ssh_runner.go:148] Run: sudo systemctl start docker
I0308 19:13:27.945821   25344 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0308 19:13:28.067150   25344 out.go:110] 🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
I0308 19:13:28.068471   25344 out.go:110]     ▪ env log-driver=fluentd
    ▪ env log-driver=fluentd
I0308 19:13:28.068624   25344 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I0308 19:13:28.149584   25344 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0308 19:13:28.157658   25344 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0308 19:13:28.183601   25344 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I0308 19:13:28.183756   25344 preload.go:105] Found local preload: /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I0308 19:13:28.183997   25344 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0308 19:13:28.260643   25344 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0308 19:13:28.260726   25344 docker.go:319] Images already preloaded, skipping extraction
I0308 19:13:28.260846   25344 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0308 19:13:28.337964   25344 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0308 19:13:28.338079   25344 cache_images.go:74] Images are preloaded, skipping loading
I0308 19:13:28.338317   25344 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0308 19:13:28.508448   25344 cni.go:74] Creating CNI manager for ""
I0308 19:13:28.508502   25344 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0308 19:13:28.508535   25344 kubeadm.go:84] Using pod CIDR: 
I0308 19:13:28.508572   25344 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0308 19:13:28.508910   25344 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.4
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249

I0308 19:13:28.509290   25344 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0308 19:13:28.509487   25344 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4
I0308 19:13:28.528721   25344 binaries.go:44] Found k8s binaries, skipping transfer
I0308 19:13:28.528934   25344 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0308 19:13:28.549706   25344 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0308 19:13:28.587385   25344 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0308 19:13:28.619030   25344 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes)
I0308 19:13:28.652919   25344 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0308 19:13:28.661314   25344 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2   control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0308 19:13:28.689272   25344 certs.go:52] Setting up /home/vineet/.minikube/profiles/minikube for IP: 192.168.49.2
I0308 19:13:28.689382   25344 certs.go:169] skipping minikubeCA CA generation: /home/vineet/.minikube/ca.key
I0308 19:13:28.689446   25344 certs.go:169] skipping proxyClientCA CA generation: /home/vineet/.minikube/proxy-client-ca.key
I0308 19:13:28.689572   25344 certs.go:269] skipping minikube-user signed cert generation: /home/vineet/.minikube/profiles/minikube/client.key
I0308 19:13:28.689629   25344 certs.go:269] skipping minikube signed cert generation: /home/vineet/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0308 19:13:28.689692   25344 certs.go:269] skipping aggregator signed cert generation: /home/vineet/.minikube/profiles/minikube/proxy-client.key
I0308 19:13:28.689911   25344 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/ca-key.pem (1675 bytes)
I0308 19:13:28.690004   25344 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/ca.pem (1078 bytes)
I0308 19:13:28.690088   25344 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/cert.pem (1123 bytes)
I0308 19:13:28.690169   25344 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/key.pem (1679 bytes)
I0308 19:13:28.691635   25344 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0308 19:13:28.737094   25344 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0308 19:13:28.779243   25344 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0308 19:13:28.821317   25344 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0308 19:13:28.866584   25344 ssh_runner.go:215] scp /home/vineet/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0308 19:13:28.914826   25344 ssh_runner.go:215] scp /home/vineet/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0308 19:13:28.960711   25344 ssh_runner.go:215] scp /home/vineet/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0308 19:13:29.004375   25344 ssh_runner.go:215] scp /home/vineet/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0308 19:13:29.048519   25344 ssh_runner.go:215] scp /home/vineet/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0308 19:13:29.093580   25344 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0308 19:13:29.129811   25344 ssh_runner.go:148] Run: openssl version
I0308 19:13:29.139234   25344 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0308 19:13:29.159770   25344 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0308 19:13:29.167291   25344 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec  4 10:57 /usr/share/ca-certificates/minikubeCA.pem
I0308 19:13:29.167460   25344 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0308 19:13:29.177672   25344 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0308 19:13:29.192762   25344 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0308 19:13:29.193272   25344 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0308 19:13:29.271116   25344 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0308 19:13:29.283550   25344 kubeadm.go:335] found existing configuration files, will attempt cluster restart
I0308 19:13:29.283624   25344 kubeadm.go:527] restartCluster start
I0308 19:13:29.283776   25344 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0308 19:13:29.297905   25344 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0308 19:13:29.301901   25344 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /home/vineet/.kube/config
I0308 19:13:29.302937   25344 kubeconfig.go:128] "minikube" context is missing from /home/vineet/.kube/config - will repair!
I0308 19:13:29.303645   25344 lock.go:36] WriteFile acquiring /home/vineet/.kube/config: {Name:mked609b8334433c2f320799e75acf3d151a607f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0308 19:13:29.309233   25344 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0308 19:13:29.322833   25344 api_server.go:146] Checking apiserver status ...
I0308 19:13:29.323004   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0308 19:13:29.354961   25344 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0308 19:13:29.355005   25344 kubeadm.go:506] needs reconfigure: apiserver in state Stopped
I0308 19:13:29.355026   25344 kubeadm.go:945] stopping kube-system containers ...
I0308 19:13:29.355114   25344 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0308 19:13:29.429569   25344 docker.go:229] Stopping containers: [b6905a71a123 5c73a0ce5a17 2a3b0c389b58 709d37b2f079 b1b1bd5d84b4 114bdae2eb9a b48a10c75801 99bc42a32b0c c9074a52fe0f 6f80b4ea4d21 edbb7b1dee41 1caa1c797b78 931dda362eda edbccb75a7f9 53a3785fad51 8a76517a5d17 575d296c844a 0306364caba2 76d7d45d67f1 b71b44db3d81 b94d28740c37 c425c2f00ff5 5659b1019995 bd482afc058b 81a14d529da3 ee611cdeae27 675de39192f4]
I0308 19:13:29.429723   25344 ssh_runner.go:148] Run: docker stop b6905a71a123 5c73a0ce5a17 2a3b0c389b58 709d37b2f079 b1b1bd5d84b4 114bdae2eb9a b48a10c75801 99bc42a32b0c c9074a52fe0f 6f80b4ea4d21 edbb7b1dee41 1caa1c797b78 931dda362eda edbccb75a7f9 53a3785fad51 8a76517a5d17 575d296c844a 0306364caba2 76d7d45d67f1 b71b44db3d81 b94d28740c37 c425c2f00ff5 5659b1019995 bd482afc058b 81a14d529da3 ee611cdeae27 675de39192f4
I0308 19:13:29.522788   25344 ssh_runner.go:148] Run: sudo systemctl stop kubelet
I0308 19:13:29.544896   25344 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0308 19:13:29.561313   25344 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5615 Dec  4 10:58 /etc/kubernetes/admin.conf
-rw------- 1 root root 5628 Mar  8 13:38 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Dec  4 10:59 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5580 Mar  8 13:38 /etc/kubernetes/scheduler.conf

I0308 19:13:29.561453   25344 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0308 19:13:29.578419   25344 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0308 19:13:29.597589   25344 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0308 19:13:29.611151   25344 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:
I0308 19:13:29.611350   25344 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0308 19:13:29.630438   25344 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0308 19:13:29.644666   25344 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:
I0308 19:13:29.644841   25344 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0308 19:13:29.661623   25344 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0308 19:13:29.678713   25344 kubeadm.go:603] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0308 19:13:29.678784   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:29.989465   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:32.062611   25344 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.073065336s)
I0308 19:13:32.062709   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:32.622856   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:33.039723   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:33.374063   25344 api_server.go:48] waiting for apiserver process to appear ...
I0308 19:13:33.374164   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:33.907194   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:34.407343   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:34.907224   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:35.407253   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:35.907500   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:36.407362   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:36.907396   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:37.407168   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:37.907234   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:38.407265   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:38.907346   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:39.407038   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:39.907044   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:40.407132   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:40.907090   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:41.407031   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:41.907112   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:42.032607   25344 api_server.go:68] duration metric: took 8.658539901s to wait for apiserver process to appear ...
I0308 19:13:42.032662   25344 api_server.go:84] waiting for apiserver healthz status ...
I0308 19:13:42.032690   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:42.033068   25344 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0308 19:13:42.533445   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:53.508296   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0308 19:13:53.508392   25344 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0308 19:13:53.533382   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:53.695239   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0308 19:13:53.695310   25344 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[+]poststarthook/crd-informer-synced ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[-]poststarthook/apiservice-registration-controller failed: reason withheld
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0308 19:13:54.033515   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:54.052380   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0308 19:13:54.052482   25344 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0308 19:13:54.533294   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:54.590144   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0308 19:13:54.590242   25344 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0308 19:13:55.033391   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:55.051352   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
I0308 19:13:55.076073   25344 api_server.go:137] control plane version: v1.19.4
I0308 19:13:55.076160   25344 api_server.go:127] duration metric: took 13.04347781s to wait for apiserver health ...
I0308 19:13:55.076233   25344 cni.go:74] Creating CNI manager for ""
I0308 19:13:55.076292   25344 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0308 19:13:55.076354   25344 system_pods.go:41] waiting for kube-system pods to appear ...
I0308 19:13:55.107422   25344 system_pods.go:57] 7 kube-system pods found
I0308 19:13:55.107473   25344 system_pods.go:59] "coredns-f9fd979d6-b96tv" [ce39a9cd-a11a-4034-bb11-e1c257970893] Running
I0308 19:13:55.107498   25344 system_pods.go:59] "etcd-minikube" [0157ffa7-f6a3-4970-a274-d8900d83b8d5] Running
I0308 19:13:55.107540   25344 system_pods.go:59] "kube-apiserver-minikube" [3af8a2d0-05a5-4501-bf3c-a281d9158ec9] Running
I0308 19:13:55.107566   25344 system_pods.go:59] "kube-controller-manager-minikube" [0bb2d5b8-7bcd-4507-9e4d-49dbb8353cc2] Running
I0308 19:13:55.107589   25344 system_pods.go:59] "kube-proxy-gtbfh" [c8fbec3c-e66a-4ecb-99de-a8431edd4d1e] Running
I0308 19:13:55.107610   25344 system_pods.go:59] "kube-scheduler-minikube" [74636edb-336c-4eba-89bd-c249c9c75298] Running
I0308 19:13:55.107633   25344 system_pods.go:59] "storage-provisioner" [a073288b-4463-469d-909e-88c6db91c5a6] Running
I0308 19:13:55.107653   25344 system_pods.go:72] duration metric: took 31.277663ms to wait for pod list to return data ...
I0308 19:13:55.107673   25344 node_conditions.go:101] verifying NodePressure condition ...
I0308 19:13:55.117755   25344 node_conditions.go:121] node storage ephemeral capacity is 57410400Ki
I0308 19:13:55.117848   25344 node_conditions.go:122] node cpu capacity is 4
I0308 19:13:55.117918   25344 node_conditions.go:104] duration metric: took 10.213058ms to run NodePressure ...
I0308 19:13:55.117971   25344 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0308 19:13:55.757739   25344 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0308 19:13:55.810754   25344 ops.go:34] apiserver oom_adj: -16
I0308 19:13:55.810810   25344 kubeadm.go:531] restartCluster took 26.527138252s
I0308 19:13:55.810841   25344 kubeadm.go:326] StartCluster complete in 26.618109577s
I0308 19:13:55.810884   25344 settings.go:127] acquiring lock: {Name:mk2f3b028bca7768039589cb41709df8dd202a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0308 19:13:55.811052   25344 settings.go:135] Updating kubeconfig:  /home/vineet/.kube/config
I0308 19:13:55.812253   25344 lock.go:36] WriteFile acquiring /home/vineet/.kube/config: {Name:mked609b8334433c2f320799e75acf3d151a607f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0308 19:13:55.813172   25344 start.go:198] Will wait 6m0s for node up to 
I0308 19:13:55.816580   25344 out.go:110] πŸ”Ž  Verifying Kubernetes components...
πŸ”Ž  Verifying Kubernetes components...
I0308 19:13:55.813541   25344 addons.go:371] enableAddons start: toEnable=map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0308 19:13:55.816716   25344 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I0308 19:13:55.816739   25344 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0308 19:13:55.813690   25344 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0308 19:13:55.816798   25344 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0308 19:13:55.816844   25344 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0308 19:13:55.816848   25344 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0308 19:13:55.817301   25344 addons.go:140] addon storage-provisioner should already be in state true
I0308 19:13:55.817340   25344 host.go:66] Checking if "minikube" exists ...
I0308 19:13:55.818251   25344 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0308 19:13:55.818833   25344 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0308 19:13:55.892420   25344 api_server.go:48] waiting for apiserver process to appear ...
I0308 19:13:55.892536   25344 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0308 19:13:56.020291   25344 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0308 19:13:56.020414   25344 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0308 19:13:56.020533   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:56.074284   25344 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0308 19:13:56.074350   25344 addons.go:140] addon default-storageclass should already be in state true
I0308 19:13:56.074395   25344 host.go:66] Checking if "minikube" exists ...
I0308 19:13:56.075454   25344 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0308 19:13:56.213511   25344 start.go:553] successfully scaled coredns replicas to 1
I0308 19:13:56.214812   25344 api_server.go:68] duration metric: took 401.585065ms to wait for apiserver process to appear ...
I0308 19:13:56.216395   25344 api_server.go:84] waiting for apiserver healthz status ...
I0308 19:13:56.216477   25344 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0308 19:13:56.231525   25344 addons.go:243] installing /etc/kubernetes/addons/storageclass.yaml
I0308 19:13:56.231580   25344 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0308 19:13:56.231680   25344 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0308 19:13:56.240689   25344 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
I0308 19:13:56.246879   25344 api_server.go:137] control plane version: v1.19.4
I0308 19:13:56.246932   25344 api_server.go:127] duration metric: took 30.46677ms to wait for apiserver health ...
I0308 19:13:56.246965   25344 system_pods.go:41] waiting for kube-system pods to appear ...
I0308 19:13:56.265692   25344 system_pods.go:57] 7 kube-system pods found
I0308 19:13:56.265950   25344 system_pods.go:59] "coredns-f9fd979d6-b96tv" [ce39a9cd-a11a-4034-bb11-e1c257970893] Running
I0308 19:13:56.266134   25344 system_pods.go:59] "etcd-minikube" [0157ffa7-f6a3-4970-a274-d8900d83b8d5] Running
I0308 19:13:56.266302   25344 system_pods.go:59] "kube-apiserver-minikube" [3af8a2d0-05a5-4501-bf3c-a281d9158ec9] Running
I0308 19:13:56.266464   25344 system_pods.go:59] "kube-controller-manager-minikube" [0bb2d5b8-7bcd-4507-9e4d-49dbb8353cc2] Running
I0308 19:13:56.266677   25344 system_pods.go:59] "kube-proxy-gtbfh" [c8fbec3c-e66a-4ecb-99de-a8431edd4d1e] Running
I0308 19:13:56.266834   25344 system_pods.go:59] "kube-scheduler-minikube" [74636edb-336c-4eba-89bd-c249c9c75298] Running
I0308 19:13:56.266987   25344 system_pods.go:59] "storage-provisioner" [a073288b-4463-469d-909e-88c6db91c5a6] Running
I0308 19:13:56.267134   25344 system_pods.go:72] duration metric: took 20.144033ms to wait for pod list to return data ...
I0308 19:13:56.267292   25344 kubeadm.go:474] duration metric: took 454.049099ms to wait for : map[apiserver:true system_pods:true] ...
I0308 19:13:56.267464   25344 node_conditions.go:101] verifying NodePressure condition ...
I0308 19:13:56.285516   25344 node_conditions.go:121] node storage ephemeral capacity is 57410400Ki
I0308 19:13:56.285800   25344 node_conditions.go:122] node cpu capacity is 4
I0308 19:13:56.285880   25344 node_conditions.go:104] duration metric: took 18.280293ms to run NodePressure ...
I0308 19:13:56.285931   25344 start.go:203] waiting for startup goroutines ...
I0308 19:13:56.290895   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:56.392203   25344 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0308 19:13:56.494243   25344 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0308 19:13:56.535588   25344 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0308 19:13:57.548285   25344 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.053976351s)
I0308 19:13:57.548450   25344 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.012818391s)
I0308 19:13:57.554394   25344 out.go:110] 🌟  Enabled addons: storage-provisioner, default-storageclass
🌟  Enabled addons: storage-provisioner, default-storageclass
I0308 19:13:57.554499   25344 addons.go:373] enableAddons completed in 1.740975734s
I0308 19:13:57.672789   25344 start.go:461] kubectl: 1.19.3, cluster: 1.19.4 (minor skew: 0)
I0308 19:13:57.674369   25344 out.go:110] πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


afbjorklund commented 3 years ago

Did you mean to use --docker-opt?

      --docker-env=[]: Environment variables to pass to the Docker daemon. (format: key=value)
      --docker-opt=[]: Specify arbitrary flags to pass to the Docker daemon. (format: key=value)
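
For context: a --docker-env value only becomes an Environment= entry in the node's /lib/systemd/system/docker.service unit (it configures the daemon's environment, not its flags), while --docker-opt is the mechanism for passing actual dockerd flags. A quick way to verify whether the option really reached the daemon (a sketch, assuming the docker driver and the default "minikube" profile):

      $ minikube start --docker-opt log-driver=fluentd
      $ minikube ssh -- sudo systemctl cat docker.service            # check whether --log-driver=fluentd shows up on the ExecStart line
      $ minikube ssh -- docker info --format '{{.LoggingDriver}}'    # prints the log driver the daemon is actually using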
vineetbhardwaj80 commented 3 years ago

Appreciate the quick response. I had tried --docker-opt as well, but it did not change the behaviour. Logs:

vineet@vineet-laptop:~$ minikube start --docker-opt log-driver=fluentd --alsologtostderr
I0309 12:26:49.409642   46640 out.go:185] Setting OutFile to fd 1 ...
I0309 12:26:49.410232   46640 out.go:237] isatty.IsTerminal(1) = true
I0309 12:26:49.410265   46640 out.go:198] Setting ErrFile to fd 2...
I0309 12:26:49.410315   46640 out.go:237] isatty.IsTerminal(2) = true
I0309 12:26:49.410580   46640 root.go:279] Updating PATH: /home/vineet/.minikube/bin
W0309 12:26:49.410910   46640 root.go:254] Error reading config file at /home/vineet/.minikube/config/config.json: open /home/vineet/.minikube/config/config.json: no such file or directory
I0309 12:26:49.411611   46640 out.go:192] Setting JSON to false
I0309 12:26:49.435667   46640 start.go:103] hostinfo: {"hostname":"vineet-laptop","uptime":64456,"bootTime":1615208553,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-48-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"f0f9963a-7ca1-4389-bb94-cb9964c37709"}
I0309 12:26:49.437968   46640 start.go:113] virtualization: kvm host
I0309 12:26:49.443181   46640 out.go:110] πŸ˜„  minikube v1.15.1 on Ubuntu 20.04
πŸ˜„  minikube v1.15.1 on Ubuntu 20.04
I0309 12:26:49.444800   46640 driver.go:302] Setting default libvirt URI to qemu:///system
I0309 12:26:49.546795   46640 docker.go:117] docker version: linux-19.03.12
I0309 12:26:49.547074   46640 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0309 12:26:49.713855   46640 info.go:253] docker info: {ID:QV3K:7GW4:7RTJ:UL6X:SAHK:7GYB:RFXL:EPDK:T23U:PJXO:6GLN:RJEQ Containers:32 ContainersRunning:0 ContainersPaused:0 ContainersStopped:32 Images:38 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-03-09 12:26:49.617431866 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-48-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8229982208 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vineet-laptop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0309 12:26:49.714283   46640 docker.go:147] overlay module found
I0309 12:26:49.716241   46640 out.go:110] ✨  Using the docker driver based on existing profile
✨  Using the docker driver based on existing profile
I0309 12:26:49.716305   46640 start.go:272] selected driver: docker
I0309 12:26:49.716340   46640 start.go:686] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0309 12:26:49.716640   46640 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I0309 12:26:49.716849   46640 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I0309 12:26:49.887893   46640 info.go:253] docker info: {ID:QV3K:7GW4:7RTJ:UL6X:SAHK:7GYB:RFXL:EPDK:T23U:PJXO:6GLN:RJEQ Containers:32 ContainersRunning:0 ContainersPaused:0 ContainersStopped:32 Images:38 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2021-03-09 12:26:49.794438917 +0530 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-48-generic OperatingSystem:Ubuntu 20.04.1 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:8229982208 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vineet-laptop Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I0309 12:26:49.889389   46640 start_flags.go:364] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0309 12:26:49.891975   46640 out.go:110] πŸ‘  Starting control plane node minikube in cluster minikube
πŸ‘  Starting control plane node minikube in cluster minikube
I0309 12:26:49.995414   46640 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull
I0309 12:26:49.995530   46640 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull
I0309 12:26:49.995600   46640 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I0309 12:26:49.995810   46640 preload.go:105] Found local preload: /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I0309 12:26:49.995907   46640 cache.go:54] Caching tarball of preloaded images
I0309 12:26:49.995958   46640 preload.go:131] Found /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0309 12:26:49.995976   46640 cache.go:57] Finished verifying existence of preloaded tar for  v1.19.4 on docker
I0309 12:26:49.997014   46640 profile.go:150] Saving config to /home/vineet/.minikube/profiles/minikube/config.json ...
I0309 12:26:49.997562   46640 cache.go:184] Successfully downloaded all kic artifacts
I0309 12:26:49.997638   46640 start.go:314] acquiring machines lock for minikube: {Name:mke3cc14a5201cd692abcd02aa17909c508fd99f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0309 12:26:49.998025   46640 start.go:318] acquired machines lock for "minikube" in 304.117Β΅s
I0309 12:26:49.998079   46640 start.go:94] Skipping create...Using existing machine configuration
I0309 12:26:49.998111   46640 fix.go:54] fixHost starting: 
I0309 12:26:49.999167   46640 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0309 12:26:50.081751   46640 fix.go:107] recreateIfNeeded on minikube: state=Stopped err=<nil>
W0309 12:26:50.081864   46640 fix.go:133] unexpected machine state, will restart: <nil>
I0309 12:26:50.085956   46640 out.go:110] πŸ”„  Restarting existing docker container for "minikube" ...
πŸ”„  Restarting existing docker container for "minikube" ...
I0309 12:26:50.086146   46640 cli_runner.go:110] Run: docker start minikube
I0309 12:26:50.817230   46640 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0309 12:26:50.899630   46640 kic.go:356] container "minikube" state is running.
I0309 12:26:50.900324   46640 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0309 12:26:50.984775   46640 profile.go:150] Saving config to /home/vineet/.minikube/profiles/minikube/config.json ...
I0309 12:26:50.985224   46640 machine.go:88] provisioning docker machine ...
I0309 12:26:50.985278   46640 ubuntu.go:166] provisioning hostname "minikube"
I0309 12:26:50.985393   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:51.076629   46640 main.go:119] libmachine: Using SSH client type: native
I0309 12:26:51.077040   46640 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0309 12:26:51.077090   46640 main.go:119] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0309 12:26:51.078557   46640 main.go:119] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60998->127.0.0.1:32779: read: connection reset by peer
I0309 12:26:54.317889   46640 main.go:119] libmachine: SSH cmd err, output: <nil>: minikube

I0309 12:26:54.318080   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:54.398899   46640 main.go:119] libmachine: Using SSH client type: native
I0309 12:26:54.399423   46640 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0309 12:26:54.399510   46640 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0309 12:26:54.591171   46640 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I0309 12:26:54.591286   46640 ubuntu.go:172] set auth options {CertDir:/home/vineet/.minikube CaCertPath:/home/vineet/.minikube/certs/ca.pem CaPrivateKeyPath:/home/vineet/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/vineet/.minikube/machines/server.pem ServerKeyPath:/home/vineet/.minikube/machines/server-key.pem ClientKeyPath:/home/vineet/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/vineet/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/vineet/.minikube}
I0309 12:26:54.591401   46640 ubuntu.go:174] setting up certificates
I0309 12:26:54.591436   46640 provision.go:82] configureAuth start
I0309 12:26:54.591660   46640 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0309 12:26:54.664731   46640 provision.go:131] copyHostCerts
I0309 12:26:54.664914   46640 exec_runner.go:91] found /home/vineet/.minikube/ca.pem, removing ...
I0309 12:26:54.665132   46640 exec_runner.go:98] cp: /home/vineet/.minikube/certs/ca.pem --> /home/vineet/.minikube/ca.pem (1078 bytes)
I0309 12:26:54.665405   46640 exec_runner.go:91] found /home/vineet/.minikube/cert.pem, removing ...
I0309 12:26:54.665539   46640 exec_runner.go:98] cp: /home/vineet/.minikube/certs/cert.pem --> /home/vineet/.minikube/cert.pem (1123 bytes)
I0309 12:26:54.665759   46640 exec_runner.go:91] found /home/vineet/.minikube/key.pem, removing ...
I0309 12:26:54.665894   46640 exec_runner.go:98] cp: /home/vineet/.minikube/certs/key.pem --> /home/vineet/.minikube/key.pem (1679 bytes)
I0309 12:26:54.666145   46640 provision.go:105] generating server cert: /home/vineet/.minikube/machines/server.pem ca-key=/home/vineet/.minikube/certs/ca.pem private-key=/home/vineet/.minikube/certs/ca-key.pem org=vineet.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube]
I0309 12:26:54.895633   46640 provision.go:159] copyRemoteCerts
I0309 12:26:54.895742   46640 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0309 12:26:54.895864   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:54.972430   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:26:55.103468   46640 ssh_runner.go:215] scp /home/vineet/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0309 12:26:55.148363   46640 ssh_runner.go:215] scp /home/vineet/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0309 12:26:55.194792   46640 ssh_runner.go:215] scp /home/vineet/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0309 12:26:55.241685   46640 provision.go:85] duration metric: configureAuth took 650.199727ms
I0309 12:26:55.241763   46640 ubuntu.go:190] setting minikube options for container-runtime
I0309 12:26:55.242427   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:55.318571   46640 main.go:119] libmachine: Using SSH client type: native
I0309 12:26:55.318999   46640 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0309 12:26:55.319057   46640 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0309 12:26:55.520022   46640 main.go:119] libmachine: SSH cmd err, output: <nil>: overlay

I0309 12:26:55.520088   46640 ubuntu.go:71] root file system type: overlay
I0309 12:26:55.520847   46640 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I0309 12:26:55.521022   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:55.609134   46640 main.go:119] libmachine: Using SSH client type: native
I0309 12:26:55.609794   46640 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0309 12:26:55.610117   46640 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0309 12:26:55.833599   46640 main.go:119] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0309 12:26:55.833823   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:55.909705   46640 main.go:119] libmachine: Using SSH client type: native
I0309 12:26:55.910309   46640 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x808c20] 0x808be0 <nil>  [] 0s} 127.0.0.1 32779 <nil> <nil>}
I0309 12:26:55.910417   46640 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0309 12:26:57.451613   46640 main.go:119] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service   2021-03-08 13:20:17.490291040 +0000
+++ /lib/systemd/system/docker.service.new  2021-03-09 06:56:55.825384324 +0000
@@ -9,7 +9,6 @@
 [Service]
 Type=notify

-Environment=log-driver=fluentd

 # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0309 12:26:57.451696   46640 machine.go:91] provisioned docker machine in 6.46643973s
I0309 12:26:57.451738   46640 start.go:268] post-start starting for "minikube" (driver="docker")
I0309 12:26:57.451771   46640 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0309 12:26:57.451919   46640 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0309 12:26:57.452043   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:57.528515   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:26:57.664609   46640 ssh_runner.go:148] Run: cat /etc/os-release
I0309 12:26:57.673600   46640 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0309 12:26:57.673710   46640 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0309 12:26:57.673754   46640 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0309 12:26:57.673777   46640 info.go:97] Remote host: Ubuntu 20.04.1 LTS
I0309 12:26:57.673807   46640 filesync.go:118] Scanning /home/vineet/.minikube/addons for local assets ...
I0309 12:26:57.673947   46640 filesync.go:118] Scanning /home/vineet/.minikube/files for local assets ...
I0309 12:26:57.674064   46640 start.go:271] post-start completed in 222.296545ms
I0309 12:26:57.674213   46640 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0309 12:26:57.674331   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:57.763185   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:26:57.896283   46640 fix.go:56] fixHost completed within 7.898162807s
I0309 12:26:57.896394   46640 start.go:81] releasing machines lock for "minikube", held for 7.898327672s
I0309 12:26:57.896656   46640 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0309 12:26:57.975091   46640 ssh_runner.go:148] Run: systemctl --version
I0309 12:26:57.975130   46640 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I0309 12:26:57.975227   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:57.975368   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:26:58.059666   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:26:58.062095   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:26:58.182051   46640 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I0309 12:27:00.084988   46640 ssh_runner.go:188] Completed: curl -sS -m 2 https://k8s.gcr.io/: (2.109743296s)
I0309 12:27:00.085214   46640 ssh_runner.go:188] Completed: sudo systemctl is-active --quiet service containerd: (1.903096858s)
I0309 12:27:00.085424   46640 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0309 12:27:00.111919   46640 cruntime.go:193] skipping containerd shutdown because we are bound to it
I0309 12:27:00.112053   46640 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I0309 12:27:00.138759   46640 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I0309 12:27:00.163338   46640 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I0309 12:27:00.318813   46640 ssh_runner.go:148] Run: sudo systemctl start docker
I0309 12:27:00.341877   46640 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I0309 12:27:00.461658   46640 out.go:110] 🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
I0309 12:27:00.463673   46640 out.go:110]     β–ͺ opt log-driver=fluentd
    β–ͺ opt log-driver=fluentd
I0309 12:27:00.463904   46640 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I0309 12:27:00.541602   46640 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0309 12:27:00.550207   46640 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0309 12:27:00.577037   46640 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker
I0309 12:27:00.577120   46640 preload.go:105] Found local preload: /home/vineet/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4
I0309 12:27:00.577270   46640 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0309 12:27:00.655354   46640 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0309 12:27:00.655487   46640 docker.go:319] Images already preloaded, skipping extraction
I0309 12:27:00.655715   46640 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I0309 12:27:00.731158   46640 docker.go:382] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0309 12:27:00.731264   46640 cache_images.go:74] Images are preloaded, skipping loading
I0309 12:27:00.731448   46640 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I0309 12:27:00.919330   46640 cni.go:74] Creating CNI manager for ""
I0309 12:27:00.919403   46640 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0309 12:27:00.919477   46640 kubeadm.go:84] Using pod CIDR: 
I0309 12:27:00.919520   46640 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0309 12:27:00.920061   46640 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.4
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249

I0309 12:27:00.920510   46640 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0309 12:27:00.920731   46640 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4
I0309 12:27:00.940307   46640 binaries.go:44] Found k8s binaries, skipping transfer
I0309 12:27:00.940462   46640 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0309 12:27:00.957551   46640 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0309 12:27:00.990527   46640 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I0309 12:27:01.027065   46640 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes)
I0309 12:27:01.061890   46640 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0309 12:27:01.067332   46640 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2   control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I0309 12:27:01.094713   46640 certs.go:52] Setting up /home/vineet/.minikube/profiles/minikube for IP: 192.168.49.2
I0309 12:27:01.094861   46640 certs.go:169] skipping minikubeCA CA generation: /home/vineet/.minikube/ca.key
I0309 12:27:01.094959   46640 certs.go:169] skipping proxyClientCA CA generation: /home/vineet/.minikube/proxy-client-ca.key
I0309 12:27:01.095127   46640 certs.go:269] skipping minikube-user signed cert generation: /home/vineet/.minikube/profiles/minikube/client.key
I0309 12:27:01.095257   46640 certs.go:269] skipping minikube signed cert generation: /home/vineet/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0309 12:27:01.095355   46640 certs.go:269] skipping aggregator signed cert generation: /home/vineet/.minikube/profiles/minikube/proxy-client.key
I0309 12:27:01.095794   46640 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/ca-key.pem (1675 bytes)
I0309 12:27:01.095998   46640 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/ca.pem (1078 bytes)
I0309 12:27:01.096154   46640 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/cert.pem (1123 bytes)
I0309 12:27:01.096274   46640 certs.go:348] found cert: /home/vineet/.minikube/certs/home/vineet/.minikube/certs/key.pem (1679 bytes)
I0309 12:27:01.099449   46640 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0309 12:27:01.145587   46640 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0309 12:27:01.190622   46640 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0309 12:27:01.233493   46640 ssh_runner.go:215] scp /home/vineet/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0309 12:27:01.275004   46640 ssh_runner.go:215] scp /home/vineet/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0309 12:27:01.322852   46640 ssh_runner.go:215] scp /home/vineet/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0309 12:27:01.370988   46640 ssh_runner.go:215] scp /home/vineet/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0309 12:27:01.415250   46640 ssh_runner.go:215] scp /home/vineet/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0309 12:27:01.459329   46640 ssh_runner.go:215] scp /home/vineet/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0309 12:27:01.508715   46640 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I0309 12:27:01.545309   46640 ssh_runner.go:148] Run: openssl version
I0309 12:27:01.557897   46640 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0309 12:27:01.577645   46640 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0309 12:27:01.588361   46640 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec  4 10:57 /usr/share/ca-certificates/minikubeCA.pem
I0309 12:27:01.588463   46640 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0309 12:27:01.600076   46640 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0309 12:27:01.614401   46640 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[]}
I0309 12:27:01.614986   46640 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0309 12:27:01.690297   46640 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0309 12:27:01.709354   46640 kubeadm.go:335] found existing configuration files, will attempt cluster restart
I0309 12:27:01.709397   46640 kubeadm.go:527] restartCluster start
I0309 12:27:01.709506   46640 ssh_runner.go:148] Run: sudo test -d /data/minikube
I0309 12:27:01.727166   46640 kubeadm.go:122] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0309 12:27:01.728888   46640 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /home/vineet/.kube/config
I0309 12:27:01.729165   46640 kubeconfig.go:128] "minikube" context is missing from /home/vineet/.kube/config - will repair!
I0309 12:27:01.729873   46640 lock.go:36] WriteFile acquiring /home/vineet/.kube/config: {Name:mked609b8334433c2f320799e75acf3d151a607f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0309 12:27:01.734580   46640 ssh_runner.go:148] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0309 12:27:01.752363   46640 api_server.go:146] Checking apiserver status ...
I0309 12:27:01.752567   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
W0309 12:27:01.785235   46640 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I0309 12:27:01.785320   46640 kubeadm.go:506] needs reconfigure: apiserver in state Stopped
I0309 12:27:01.785349   46640 kubeadm.go:945] stopping kube-system containers ...
I0309 12:27:01.785478   46640 ssh_runner.go:148] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0309 12:27:01.879761   46640 docker.go:229] Stopping containers: [a1e3d5680c5a cda616c7252c b790751ef768 b67576f01862 1719a4165353 b18ed6e1c273 d8c09960690f 2ee2650622b1 0b4ff22eade2 51289fa2dc53 3a657c51d21c 982e66767e3e 4e2f336d2bc0 19d3773c3d20 92607aff0629 5c73a0ce5a17 2a3b0c389b58 b1b1bd5d84b4 114bdae2eb9a 99bc42a32b0c c9074a52fe0f 6f80b4ea4d21 edbb7b1dee41 1caa1c797b78 931dda362eda edbccb75a7f9 53a3785fad51]
I0309 12:27:01.879980   46640 ssh_runner.go:148] Run: docker stop a1e3d5680c5a cda616c7252c b790751ef768 b67576f01862 1719a4165353 b18ed6e1c273 d8c09960690f 2ee2650622b1 0b4ff22eade2 51289fa2dc53 3a657c51d21c 982e66767e3e 4e2f336d2bc0 19d3773c3d20 92607aff0629 5c73a0ce5a17 2a3b0c389b58 b1b1bd5d84b4 114bdae2eb9a 99bc42a32b0c c9074a52fe0f 6f80b4ea4d21 edbb7b1dee41 1caa1c797b78 931dda362eda edbccb75a7f9 53a3785fad51
I0309 12:27:01.961606   46640 ssh_runner.go:148] Run: sudo systemctl stop kubelet
I0309 12:27:01.987166   46640 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0309 12:27:02.003526   46640 kubeadm.go:150] found existing configuration files:
-rw------- 1 root root 5615 Dec  4 10:58 /etc/kubernetes/admin.conf
-rw------- 1 root root 5632 Mar  8 13:43 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Dec  4 10:59 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5576 Mar  8 13:43 /etc/kubernetes/scheduler.conf

I0309 12:27:02.003709   46640 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0309 12:27:02.022780   46640 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0309 12:27:02.038303   46640 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0309 12:27:02.052906   46640 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:
I0309 12:27:02.053039   46640 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0309 12:27:02.068045   46640 ssh_runner.go:148] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0309 12:27:02.082610   46640 kubeadm.go:161] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:
I0309 12:27:02.082845   46640 ssh_runner.go:148] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0309 12:27:02.099757   46640 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0309 12:27:02.119100   46640 kubeadm.go:603] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0309 12:27:02.119208   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:02.448469   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:04.347577   46640 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.899019145s)
I0309 12:27:04.347676   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:04.893687   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:05.307371   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:05.657826   46640 api_server.go:48] waiting for apiserver process to appear ...
I0309 12:27:05.657938   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:06.192590   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:06.692565   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:07.192563   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:07.692584   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:08.192461   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:08.692567   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:09.192631   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:09.692587   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:10.192684   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:10.692531   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:11.192544   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:11.692280   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:12.192189   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:12.692276   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:13.192280   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:13.692305   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:14.192316   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:14.692306   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:14.865822   46640 api_server.go:68] duration metric: took 9.207987122s to wait for apiserver process to appear ...
I0309 12:27:14.866449   46640 api_server.go:84] waiting for apiserver healthz status ...
I0309 12:27:14.866506   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:14.867242   46640 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
I0309 12:27:15.367661   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:26.892054   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W0309 12:27:26.892100   46640 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I0309 12:27:27.367730   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:27.386759   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0309 12:27:27.386834   46640 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0309 12:27:27.867733   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:27.890411   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0309 12:27:27.890587   46640 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0309 12:27:28.367648   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:28.385637   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0309 12:27:28.385754   46640 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0309 12:27:28.867663   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:28.884410   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
W0309 12:27:28.884500   46640 api_server.go:99] status: https://192.168.49.2:8443/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check failed
I0309 12:27:29.367692   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:29.397000   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
I0309 12:27:29.421109   46640 api_server.go:137] control plane version: v1.19.4
I0309 12:27:29.421155   46640 api_server.go:127] duration metric: took 14.554672213s to wait for apiserver health ...
I0309 12:27:29.421188   46640 cni.go:74] Creating CNI manager for ""
I0309 12:27:29.421213   46640 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I0309 12:27:29.421240   46640 system_pods.go:41] waiting for kube-system pods to appear ...
I0309 12:27:29.514043   46640 system_pods.go:57] 7 kube-system pods found
I0309 12:27:29.514175   46640 system_pods.go:59] "coredns-f9fd979d6-b96tv" [ce39a9cd-a11a-4034-bb11-e1c257970893] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0309 12:27:29.514245   46640 system_pods.go:59] "etcd-minikube" [0157ffa7-f6a3-4970-a274-d8900d83b8d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0309 12:27:29.514298   46640 system_pods.go:59] "kube-apiserver-minikube" [3af8a2d0-05a5-4501-bf3c-a281d9158ec9] Running
I0309 12:27:29.514343   46640 system_pods.go:59] "kube-controller-manager-minikube" [0bb2d5b8-7bcd-4507-9e4d-49dbb8353cc2] Running
I0309 12:27:29.514388   46640 system_pods.go:59] "kube-proxy-gtbfh" [c8fbec3c-e66a-4ecb-99de-a8431edd4d1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0309 12:27:29.514458   46640 system_pods.go:59] "kube-scheduler-minikube" [74636edb-336c-4eba-89bd-c249c9c75298] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0309 12:27:29.514534   46640 system_pods.go:59] "storage-provisioner" [a073288b-4463-469d-909e-88c6db91c5a6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I0309 12:27:29.514609   46640 system_pods.go:72] duration metric: took 93.341976ms to wait for pod list to return data ...
I0309 12:27:29.514674   46640 node_conditions.go:101] verifying NodePressure condition ...
I0309 12:27:29.526358   46640 node_conditions.go:121] node storage ephemeral capacity is 57410400Ki
I0309 12:27:29.526417   46640 node_conditions.go:122] node cpu capacity is 4
I0309 12:27:29.526451   46640 node_conditions.go:104] duration metric: took 11.738758ms to run NodePressure ...
I0309 12:27:29.526493   46640 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0309 12:27:30.831880   46640 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.305339574s)
I0309 12:27:30.831974   46640 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0309 12:27:30.883274   46640 ops.go:34] apiserver oom_adj: -16
I0309 12:27:30.883314   46640 kubeadm.go:531] restartCluster took 29.173892042s
I0309 12:27:30.883344   46640 kubeadm.go:326] StartCluster complete in 29.268961786s
I0309 12:27:30.883380   46640 settings.go:127] acquiring lock: {Name:mk2f3b028bca7768039589cb41709df8dd202a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0309 12:27:30.883654   46640 settings.go:135] Updating kubeconfig:  /home/vineet/.kube/config
I0309 12:27:30.884577   46640 lock.go:36] WriteFile acquiring /home/vineet/.kube/config: {Name:mked609b8334433c2f320799e75acf3d151a607f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0309 12:27:30.885019   46640 start.go:198] Will wait 6m0s for node up to 
I0309 12:27:30.886750   46640 out.go:110] πŸ”Ž  Verifying Kubernetes components...
πŸ”Ž  Verifying Kubernetes components...
I0309 12:27:30.885267   46640 addons.go:371] enableAddons start: toEnable=map[ambassador:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0309 12:27:30.885400   46640 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0309 12:27:30.886881   46640 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service kubelet
I0309 12:27:30.886904   46640 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0309 12:27:30.886951   46640 addons.go:131] Setting addon storage-provisioner=true in "minikube"
I0309 12:27:30.886967   46640 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0309 12:27:30.887002   46640 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "minikube"
W0309 12:27:30.886971   46640 addons.go:140] addon storage-provisioner should already be in state true
I0309 12:27:30.887095   46640 host.go:66] Checking if "minikube" exists ...
I0309 12:27:30.887751   46640 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0309 12:27:30.888351   46640 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0309 12:27:31.012057   46640 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0309 12:27:31.012111   46640 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0309 12:27:31.012200   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:27:31.050546   46640 addons.go:131] Setting addon default-storageclass=true in "minikube"
W0309 12:27:31.050603   46640 addons.go:140] addon default-storageclass should already be in state true
I0309 12:27:31.050651   46640 host.go:66] Checking if "minikube" exists ...
I0309 12:27:31.051510   46640 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I0309 12:27:31.131267   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:27:31.169798   46640 addons.go:243] installing /etc/kubernetes/addons/storageclass.yaml
I0309 12:27:31.169872   46640 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0309 12:27:31.170011   46640 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0309 12:27:31.265898   46640 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/vineet/.minikube/machines/minikube/id_rsa Username:docker}
I0309 12:27:31.322601   46640 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0309 12:27:31.378148   46640 start.go:553] successfully scaled coredns replicas to 1
I0309 12:27:31.378199   46640 api_server.go:48] waiting for apiserver process to appear ...
I0309 12:27:31.378295   46640 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0309 12:27:31.469688   46640 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0309 12:27:32.491489   46640 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.168826041s)
I0309 12:27:32.491692   46640 ssh_runner.go:188] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.113368559s)
I0309 12:27:32.491737   46640 api_server.go:68] duration metric: took 1.606680497s to wait for apiserver process to appear ...
I0309 12:27:32.491762   46640 api_server.go:84] waiting for apiserver healthz status ...
I0309 12:27:32.491785   46640 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0309 12:27:32.491794   46640 ssh_runner.go:188] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.022003906s)
I0309 12:27:32.495386   46640 out.go:110] 🌟  Enabled addons: storage-provisioner, default-storageclass
🌟  Enabled addons: storage-provisioner, default-storageclass
I0309 12:27:32.495457   46640 addons.go:373] enableAddons completed in 1.610202367s
I0309 12:27:32.506642   46640 api_server.go:241] https://192.168.49.2:8443/healthz returned 200:
ok
I0309 12:27:32.507904   46640 api_server.go:137] control plane version: v1.19.4
I0309 12:27:32.507961   46640 api_server.go:127] duration metric: took 16.178024ms to wait for apiserver health ...
I0309 12:27:32.508013   46640 system_pods.go:41] waiting for kube-system pods to appear ...
I0309 12:27:32.521375   46640 system_pods.go:57] 7 kube-system pods found
I0309 12:27:32.521450   46640 system_pods.go:59] "coredns-f9fd979d6-b96tv" [ce39a9cd-a11a-4034-bb11-e1c257970893] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0309 12:27:32.521503   46640 system_pods.go:59] "etcd-minikube" [0157ffa7-f6a3-4970-a274-d8900d83b8d5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0309 12:27:32.521545   46640 system_pods.go:59] "kube-apiserver-minikube" [3af8a2d0-05a5-4501-bf3c-a281d9158ec9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0309 12:27:32.521585   46640 system_pods.go:59] "kube-controller-manager-minikube" [0bb2d5b8-7bcd-4507-9e4d-49dbb8353cc2] Running
I0309 12:27:32.521625   46640 system_pods.go:59] "kube-proxy-gtbfh" [c8fbec3c-e66a-4ecb-99de-a8431edd4d1e] Running
I0309 12:27:32.521665   46640 system_pods.go:59] "kube-scheduler-minikube" [74636edb-336c-4eba-89bd-c249c9c75298] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0309 12:27:32.521706   46640 system_pods.go:59] "storage-provisioner" [a073288b-4463-469d-909e-88c6db91c5a6] Running
I0309 12:27:32.521745   46640 system_pods.go:72] duration metric: took 13.707216ms to wait for pod list to return data ...
I0309 12:27:32.521781   46640 kubeadm.go:474] duration metric: took 1.636723246s to wait for : map[apiserver:true system_pods:true] ...
I0309 12:27:32.521836   46640 node_conditions.go:101] verifying NodePressure condition ...
I0309 12:27:32.527943   46640 node_conditions.go:121] node storage ephemeral capacity is 57410400Ki
I0309 12:27:32.528006   46640 node_conditions.go:122] node cpu capacity is 4
I0309 12:27:32.528049   46640 node_conditions.go:104] duration metric: took 6.177482ms to run NodePressure ...
I0309 12:27:32.528084   46640 start.go:203] waiting for startup goroutines ...
I0309 12:27:32.644538   46640 start.go:461] kubectl: 1.19.3, cluster: 1.19.4 (minor skew: 0)
I0309 12:27:32.646518   46640 out.go:110] πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Docker Info

vineet@vineet-laptop:~$ minikube ssh
docker@minikube:~$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 28
  Running: 14
  Paused: 0
  Stopped: 14
 Images: 10
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.0-48-generic
 Operating System: Ubuntu 20.04.1 LTS (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.665GiB
 Name: minikube
 ID: 63AM:Z53V:4ZSD:3TMN:NVW7:P34T:XJUL:Q6RE:YNO4:ZCRC:AHWL:2CYK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=docker
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

afbjorklund commented 3 years ago

I think it is only applied at start, so you can't change it for a running cluster. For a running cluster you would have to edit docker.service (the ExecStart line).
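
Something like this might also work on a running node (untested sketch — it assumes the node's daemon reads /etc/docker/daemon.json and that no such file exists yet; merge rather than overwrite if it does, since a log-driver set both there and on the ExecStart line would conflict, and restarting Docker briefly disrupts the cluster):

$ minikube ssh
docker@minikube:~$ echo '{"log-driver": "fluentd"}' | sudo tee /etc/docker/daemon.json
docker@minikube:~$ sudo systemctl restart docker
docker@minikube:~$ docker info --format '{{.LoggingDriver}}'   # should print fluentd if the change took effect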

vineetbhardwaj80 commented 3 years ago

Thanks. I could not find the docker.service file used to configure the Docker daemon in minikube. Could you kindly suggest where it is?

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 3 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

medyagh commented 3 years ago

@vineetbhardwaj80 the docker service file can be found using:

minikube ssh sudo systemctl status docker

/lib/systemd/system/docker.service
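
From there, something like this might work (a sketch, not verified here; restarting Docker will briefly take the cluster components down):

docker@minikube:~$ sudo vi /lib/systemd/system/docker.service   # append --log-driver=fluentd to the ExecStart line
docker@minikube:~$ sudo systemctl daemon-reload
docker@minikube:~$ sudo systemctl restart docker
docker@minikube:~$ docker info --format '{{.LoggingDriver}}'    # verify the new driver is active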

Does that help?