kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

error message kubectl info: exec: exit status 1 #9455

Status: Closed (darkn3rd closed this issue 3 years ago)

darkn3rd commented 3 years ago

Steps to reproduce the issue:

  1. On Ubuntu 20.04.1, stop and disable AppArmor: sudo systemctl stop apparmor && sudo systemctl disable apparmor
  2. Run minikube start, which logs: E1012 17:04:57.003973 100058 start.go:240] kubectl info: exec: exit status 1
  3. Run kubectl get pods, which fails with: F1012 17:05:28.971138 100368 get.go:158] Get http://localhost:8080/api/v1beta1/pods?namespace=default: dial tcp 127.0.0.1:8080: connection refused

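A note on the two errors in steps 2 and 3: "kubectl info: exec: exit status 1" is minikube failing to run the kubectl binary it finds on PATH, and the request to http://localhost:8080/api/v1beta1/pods suggests that binary is very old (the v1beta1 pods API predates Kubernetes 1.0) and has no kubeconfig in effect. A minimal way to check, assuming a standard shell and that minikube has written its usual "minikube" context to ~/.kube/config:

  # which kubectl is actually first on PATH, and what client version is it?
  which -a kubectl
  kubectl version --client

  # a 127.0.0.1:8080 "connection refused" usually means no kubeconfig/context
  # is in effect; minikube normally creates and selects a "minikube" context
  kubectl config current-context
  kubectl config use-context minikube

If which -a reports an unexpected kubectl (for example under /usr/local/bin or a language toolchain's bin directory), removing or reordering it so a current kubectl wins would be the likely fix.
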
Full output of failed command:

I1012 17:10:56.709568  101069 out.go:191] Setting JSON to false
I1012 17:10:56.730604  101069 start.go:102] hostinfo: {"hostname":"NUC7i5DNHE","uptime":140982,"bootTime":1602406874,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-48-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"3723d2d3-ad5d-4b0f-b111-66951c90ae68"}
I1012 17:10:56.731053  101069 start.go:112] virtualization: kvm host
I1012 17:10:56.748362  101069 out.go:109] 😄  minikube v1.13.1 on Ubuntu 20.04
😄  minikube v1.13.1 on Ubuntu 20.04
I1012 17:10:56.749356  101069 driver.go:287] Setting default libvirt URI to qemu:///system
I1012 17:10:56.749453  101069 global.go:102] Querying for installed drivers using PATH=/home/joaquin/.minikube/bin:/home/joaquin/.nvm/versions/node/v14.13.1/bin:/home/joaquin/.rbenv/shims:/home/joaquin/.rbenv/bin:/home/joaquin/.pyenv/plugins/pyenv-virtualenv/shims:/home/joaquin/.pyenv/shims:/home/joaquin/.pyenv/bin:/home/joaquin/.pyenv/plugins/pyenv-virtualenv/shims:/home/joaquin/.pyenv/shims:/home/joaquin/.pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I1012 17:10:56.749787  101069 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I1012 17:10:56.750000  101069 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
W1012 17:10:56.804995  101069 docker.go:102] docker returned error: exit status 1
I1012 17:10:56.805223  101069 global.go:110] docker priority: 8, state: {Installed:true Healthy:false Running:true NeedsImprovement:false Error:"docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied Fix:Add your user to the 'docker' group: 'sudo usermod -aG docker $USER && newgrp docker' Doc:https://docs.docker.com/engine/install/linux-postinstall/}
I1012 17:10:56.831797  101069 global.go:110] kvm2 priority: 7, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1012 17:10:56.831904  101069 global.go:110] none priority: 3, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:the 'none' driver must be run as the root user Fix:For non-root usage, try the newer 'docker' driver Doc:}
I1012 17:10:56.831959  101069 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I1012 17:10:56.831980  101069 driver.go:235] not recommending "docker" due to health: "docker version --format {{.Server.Os}}-{{.Server.Version}}" exit status 1: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/version: dial unix /var/run/docker.sock: connect: permission denied
I1012 17:10:56.831998  101069 driver.go:235] not recommending "none" due to health: the 'none' driver must be run as the root user
I1012 17:10:56.832095  101069 driver.go:269] Picked: kvm2
I1012 17:10:56.832109  101069 driver.go:270] Alternatives: []
I1012 17:10:56.832115  101069 driver.go:271] Rejects: [virtualbox vmware docker none podman]
I1012 17:10:56.847076  101069 out.go:109] ✨  Automatically selected the kvm2 driver
✨  Automatically selected the kvm2 driver
I1012 17:10:56.847121  101069 start.go:246] selected driver: kvm2
I1012 17:10:56.847135  101069 start.go:653] validating driver "kvm2" against <nil>
I1012 17:10:56.847167  101069 start.go:664] status for kvm2: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1012 17:10:56.847221  101069 install.go:50] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1012 17:10:56.847385  101069 install.go:116] Validating docker-machine-driver-kvm2, PATH=/home/joaquin/.minikube/bin:/home/joaquin/.nvm/versions/node/v14.13.1/bin:/home/joaquin/.rbenv/shims:/home/joaquin/.rbenv/bin:/home/joaquin/.pyenv/plugins/pyenv-virtualenv/shims:/home/joaquin/.pyenv/shims:/home/joaquin/.pyenv/bin:/home/joaquin/.pyenv/plugins/pyenv-virtualenv/shims:/home/joaquin/.pyenv/shims:/home/joaquin/.pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I1012 17:10:56.860348  101069 start_flags.go:224] no existing cluster config was found, will generate one from the flags 
I1012 17:10:56.860730  101069 start_flags.go:242] Using suggested 6000MB memory alloc based on sys=31868MB, container=0MB
I1012 17:10:56.860836  101069 start_flags.go:617] Wait components to verify : map[apiserver:true system_pods:true]
I1012 17:10:56.860860  101069 cni.go:74] Creating CNI manager for ""
I1012 17:10:56.860868  101069 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1012 17:10:56.860876  101069 start_flags.go:348] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I1012 17:10:56.860939  101069 iso.go:119] acquiring lock: {Name:mk600d8074995300ca7824add4d8f66f3ad473c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1012 17:10:56.866848  101069 out.go:109] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I1012 17:10:56.866879  101069 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1012 17:10:56.866912  101069 preload.go:105] Found local preload: /home/joaquin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1012 17:10:56.866920  101069 cache.go:53] Caching tarball of preloaded images
I1012 17:10:56.866948  101069 preload.go:131] Found /home/joaquin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1012 17:10:56.866957  101069 cache.go:56] Finished verifying existence of preloaded tar for  v1.19.2 on docker
I1012 17:10:56.867162  101069 profile.go:150] Saving config to /home/joaquin/.minikube/profiles/minikube/config.json ...
I1012 17:10:56.867185  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/config.json: {Name:mk804ea66876a37fe3651f8607271b9f35d7b1fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:10:56.867328  101069 cache.go:182] Successfully downloaded all kic artifacts
I1012 17:10:56.867351  101069 start.go:314] acquiring machines lock for minikube: {Name:mkcb627a3c264cf4db7a548ce0875f16ef9be94c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1012 17:10:56.867408  101069 start.go:318] acquired machines lock for "minikube" in 47.121µs
I1012 17:10:56.867423  101069 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.13.1.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}
I1012 17:10:56.867470  101069 start.go:127] createHost starting for "" (driver="kvm2")
I1012 17:10:56.873356  101069 out.go:109] 🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
I1012 17:10:56.873508  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:10:56.873558  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:10:56.885601  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:45029
I1012 17:10:56.885892  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:10:56.886366  101069 main.go:115] libmachine: Using API Version  1
I1012 17:10:56.886385  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:10:56.886619  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:10:56.886726  101069 main.go:115] libmachine: (minikube) Calling .GetMachineName
I1012 17:10:56.886805  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:10:56.886894  101069 start.go:164] libmachine.API.Create for "minikube" (driver="kvm2")
I1012 17:10:56.886920  101069 client.go:165] LocalClient.Create starting
I1012 17:10:56.886948  101069 main.go:115] libmachine: Reading certificate data from /home/joaquin/.minikube/certs/ca.pem
I1012 17:10:56.886978  101069 main.go:115] libmachine: Decoding PEM data...
I1012 17:10:56.886995  101069 main.go:115] libmachine: Parsing certificate...
I1012 17:10:56.887111  101069 main.go:115] libmachine: Reading certificate data from /home/joaquin/.minikube/certs/cert.pem
I1012 17:10:56.887133  101069 main.go:115] libmachine: Decoding PEM data...
I1012 17:10:56.887149  101069 main.go:115] libmachine: Parsing certificate...
I1012 17:10:56.887192  101069 main.go:115] libmachine: Running pre-create checks...
I1012 17:10:56.887203  101069 main.go:115] libmachine: (minikube) Calling .PreCreateCheck
I1012 17:10:56.887415  101069 main.go:115] libmachine: (minikube) Calling .GetConfigRaw
I1012 17:10:56.887670  101069 main.go:115] libmachine: Creating machine...
I1012 17:10:56.887683  101069 main.go:115] libmachine: (minikube) Calling .Create
I1012 17:10:56.887757  101069 main.go:115] libmachine: (minikube) Creating KVM machine...
I1012 17:10:57.049206  101069 main.go:115] libmachine: (minikube) Setting up store path in /home/joaquin/.minikube/machines/minikube ...
I1012 17:10:57.049301  101069 main.go:115] libmachine: (minikube) Building disk image from file:///home/joaquin/.minikube/cache/iso/minikube-v1.13.1.iso
I1012 17:10:57.049461  101069 main.go:115] libmachine: (minikube) DBG | ERROR: logging before flag.Parse: I1012 17:10:57.049341  101098 common.go:99] Making disk image using store path: /home/joaquin/.minikube
I1012 17:10:57.049614  101069 main.go:115] libmachine: (minikube) Downloading /home/joaquin/.minikube/cache/boot2docker.iso from file:///home/joaquin/.minikube/cache/iso/minikube-v1.13.1.iso...
I1012 17:10:57.167939  101069 main.go:115] libmachine: (minikube) DBG | ERROR: logging before flag.Parse: I1012 17:10:57.167864  101098 common.go:106] Creating ssh key: /home/joaquin/.minikube/machines/minikube/id_rsa...
I1012 17:10:57.241802  101069 main.go:115] libmachine: (minikube) DBG | ERROR: logging before flag.Parse: I1012 17:10:57.241702  101098 common.go:112] Creating raw disk image: /home/joaquin/.minikube/machines/minikube/minikube.rawdisk...
I1012 17:10:57.241839  101069 main.go:115] libmachine: (minikube) DBG | Writing magic tar header
I1012 17:10:57.241856  101069 main.go:115] libmachine: (minikube) DBG | Writing SSH key tar header
I1012 17:10:57.241869  101069 main.go:115] libmachine: (minikube) DBG | ERROR: logging before flag.Parse: I1012 17:10:57.241801  101098 common.go:126] Fixing permissions on /home/joaquin/.minikube/machines/minikube ...
I1012 17:10:57.241889  101069 main.go:115] libmachine: (minikube) DBG | Checking permissions on dir: /home/joaquin/.minikube/machines/minikube
I1012 17:10:57.241919  101069 main.go:115] libmachine: (minikube) Setting executable bit set on /home/joaquin/.minikube/machines/minikube (perms=drwx------)
I1012 17:10:57.241946  101069 main.go:115] libmachine: (minikube) Setting executable bit set on /home/joaquin/.minikube/machines (perms=drwxrwxr-x)
I1012 17:10:57.241956  101069 main.go:115] libmachine: (minikube) DBG | Checking permissions on dir: /home/joaquin/.minikube/machines
I1012 17:10:57.241972  101069 main.go:115] libmachine: (minikube) DBG | Checking permissions on dir: /home/joaquin/.minikube
I1012 17:10:57.241989  101069 main.go:115] libmachine: (minikube) DBG | Checking permissions on dir: /home/joaquin
I1012 17:10:57.242000  101069 main.go:115] libmachine: (minikube) DBG | Checking permissions on dir: /home
I1012 17:10:57.242010  101069 main.go:115] libmachine: (minikube) DBG | Skipping /home - not owner
I1012 17:10:57.242032  101069 main.go:115] libmachine: (minikube) Setting executable bit set on /home/joaquin/.minikube (perms=drwxrwxr-x)
I1012 17:10:57.242051  101069 main.go:115] libmachine: (minikube) Setting executable bit set on /home/joaquin (perms=drwxr-xr-x)
I1012 17:10:57.242068  101069 main.go:115] libmachine: (minikube) Creating domain...
I1012 17:10:57.249440  101069 main.go:115] libmachine: (minikube) Creating network...
I1012 17:10:57.251291  101069 main.go:115] libmachine: (minikube) Ensuring networks are active...
I1012 17:10:57.254279  101069 main.go:115] libmachine: (minikube) Ensuring network default is active
I1012 17:10:57.255627  101069 main.go:115] libmachine: (minikube) Ensuring network minikube-net is active
I1012 17:10:57.258424  101069 main.go:115] libmachine: (minikube) Getting domain xml...
I1012 17:10:57.261339  101069 main.go:115] libmachine: (minikube) Creating domain...
I1012 17:10:58.568237  101069 main.go:115] libmachine: (minikube) Waiting to get IP...
I1012 17:10:58.571341  101069 main.go:115] libmachine: (minikube) DBG | Waiting for machine to come up 0/40
I1012 17:11:01.573253  101069 main.go:115] libmachine: (minikube) DBG | Waiting for machine to come up 1/40
I1012 17:11:04.579969  101069 main.go:115] libmachine: (minikube) DBG | Waiting for machine to come up 2/40
I1012 17:11:07.586688  101069 main.go:115] libmachine: (minikube) Found IP for machine: 192.168.39.64
I1012 17:11:07.586769  101069 main.go:115] libmachine: (minikube) Waiting for SSH to be available...
I1012 17:11:07.586804  101069 main.go:115] libmachine: (minikube) DBG | Getting to WaitForSSH function...
I1012 17:11:07.594600  101069 main.go:115] libmachine: (minikube) DBG | Using SSH client type: external
I1012 17:11:07.594712  101069 main.go:115] libmachine: (minikube) DBG | Using SSH private key: /home/joaquin/.minikube/machines/minikube/id_rsa (-rw-------)
I1012 17:11:07.594875  101069 main.go:115] libmachine: (minikube) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.64 -o IdentitiesOnly=yes -i /home/joaquin/.minikube/machines/minikube/id_rsa -p 22] /usr/bin/ssh <nil>}
I1012 17:11:07.594932  101069 main.go:115] libmachine: (minikube) DBG | About to run SSH command:
I1012 17:11:07.594993  101069 main.go:115] libmachine: (minikube) DBG | exit 0
I1012 17:11:07.750767  101069 main.go:115] libmachine: (minikube) DBG | SSH cmd err, output: <nil>: 
I1012 17:11:07.752463  101069 main.go:115] libmachine: (minikube) KVM machine creation complete!
I1012 17:11:07.752831  101069 main.go:115] libmachine: (minikube) Calling .GetConfigRaw
I1012 17:11:07.754199  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:07.754661  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:07.755065  101069 main.go:115] libmachine: Waiting for machine to be running, this may take a few minutes...
I1012 17:11:07.755117  101069 main.go:115] libmachine: (minikube) Calling .GetState
I1012 17:11:07.758498  101069 main.go:115] libmachine: Detecting operating system of created instance...
I1012 17:11:07.758573  101069 main.go:115] libmachine: Waiting for SSH to be available...
I1012 17:11:07.758609  101069 main.go:115] libmachine: Getting to WaitForSSH function...
I1012 17:11:07.758657  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:07.764424  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:07.765533  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:07.765940  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:07.766495  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:07.767102  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:07.767523  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:07.767570  101069 main.go:115] libmachine: About to run SSH command:
exit 0
I1012 17:11:07.916377  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I1012 17:11:07.916452  101069 main.go:115] libmachine: Detecting the provisioner...
I1012 17:11:07.916523  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:07.924286  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:07.924761  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:07.925164  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:07.925578  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:07.926021  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:07.926499  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:07.926552  101069 main.go:115] libmachine: About to run SSH command:
cat /etc/os-release
I1012 17:11:08.076721  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2019.02.11
ID=buildroot
VERSION_ID=2019.02.11
PRETTY_NAME="Buildroot 2019.02.11"

I1012 17:11:08.076871  101069 main.go:115] libmachine: found compatible host: buildroot
I1012 17:11:08.076913  101069 main.go:115] libmachine: Provisioning with buildroot...
I1012 17:11:08.076955  101069 main.go:115] libmachine: (minikube) Calling .GetMachineName
I1012 17:11:08.077390  101069 buildroot.go:163] provisioning hostname "minikube"
I1012 17:11:08.077559  101069 main.go:115] libmachine: (minikube) Calling .GetMachineName
I1012 17:11:08.078008  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:08.082629  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:08.082971  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.083254  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.083516  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:08.083888  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:08.084215  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:08.084263  101069 main.go:115] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1012 17:11:08.258647  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: minikube

I1012 17:11:08.258734  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:08.265876  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:08.266159  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.266480  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.266720  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:08.267017  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:08.267365  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:08.267436  101069 main.go:115] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I1012 17:11:08.399351  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: 
I1012 17:11:08.399465  101069 buildroot.go:169] set auth options {CertDir:/home/joaquin/.minikube CaCertPath:/home/joaquin/.minikube/certs/ca.pem CaPrivateKeyPath:/home/joaquin/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/joaquin/.minikube/machines/server.pem ServerKeyPath:/home/joaquin/.minikube/machines/server-key.pem ClientKeyPath:/home/joaquin/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/joaquin/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/joaquin/.minikube}
I1012 17:11:08.399606  101069 buildroot.go:171] setting up certificates
I1012 17:11:08.399643  101069 provision.go:82] configureAuth start
I1012 17:11:08.399693  101069 main.go:115] libmachine: (minikube) Calling .GetMachineName
I1012 17:11:08.400239  101069 main.go:115] libmachine: (minikube) Calling .GetIP
I1012 17:11:08.407156  101069 provision.go:131] copyHostCerts
I1012 17:11:08.407293  101069 exec_runner.go:91] found /home/joaquin/.minikube/ca.pem, removing ...
I1012 17:11:08.407478  101069 exec_runner.go:98] cp: /home/joaquin/.minikube/certs/ca.pem --> /home/joaquin/.minikube/ca.pem (1038 bytes)
I1012 17:11:08.407805  101069 exec_runner.go:91] found /home/joaquin/.minikube/cert.pem, removing ...
I1012 17:11:08.407955  101069 exec_runner.go:98] cp: /home/joaquin/.minikube/certs/cert.pem --> /home/joaquin/.minikube/cert.pem (1078 bytes)
I1012 17:11:08.408263  101069 exec_runner.go:91] found /home/joaquin/.minikube/key.pem, removing ...
I1012 17:11:08.408371  101069 exec_runner.go:98] cp: /home/joaquin/.minikube/certs/key.pem --> /home/joaquin/.minikube/key.pem (1675 bytes)
I1012 17:11:08.408594  101069 provision.go:105] generating server cert: /home/joaquin/.minikube/machines/server.pem ca-key=/home/joaquin/.minikube/certs/ca.pem private-key=/home/joaquin/.minikube/certs/ca-key.pem org=joaquin.minikube san=[192.168.39.64 localhost 127.0.0.1 minikube minikube]
I1012 17:11:08.597301  101069 provision.go:159] copyRemoteCerts
I1012 17:11:08.597363  101069 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1012 17:11:08.597387  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:08.599046  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:08.599155  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.599245  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:08.599330  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:08.681606  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1012 17:11:08.692631  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1038 bytes)
I1012 17:11:08.714136  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/machines/server.pem --> /etc/docker/server.pem (1147 bytes)
I1012 17:11:08.735038  101069 provision.go:85] duration metric: configureAuth took 335.366822ms
I1012 17:11:08.735061  101069 buildroot.go:186] setting minikube options for container-runtime
I1012 17:11:08.735265  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:08.735557  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:08.737579  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:08.737680  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.737777  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.737853  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:08.737982  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:08.738109  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:08.738122  101069 main.go:115] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1012 17:11:08.871120  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: tmpfs

I1012 17:11:08.871173  101069 buildroot.go:70] root file system type: tmpfs
I1012 17:11:08.871567  101069 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1012 17:11:08.871627  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:08.876724  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:08.876935  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.877163  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:08.877367  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:08.877587  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:08.877839  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:08.878066  101069 main.go:115] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1012 17:11:09.031077  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1012 17:11:09.031169  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:09.033938  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:09.034139  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:09.034282  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:09.034413  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:09.034596  101069 main.go:115] libmachine: Using SSH client type: native
I1012 17:11:09.034798  101069 main.go:115] libmachine: &{{{<nil> 0 [] [] []} docker [0x7b9850] 0x7b9820 <nil>  [] 0s} 192.168.39.64 22 <nil> <nil>}
I1012 17:11:09.034829  101069 main.go:115] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1012 17:11:09.788561  101069 main.go:115] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I1012 17:11:09.788606  101069 main.go:115] libmachine: Checking connection to Docker...
I1012 17:11:09.788627  101069 main.go:115] libmachine: (minikube) Calling .GetURL
I1012 17:11:09.789913  101069 main.go:115] libmachine: (minikube) DBG | Using libvirt version 6000000
I1012 17:11:09.819035  101069 main.go:115] libmachine: Docker is up and running!
I1012 17:11:09.819057  101069 main.go:115] libmachine: Reticulating splines...
I1012 17:11:09.819066  101069 client.go:168] LocalClient.Create took 12.932138913s
I1012 17:11:09.819089  101069 start.go:172] duration metric: libmachine.API.Create for "minikube" took 12.932192944s
I1012 17:11:09.819102  101069 start.go:268] post-start starting for "minikube" (driver="kvm2")
I1012 17:11:09.819111  101069 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1012 17:11:09.819129  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:09.819310  101069 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1012 17:11:09.819342  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:09.821052  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:09.821168  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:09.821289  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:09.821394  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:09.913989  101069 ssh_runner.go:148] Run: cat /etc/os-release
I1012 17:11:09.922125  101069 info.go:100] Remote host: Buildroot 2019.02.11
I1012 17:11:09.922196  101069 filesync.go:118] Scanning /home/joaquin/.minikube/addons for local assets ...
I1012 17:11:09.922333  101069 filesync.go:118] Scanning /home/joaquin/.minikube/files for local assets ...
I1012 17:11:09.922404  101069 start.go:271] post-start completed in 103.288552ms
I1012 17:11:09.922521  101069 main.go:115] libmachine: (minikube) Calling .GetConfigRaw
I1012 17:11:09.923477  101069 main.go:115] libmachine: (minikube) Calling .GetIP
I1012 17:11:09.927747  101069 profile.go:150] Saving config to /home/joaquin/.minikube/profiles/minikube/config.json ...
I1012 17:11:09.928131  101069 start.go:130] duration metric: createHost completed in 13.060646398s
I1012 17:11:09.928166  101069 start.go:81] releasing machines lock for "minikube", held for 13.060744672s
I1012 17:11:09.928240  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:09.928502  101069 main.go:115] libmachine: (minikube) Calling .GetIP
I1012 17:11:09.931982  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:09.932217  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:09.932762  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:09.933142  101069 ssh_runner.go:148] Run: systemctl --version
I1012 17:11:09.933176  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:09.933210  101069 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I1012 17:11:09.933326  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:09.937919  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:09.938225  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:09.938560  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:09.938570  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:09.938749  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:09.938921  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:09.939223  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:09.939428  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:10.176723  101069 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1012 17:11:10.176860  101069 preload.go:105] Found local preload: /home/joaquin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1012 17:11:10.177355  101069 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1012 17:11:10.250685  101069 docker.go:381] Got preloaded images: 
I1012 17:11:10.250717  101069 docker.go:386] k8s.gcr.io/kube-proxy:v1.19.2 wasn't preloaded
I1012 17:11:10.250820  101069 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1012 17:11:10.257467  101069 ssh_runner.go:148] Run: which lz4
I1012 17:11:10.260723  101069 ssh_runner.go:148] Run: stat -c "%s %y" /preloaded.tar.lz4
I1012 17:11:10.266369  101069 ssh_runner.go:205] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:

stderr:
stat: cannot stat '/preloaded.tar.lz4': No such file or directory
I1012 17:11:10.266417  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (509982990 bytes)
I1012 17:11:11.375602  101069 docker.go:347] Took 1.114934 seconds to copy over tarball
I1012 17:11:11.375673  101069 ssh_runner.go:148] Run: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4
I1012 17:11:14.135192  101069 ssh_runner.go:188] Completed: sudo tar -I lz4 -C /var -xvf /preloaded.tar.lz4: (2.759491868s)
I1012 17:11:14.135217  101069 ssh_runner.go:99] rm: /preloaded.tar.lz4
I1012 17:11:14.168533  101069 ssh_runner.go:148] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
I1012 17:11:14.173254  101069 ssh_runner.go:215] scp memory --> /var/lib/docker/image/overlay2/repositories.json (3125 bytes)
I1012 17:11:14.180965  101069 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1012 17:11:14.260806  101069 ssh_runner.go:148] Run: sudo systemctl restart docker
I1012 17:11:16.066331  101069 ssh_runner.go:188] Completed: sudo systemctl restart docker: (1.805437288s)
I1012 17:11:16.066759  101069 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1012 17:11:16.095267  101069 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1012 17:11:16.106935  101069 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1012 17:11:16.119514  101069 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I1012 17:11:16.130380  101069 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1012 17:11:16.221389  101069 ssh_runner.go:148] Run: sudo systemctl start docker
I1012 17:11:16.228515  101069 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I1012 17:11:16.266942  101069 out.go:109] 🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
I1012 17:11:16.267040  101069 ssh_runner.go:148] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1012 17:11:16.269603  101069 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.39.1    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1012 17:11:16.276203  101069 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1012 17:11:16.276232  101069 preload.go:105] Found local preload: /home/joaquin/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1012 17:11:16.276298  101069 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1012 17:11:16.303889  101069 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1012 17:11:16.303917  101069 docker.go:319] Images already preloaded, skipping extraction
I1012 17:11:16.303985  101069 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1012 17:11:16.328775  101069 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1012 17:11:16.328807  101069 cache_images.go:74] Images are preloaded, skipping loading
I1012 17:11:16.328885  101069 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I1012 17:11:16.360997  101069 cni.go:74] Creating CNI manager for ""
I1012 17:11:16.361023  101069 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1012 17:11:16.361037  101069 kubeadm.go:84] Using pod CIDR: 
I1012 17:11:16.361056  101069 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.39.64 APIServerPort:8443 KubernetesVersion:v1.19.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.64"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.39.64 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1012 17:11:16.361148  101069 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.64
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.39.64
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.64"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.39.64:10249

I1012 17:11:16.361250  101069 kubeadm.go:805] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.64

[Install]
 config:
{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1012 17:11:16.361310  101069 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.2
I1012 17:11:16.367193  101069 binaries.go:43] Found k8s binaries, skipping transfer
I1012 17:11:16.367279  101069 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1012 17:11:16.372575  101069 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (335 bytes)
I1012 17:11:16.381486  101069 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I1012 17:11:16.388311  101069 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1790 bytes)
I1012 17:11:16.395885  101069 ssh_runner.go:148] Run: grep 192.168.39.64    control-plane.minikube.internal$ /etc/hosts
I1012 17:11:16.398573  101069 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.39.64  control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1012 17:11:16.407345  101069 certs.go:52] Setting up /home/joaquin/.minikube/profiles/minikube for IP: 192.168.39.64
I1012 17:11:16.407400  101069 certs.go:169] skipping minikubeCA CA generation: /home/joaquin/.minikube/ca.key
I1012 17:11:16.407420  101069 certs.go:169] skipping proxyClientCA CA generation: /home/joaquin/.minikube/proxy-client-ca.key
I1012 17:11:16.407474  101069 certs.go:273] generating minikube-user signed cert: /home/joaquin/.minikube/profiles/minikube/client.key
I1012 17:11:16.407481  101069 crypto.go:69] Generating cert /home/joaquin/.minikube/profiles/minikube/client.crt with IP's: []
I1012 17:11:16.551144  101069 crypto.go:157] Writing cert to /home/joaquin/.minikube/profiles/minikube/client.crt ...
I1012 17:11:16.551172  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/client.crt: {Name:mk1eb371fc96ea5590156bd0ecd68a549109dae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.551319  101069 crypto.go:165] Writing key to /home/joaquin/.minikube/profiles/minikube/client.key ...
I1012 17:11:16.551332  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/client.key: {Name:mk8fc6ff2697853fbaed9831653b9c0074581ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.551413  101069 certs.go:273] generating minikube signed cert: /home/joaquin/.minikube/profiles/minikube/apiserver.key.b878b390
I1012 17:11:16.551421  101069 crypto.go:69] Generating cert /home/joaquin/.minikube/profiles/minikube/apiserver.crt.b878b390 with IP's: [192.168.39.64 10.96.0.1 127.0.0.1 10.0.0.1]
I1012 17:11:16.735733  101069 crypto.go:157] Writing cert to /home/joaquin/.minikube/profiles/minikube/apiserver.crt.b878b390 ...
I1012 17:11:16.735759  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/apiserver.crt.b878b390: {Name:mkbd265c35a272a5e5220285ce142a6ffe07f9a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.735913  101069 crypto.go:165] Writing key to /home/joaquin/.minikube/profiles/minikube/apiserver.key.b878b390 ...
I1012 17:11:16.735925  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/apiserver.key.b878b390: {Name:mk4b6b828708b539c7f48bfe5373a01b46ebd209 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.736011  101069 certs.go:284] copying /home/joaquin/.minikube/profiles/minikube/apiserver.crt.b878b390 -> /home/joaquin/.minikube/profiles/minikube/apiserver.crt
I1012 17:11:16.736062  101069 certs.go:288] copying /home/joaquin/.minikube/profiles/minikube/apiserver.key.b878b390 -> /home/joaquin/.minikube/profiles/minikube/apiserver.key
I1012 17:11:16.736111  101069 certs.go:273] generating aggregator signed cert: /home/joaquin/.minikube/profiles/minikube/proxy-client.key
I1012 17:11:16.736120  101069 crypto.go:69] Generating cert /home/joaquin/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1012 17:11:16.896329  101069 crypto.go:157] Writing cert to /home/joaquin/.minikube/profiles/minikube/proxy-client.crt ...
I1012 17:11:16.896352  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/proxy-client.crt: {Name:mk1dbd712be0f5797aaa6d6956a1bde00c8ad0d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.896486  101069 crypto.go:165] Writing key to /home/joaquin/.minikube/profiles/minikube/proxy-client.key ...
I1012 17:11:16.896499  101069 lock.go:35] WriteFile acquiring /home/joaquin/.minikube/profiles/minikube/proxy-client.key: {Name:mk3913bfa6f7b2a27ce95e86f6fa875e90328aa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:16.896641  101069 certs.go:348] found cert: /home/joaquin/.minikube/certs/home/joaquin/.minikube/certs/ca-key.pem (1679 bytes)
I1012 17:11:16.896677  101069 certs.go:348] found cert: /home/joaquin/.minikube/certs/home/joaquin/.minikube/certs/ca.pem (1038 bytes)
I1012 17:11:16.896708  101069 certs.go:348] found cert: /home/joaquin/.minikube/certs/home/joaquin/.minikube/certs/cert.pem (1078 bytes)
I1012 17:11:16.896734  101069 certs.go:348] found cert: /home/joaquin/.minikube/certs/home/joaquin/.minikube/certs/key.pem (1675 bytes)
I1012 17:11:16.897350  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1350 bytes)
I1012 17:11:16.907746  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1012 17:11:16.920592  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1103 bytes)
I1012 17:11:16.934226  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1012 17:11:16.946072  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1066 bytes)
I1012 17:11:16.956303  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1012 17:11:16.969158  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1074 bytes)
I1012 17:11:16.984716  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1012 17:11:17.001356  101069 ssh_runner.go:215] scp /home/joaquin/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1066 bytes)
I1012 17:11:17.014648  101069 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I1012 17:11:17.031534  101069 ssh_runner.go:148] Run: openssl version
I1012 17:11:17.037497  101069 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1012 17:11:17.045447  101069 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1012 17:11:17.051913  101069 certs.go:389] hashing: -rw-r--r-- 1 root root 1066 Oct 11 18:47 /usr/share/ca-certificates/minikubeCA.pem
I1012 17:11:17.051982  101069 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1012 17:11:17.056970  101069 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1012 17:11:17.065011  101069 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.13.1.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.12-snapshot3@sha256:1d687ba53e19dbe5fafe4cc18aa07f269ecc4b7b622f2251b5bf569ddb474e9b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.64 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s}
I1012 17:11:17.065139  101069 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1012 17:11:17.094917  101069 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1012 17:11:17.100320  101069 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1012 17:11:17.106742  101069 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1012 17:11:17.113701  101069 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1012 17:11:17.113739  101069 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
I1012 17:11:36.246709  101069 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": (19.132946235s)
I1012 17:11:36.246735  101069 cni.go:74] Creating CNI manager for ""
I1012 17:11:36.246747  101069 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1012 17:11:36.246766  101069 ssh_runner.go:148] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1012 17:11:36.246875  101069 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.19.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1012 17:11:36.246965  101069 ssh_runner.go:148] Run: sudo /var/lib/minikube/binaries/v1.19.2/kubectl label nodes minikube.k8s.io/version=v1.13.1 minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_10_12T17_11_36_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1012 17:11:36.456276  101069 ops.go:34] apiserver oom_adj: -16
I1012 17:11:36.456312  101069 kubeadm.go:881] duration metric: took 209.494819ms to wait for elevateKubeSystemPrivileges.
I1012 17:11:36.456561  101069 kubeadm.go:326] StartCluster complete in 19.391556318s
I1012 17:11:36.456582  101069 settings.go:123] acquiring lock: {Name:mkbc63435461720bec67b61711df8815e1f10837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:36.456658  101069 settings.go:131] Updating kubeconfig:  /home/joaquin/.kube/config
I1012 17:11:36.457380  101069 lock.go:35] WriteFile acquiring /home/joaquin/.kube/config: {Name:mk5029c206fd56095994fd32d1e1c65f00abe2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1012 17:11:36.457552  101069 start.go:199] Will wait wait-timeout for node ...
I1012 17:11:36.461101  101069 out.go:109] 🔎  Verifying Kubernetes components...
🔎  Verifying Kubernetes components...
I1012 17:11:36.457649  101069 addons.go:359] enableAddons start: toEnable=map[], additional=[]
I1012 17:11:36.461204  101069 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I1012 17:11:36.457722  101069 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl scale deployment --replicas=1 coredns -n=kube-system
I1012 17:11:36.461226  101069 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W1012 17:11:36.461233  101069 addons.go:140] addon storage-provisioner should already be in state true
I1012 17:11:36.461245  101069 host.go:65] Checking if "minikube" exists ...
I1012 17:11:36.461304  101069 addons.go:55] Setting default-storageclass=true in profile "minikube"
I1012 17:11:36.461318  101069 addons.go:274] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1012 17:11:36.461664  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:11:36.461673  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:11:36.461701  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:11:36.461784  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:11:36.463191  101069 api_server.go:48] waiting for apiserver process to appear ...
I1012 17:11:36.463254  101069 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1012 17:11:36.496067  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:40985
I1012 17:11:36.497919  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:11:36.498533  101069 main.go:115] libmachine: Using API Version  1
I1012 17:11:36.498560  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:11:36.499453  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:11:36.499453  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:39145
I1012 17:11:36.499648  101069 main.go:115] libmachine: (minikube) Calling .GetState
I1012 17:11:36.500229  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:11:36.500845  101069 main.go:115] libmachine: Using API Version  1
I1012 17:11:36.500924  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:11:36.501499  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:11:36.502085  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:11:36.502139  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:11:36.528014  101069 addons.go:131] Setting addon default-storageclass=true in "minikube"
W1012 17:11:36.528038  101069 addons.go:140] addon default-storageclass should already be in state true
I1012 17:11:36.528053  101069 host.go:65] Checking if "minikube" exists ...
I1012 17:11:36.528307  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:11:36.528338  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:11:36.528500  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:44023
I1012 17:11:36.532545  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:11:36.533329  101069 main.go:115] libmachine: Using API Version  1
I1012 17:11:36.533414  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:11:36.534045  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:11:36.534368  101069 main.go:115] libmachine: (minikube) Calling .GetState
I1012 17:11:36.536582  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:36.537463  101069 addons.go:243] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1012 17:11:36.537514  101069 ssh_runner.go:215] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1012 17:11:36.537545  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:36.549035  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:36.549500  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:36.549675  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:36.549794  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:36.556121  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:38923
I1012 17:11:36.556836  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:11:36.557463  101069 main.go:115] libmachine: Using API Version  1
I1012 17:11:36.557488  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:11:36.557828  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:11:36.558313  101069 main.go:115] libmachine: Found binary path at /home/joaquin/.minikube/bin/docker-machine-driver-kvm2
I1012 17:11:36.558351  101069 main.go:115] libmachine: Launching plugin server for driver kvm2
I1012 17:11:36.581844  101069 main.go:115] libmachine: Plugin server listening at address 127.0.0.1:34531
I1012 17:11:36.582272  101069 main.go:115] libmachine: () Calling .GetVersion
I1012 17:11:36.582758  101069 main.go:115] libmachine: Using API Version  1
I1012 17:11:36.582828  101069 main.go:115] libmachine: () Calling .SetConfigRaw
I1012 17:11:36.583168  101069 main.go:115] libmachine: () Calling .GetMachineName
I1012 17:11:36.583353  101069 main.go:115] libmachine: (minikube) Calling .GetState
I1012 17:11:36.584933  101069 main.go:115] libmachine: (minikube) Calling .DriverName
I1012 17:11:36.585175  101069 addons.go:243] installing /etc/kubernetes/addons/storageclass.yaml
I1012 17:11:36.585249  101069 ssh_runner.go:215] scp deploy/addons/storageclass/storageclass.yaml.tmpl --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1012 17:11:36.585298  101069 main.go:115] libmachine: (minikube) Calling .GetSSHHostname
I1012 17:11:36.595262  101069 main.go:115] libmachine: (minikube) Calling .GetSSHPort
I1012 17:11:36.595501  101069 main.go:115] libmachine: (minikube) Calling .GetSSHKeyPath
I1012 17:11:36.595634  101069 main.go:115] libmachine: (minikube) Calling .GetSSHUsername
I1012 17:11:36.595744  101069 sshutil.go:44] new ssh client: &{IP:192.168.39.64 Port:22 SSHKeyPath:/home/joaquin/.minikube/machines/minikube/id_rsa Username:docker}
I1012 17:11:36.631946  101069 start.go:553] successfully scaled coredns replicas to 1
I1012 17:11:36.631960  101069 api_server.go:68] duration metric: took 174.384764ms to wait for apiserver process to appear ...
I1012 17:11:36.631972  101069 api_server.go:84] waiting for apiserver healthz status ...
I1012 17:11:36.631983  101069 api_server.go:221] Checking apiserver healthz at https://192.168.39.64:8443/healthz ...
I1012 17:11:36.641922  101069 api_server.go:241] https://192.168.39.64:8443/healthz returned 200:
ok
I1012 17:11:36.642959  101069 api_server.go:137] control plane version: v1.19.2
I1012 17:11:36.643038  101069 api_server.go:127] duration metric: took 11.002231ms to wait for apiserver health ...
I1012 17:11:36.643078  101069 system_pods.go:43] waiting for kube-system pods to appear ...
I1012 17:11:36.650480  101069 system_pods.go:59] 0 kube-system pods found
I1012 17:11:36.650604  101069 retry.go:30] will retry after 263.082536ms: only 0 pod(s) have shown up
I1012 17:11:36.676404  101069 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1012 17:11:36.730560  101069 ssh_runner.go:148] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1012 17:11:36.917304  101069 system_pods.go:59] 0 kube-system pods found
I1012 17:11:36.917326  101069 retry.go:30] will retry after 381.329545ms: only 0 pod(s) have shown up
I1012 17:11:37.030446  101069 main.go:115] libmachine: Making call to close driver server
I1012 17:11:37.030464  101069 main.go:115] libmachine: Making call to close driver server
I1012 17:11:37.030471  101069 main.go:115] libmachine: (minikube) Calling .Close
I1012 17:11:37.030475  101069 main.go:115] libmachine: (minikube) Calling .Close
I1012 17:11:37.030670  101069 main.go:115] libmachine: Successfully made call to close driver server
I1012 17:11:37.030685  101069 main.go:115] libmachine: Making call to close connection to plugin binary
I1012 17:11:37.030700  101069 main.go:115] libmachine: Making call to close driver server
I1012 17:11:37.030710  101069 main.go:115] libmachine: (minikube) Calling .Close
I1012 17:11:37.030792  101069 main.go:115] libmachine: (minikube) DBG | Closing plugin on server side
I1012 17:11:37.030825  101069 main.go:115] libmachine: Successfully made call to close driver server
I1012 17:11:37.030834  101069 main.go:115] libmachine: Making call to close connection to plugin binary
I1012 17:11:37.030843  101069 main.go:115] libmachine: Making call to close driver server
I1012 17:11:37.030851  101069 main.go:115] libmachine: (minikube) Calling .Close
I1012 17:11:37.030912  101069 main.go:115] libmachine: Successfully made call to close driver server
I1012 17:11:37.030927  101069 main.go:115] libmachine: Making call to close connection to plugin binary
I1012 17:11:37.030942  101069 main.go:115] libmachine: (minikube) DBG | Closing plugin on server side
I1012 17:11:37.032541  101069 main.go:115] libmachine: (minikube) DBG | Closing plugin on server side
I1012 17:11:37.032582  101069 main.go:115] libmachine: Successfully made call to close driver server
I1012 17:11:37.032591  101069 main.go:115] libmachine: Making call to close connection to plugin binary
I1012 17:11:37.032602  101069 main.go:115] libmachine: Making call to close driver server
I1012 17:11:37.032611  101069 main.go:115] libmachine: (minikube) Calling .Close
I1012 17:11:37.035028  101069 main.go:115] libmachine: (minikube) DBG | Closing plugin on server side
I1012 17:11:37.035028  101069 main.go:115] libmachine: Successfully made call to close driver server
I1012 17:11:37.035077  101069 main.go:115] libmachine: Making call to close connection to plugin binary
I1012 17:11:37.044873  101069 out.go:109] 🌟  Enabled addons: default-storageclass, storage-provisioner
🌟  Enabled addons: default-storageclass, storage-provisioner
I1012 17:11:37.044901  101069 addons.go:361] enableAddons completed in 587.256086ms
I1012 17:11:37.308044  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:37.308131  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:37.308173  101069 retry.go:30] will retry after 422.765636ms: only 1 pod(s) have shown up
I1012 17:11:37.740286  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:37.740389  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:37.740447  101069 retry.go:30] will retry after 473.074753ms: only 1 pod(s) have shown up
I1012 17:11:38.235183  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:38.235248  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:38.235289  101069 retry.go:30] will retry after 587.352751ms: only 1 pod(s) have shown up
I1012 17:11:38.839744  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:38.839850  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:38.839907  101069 retry.go:30] will retry after 834.206799ms: only 1 pod(s) have shown up
I1012 17:11:39.679403  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:39.679470  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:39.679508  101069 retry.go:30] will retry after 746.553905ms: only 1 pod(s) have shown up
I1012 17:11:40.437698  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:40.437780  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:40.437812  101069 retry.go:30] will retry after 987.362415ms: only 1 pod(s) have shown up
I1012 17:11:41.443030  101069 system_pods.go:59] 1 kube-system pods found
I1012 17:11:41.443121  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:41.443190  101069 retry.go:30] will retry after 1.189835008s: only 1 pod(s) have shown up
I1012 17:11:42.635856  101069 system_pods.go:59] 3 kube-system pods found
I1012 17:11:42.635967  101069 system_pods.go:61] "coredns-f9fd979d6-tzb5n" [f0f431d0-5fb4-45db-8878-769c2a94c9f6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:42.635982  101069 system_pods.go:61] "kube-proxy-vphc2" [92b141df-cec1-48b8-8f24-37a4f989576e] Pending
I1012 17:11:42.635995  101069 system_pods.go:61] "storage-provisioner" [af9391a0-70bb-4aa6-9cd3-77a128ca7818] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
I1012 17:11:42.636014  101069 system_pods.go:74] duration metric: took 5.99292537s to wait for pod list to return data ...
I1012 17:11:42.636038  101069 kubeadm.go:465] duration metric: took 6.178463966s to wait for : map[apiserver:true system_pods:true] ...
I1012 17:11:42.636062  101069 node_conditions.go:101] verifying NodePressure condition ...
I1012 17:11:42.639572  101069 node_conditions.go:121] node storage ephemeral capacity is 16954224Ki
I1012 17:11:42.639619  101069 node_conditions.go:122] node cpu capacity is 2
I1012 17:11:42.639637  101069 node_conditions.go:104] duration metric: took 3.562255ms to run NodePressure ...
I1012 17:11:42.639668  101069 start.go:204] waiting for startup goroutines ...
I1012 17:11:42.656519  101069 out.go:109] 🏄  Done! kubectl is now configured to use "minikube" by default
🏄  Done! kubectl is now configured to use "minikube" by default
E1012 17:11:42.656583  101069 start.go:240] kubectl info: exec: exit status 1

Full output of minikube start command used, if not already included:

(see above)
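
For anyone triaging the `kubectl info: exec: exit status 1` line at the end of the start output: my working assumption (not verified against the minikube source) is that minikube shells out to the first `kubectl` found on PATH after bringing the cluster up, so a minimal client-side sanity check looks like this:

```shell
# Hedged sketch: confirm which kubectl binary minikube would resolve,
# and whether it exits cleanly on its own.
which -a kubectl                # every kubectl on PATH, in resolution order
kubectl version --client       # a non-zero exit here would match "exec: exit status 1"
kubectl config current-context  # should print "minikube" after a successful start
```

If the first `kubectl` on PATH is stale or broken, the cluster itself can still come up fine (as the logs below show) while the post-start `kubectl info` probe fails.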

Optional: Full output of minikube logs command:

==> Docker <==
-- Logs begin at Tue 2020-10-13 00:11:02 UTC, end at Tue 2020-10-13 00:13:18 UTC. --
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303170780Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303182072Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303189921Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303197432Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303251096Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303280550Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303644532Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303668034Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303696759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303706420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303714389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303721780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303728845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303736642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303744935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303752321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303761474Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303785052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303794003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303801315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303808517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303899679Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303933281Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.303943601Z" level=info msg="containerd successfully booted in 0.002824s"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.315547674Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.315680734Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.315761590Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.315835968Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.316458973Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.316475928Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.316491980Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.316499908Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764514323Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764547621Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764557314Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764580816Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764588840Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764596745Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.764860025Z" level=info msg="Loading containers: start."
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.894617305Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.945035338Z" level=info msg="Loading containers: done."
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.981057672Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
Oct 13 00:11:15 minikube dockerd[2214]: time="2020-10-13T00:11:15.981498780Z" level=info msg="Daemon has completed initialization"
Oct 13 00:11:16 minikube dockerd[2214]: time="2020-10-13T00:11:16.045348338Z" level=info msg="API listen on /var/run/docker.sock"
Oct 13 00:11:16 minikube systemd[1]: Started Docker Application Container Engine.
Oct 13 00:11:16 minikube dockerd[2214]: time="2020-10-13T00:11:16.048920709Z" level=info msg="API listen on [::]:2376"
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.373119233Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a2e604133abdd014bcca3d2e85db588a04cc58287c5ffee1aa9229182fcdf836/shim.sock" debug=false pid=3082
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.408949030Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/288a2bab9264306ba5a7ea5c805183b62b2d4c2ffeca245984451708459d9785/shim.sock" debug=false pid=3092
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.415053285Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6cd7dd05bd2681a4608a8e08abb209dfbab8dcf9160e7596d17405ef30dad43f/shim.sock" debug=false pid=3105
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.422984216Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5fc80adbb26bff6fb472699bc508a4e994c9dec7078d01b919419d4a5a7bba0a/shim.sock" debug=false pid=3118
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.936552926Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c5db0fa4880a8f3efd5dc528863b8675b85b5d915429cefaea3ae874266c8c69/shim.sock" debug=false pid=3229
Oct 13 00:11:27 minikube dockerd[2214]: time="2020-10-13T00:11:27.964739989Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f37f41275a7ac82539f5b6e6e2e27e5422eeb76915a7d25f3762499d92af7d0e/shim.sock" debug=false pid=3248
Oct 13 00:11:28 minikube dockerd[2214]: time="2020-10-13T00:11:28.054460364Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f37a7b1cb35f22dced22bd7a61bf181fe6d3c46c7004a19250377c969d5ca058/shim.sock" debug=false pid=3281
Oct 13 00:11:28 minikube dockerd[2214]: time="2020-10-13T00:11:28.268917970Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6107c4403904bffc4d0dc7152b265de7e6bb96874901053c60022206f73e1edb/shim.sock" debug=false pid=3391
Oct 13 00:11:43 minikube dockerd[2214]: time="2020-10-13T00:11:43.698672924Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0dc3b54bcbb7ff759849ea9a79fdb9dfa57e636eae6edba95411337f63e6f71/shim.sock" debug=false pid=3997
Oct 13 00:11:44 minikube dockerd[2214]: time="2020-10-13T00:11:44.098945210Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0da1c0edd95b2d3b19ac43c884be4c808dca54e8311d00b5f32eb2d97fd9fec4/shim.sock" debug=false pid=4038
Oct 13 00:11:53 minikube dockerd[2214]: time="2020-10-13T00:11:53.294718799Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d80bff15649267bef89d1055fb78b725045304e5c7b86c5300846c89e44ae32/shim.sock" debug=false pid=4162
Oct 13 00:11:53 minikube dockerd[2214]: time="2020-10-13T00:11:53.789242148Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b4f7beb7f7a781837f3d85a385619af95bdd90508d371f73551558acf6046891/shim.sock" debug=false pid=4196
Oct 13 00:11:59 minikube dockerd[2214]: time="2020-10-13T00:11:59.015035663Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9d393222f8dd5f4bd470dd8613b0e08e179985257ee3e03c1832014c90546f82/shim.sock" debug=false pid=4261
Oct 13 00:11:59 minikube dockerd[2214]: time="2020-10-13T00:11:59.476965189Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e6e4d41d6229acd1b19e2c1ef9523323bb92ad5cbfc1fd04a3fbe7e69254ba47/shim.sock" debug=false pid=4314

==> container status <==
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
e6e4d41d6229a       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   9d393222f8dd5
b4f7beb7f7a78       bad58561c4be7       About a minute ago   Running             storage-provisioner       0                   6d80bff156492
0da1c0edd95b2       d373dd5a8593a       About a minute ago   Running             kube-proxy                0                   a0dc3b54bcbb7
6107c4403904b       0369cf4303ffd       About a minute ago   Running             etcd                      0                   a2e604133abdd
f37a7b1cb35f2       2f32d66b884f8       About a minute ago   Running             kube-scheduler            0                   5fc80adbb26bf
f37f41275a7ac       8603821e1a7a5       About a minute ago   Running             kube-controller-manager   0                   6cd7dd05bd268
c5db0fa4880a8       607331163122e       About a minute ago   Running             kube-apiserver            0                   288a2bab92643

==> coredns [e6e4d41d6229] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=1fd1f67f338cbab4b3e5a6e4c71c551f522ca138-dirty
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_10_12T17_11_36_0700
                    minikube.k8s.io/version=v1.13.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 13 Oct 2020 00:11:32 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Tue, 13 Oct 2020 00:13:12 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Tue, 13 Oct 2020 00:11:52 +0000   Tue, 13 Oct 2020 00:11:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Tue, 13 Oct 2020 00:11:52 +0000   Tue, 13 Oct 2020 00:11:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Tue, 13 Oct 2020 00:11:52 +0000   Tue, 13 Oct 2020 00:11:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Tue, 13 Oct 2020 00:11:52 +0000   Tue, 13 Oct 2020 00:11:52 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.39.64
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  16954224Ki
  hugepages-2Mi:      0
  memory:             5671428Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  16954224Ki
  hugepages-2Mi:      0
  memory:             5671428Ki
  pods:               110
System Info:
  Machine ID:                 dd913fa842ab49f08dbc85fb158a5761
  System UUID:                dd913fa8-42ab-49f0-8dbc-85fb158a5761
  Boot ID:                    61a0ceb6-d4c6-48f5-8df9-a4fc5d6aebe1
  Kernel Version:             4.19.114
  OS Image:                   Buildroot 2019.02.11
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.12
  Kubelet Version:            v1.19.2
  Kube-Proxy Version:         v1.19.2
Non-terminated Pods:          (7 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-f9fd979d6-tzb5n             100m (5%)     0 (0%)      70Mi (1%)        170Mi (3%)     97s
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 kube-apiserver-minikube             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 kube-controller-manager-minikube    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 kube-proxy-vphc2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 kube-scheduler-minikube             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (32%)  0 (0%)
  memory             70Mi (1%)   170Mi (3%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                  From        Message
  ----    ------                   ----                 ----        -------
  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     113s (x5 over 113s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 97s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  97s                  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    97s                  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     97s                  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  97s                  kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 95s                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                87s                  kubelet     Node minikube status is now: NodeReady

==> dmesg <==
[Oct13 00:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.054607] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[Oct13 00:11] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.841462] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.007605] systemd-fstab-generator[1151]: Ignoring "noauto" for root device
[  +0.001488] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +1.187009] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.154878] vboxguest: loading out-of-tree module taints kernel.
[  +0.003324] vboxguest: PCI device not found, probably running on physical hardware.
[  +5.352607] systemd-fstab-generator[2004]: Ignoring "noauto" for root device
[  +0.082130] systemd-fstab-generator[2014]: Ignoring "noauto" for root device
[  +4.865800] systemd-fstab-generator[2202]: Ignoring "noauto" for root device
[  +1.594632] kauditd_printk_skb: 65 callbacks suppressed
[  +0.368034] systemd-fstab-generator[2361]: Ignoring "noauto" for root device
[  +3.478272] systemd-fstab-generator[2608]: Ignoring "noauto" for root device
[  +6.677048] kauditd_printk_skb: 107 callbacks suppressed
[  +9.564743] systemd-fstab-generator[3728]: Ignoring "noauto" for root device
[  +8.413147] kauditd_printk_skb: 41 callbacks suppressed
[ +14.654349] kauditd_printk_skb: 38 callbacks suppressed
[Oct13 00:13] NFSD: Unable to end grace period: -110

==> etcd [6107c4403904] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-13 00:11:29.010219 I | etcdmain: etcd Version: 3.4.13
2020-10-13 00:11:29.010309 I | etcdmain: Git SHA: ae9734ed2
2020-10-13 00:11:29.010385 I | etcdmain: Go Version: go1.12.17
2020-10-13 00:11:29.010430 I | etcdmain: Go OS/Arch: linux/amd64
2020-10-13 00:11:29.010483 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-13 00:11:29.010708 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-10-13 00:11:29.011279 I | embed: name = minikube
2020-10-13 00:11:29.011339 I | embed: data dir = /var/lib/minikube/etcd
2020-10-13 00:11:29.011391 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-13 00:11:29.011444 I | embed: heartbeat = 100ms
2020-10-13 00:11:29.011478 I | embed: election = 1000ms
2020-10-13 00:11:29.011528 I | embed: snapshot count = 10000
2020-10-13 00:11:29.011570 I | embed: advertise client URLs = https://192.168.39.64:2379
2020-10-13 00:11:29.041570 I | etcdserver: starting member 7dcc3547d111063c in cluster c3619ef1effce12d
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c switched to configuration voters=()
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c became follower at term 0
raft2020/10/13 00:11:29 INFO: newRaft 7dcc3547d111063c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c became follower at term 1
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c switched to configuration voters=(9064678732556469820)
2020-10-13 00:11:29.048400 W | auth: simple token is not cryptographically signed
2020-10-13 00:11:29.057736 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2020-10-13 00:11:29.058999 I | etcdserver: 7dcc3547d111063c as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-10-13 00:11:29.061138 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c switched to configuration voters=(9064678732556469820)
2020-10-13 00:11:29.061380 I | etcdserver/membership: added member 7dcc3547d111063c [https://192.168.39.64:2380] to cluster c3619ef1effce12d
2020-10-13 00:11:29.061513 I | embed: listening for peers on 192.168.39.64:2380
2020-10-13 00:11:29.061736 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c is starting a new election at term 1
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c became candidate at term 2
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c received MsgVoteResp from 7dcc3547d111063c at term 2
raft2020/10/13 00:11:29 INFO: 7dcc3547d111063c became leader at term 2
raft2020/10/13 00:11:29 INFO: raft.node: 7dcc3547d111063c elected leader 7dcc3547d111063c at term 2
2020-10-13 00:11:29.944244 I | etcdserver: setting up the initial cluster version to 3.4
2020-10-13 00:11:29.945329 I | embed: ready to serve client requests
2020-10-13 00:11:29.946050 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.39.64:2379]} to cluster c3619ef1effce12d
2020-10-13 00:11:29.946164 I | embed: ready to serve client requests
2020-10-13 00:11:29.948873 I | embed: serving client requests on 192.168.39.64:2379
2020-10-13 00:11:29.955548 I | embed: serving client requests on 127.0.0.1:2379
2020-10-13 00:11:30.000969 N | etcdserver/membership: set the initial cluster version to 3.4
2020-10-13 00:11:30.001464 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-13 00:11:51.722113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:00.606208 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:10.604330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:20.607153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:30.606849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:40.612319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:12:50.606791 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:13:00.604445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2020-10-13 00:13:10.603913 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
 00:13:19 up 2 min,  0 users,  load average: 1.03, 0.47, 0.18
Linux minikube 4.19.114 #1 SMP Fri Sep 18 16:40:03 PDT 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.11"

==> kube-apiserver [c5db0fa4880a] <==
I1013 00:11:30.942079       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1013 00:11:30.942201       1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1013 00:11:30.944331       1 client.go:360] parsed scheme: "endpoint"
I1013 00:11:30.944457       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1013 00:11:30.950318       1 client.go:360] parsed scheme: "endpoint"
I1013 00:11:30.950340       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]
I1013 00:11:32.674276       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1013 00:11:32.674319       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1013 00:11:32.674708       1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1013 00:11:32.675277       1 secure_serving.go:197] Serving securely on [::]:8443
I1013 00:11:32.675507       1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1013 00:11:32.675363       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1013 00:11:32.675627       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1013 00:11:32.675701       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1013 00:11:32.675794       1 autoregister_controller.go:141] Starting autoregister controller
I1013 00:11:32.675882       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1013 00:11:32.675957       1 controller.go:83] Starting OpenAPI AggregationController
I1013 00:11:32.677019       1 available_controller.go:404] Starting AvailableConditionController
I1013 00:11:32.677109       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1013 00:11:32.677226       1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I1013 00:11:32.677304       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1013 00:11:32.677377       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1013 00:11:32.679062       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1013 00:11:32.679080       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1013 00:11:32.679231       1 controller.go:86] Starting OpenAPI controller
I1013 00:11:32.679251       1 naming_controller.go:291] Starting NamingConditionController
I1013 00:11:32.679265       1 establishing_controller.go:76] Starting EstablishingController
I1013 00:11:32.679283       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1013 00:11:32.679296       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1013 00:11:32.679316       1 crd_finalizer.go:266] Starting CRDFinalizer
I1013 00:11:32.714075       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1013 00:11:32.714116       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E1013 00:11:32.723736       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.64, ResourceVersion: 0, AdditionalErrorMsg: 
I1013 00:11:32.775899       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1013 00:11:32.776024       1 cache.go:39] Caches are synced for autoregister controller
I1013 00:11:32.777217       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1013 00:11:32.777480       1 shared_informer.go:247] Caches are synced for crd-autoregister 
I1013 00:11:32.779305       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I1013 00:11:33.674241       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1013 00:11:33.674280       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1013 00:11:33.685090       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I1013 00:11:33.730230       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I1013 00:11:33.730435       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1013 00:11:34.381944       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1013 00:11:34.437426       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1013 00:11:34.613686       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.39.64]
I1013 00:11:34.621943       1 controller.go:606] quota admission added evaluator for: endpoints
I1013 00:11:34.637292       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1013 00:11:35.148171       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1013 00:11:36.064320       1 controller.go:606] quota admission added evaluator for: deployments.apps
I1013 00:11:36.212575       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1013 00:11:42.230635       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1013 00:11:42.324941       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1013 00:11:42.485644       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1013 00:12:08.435470       1 client.go:360] parsed scheme: "passthrough"
I1013 00:12:08.436324       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 00:12:08.437099       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1013 00:12:44.568684       1 client.go:360] parsed scheme: "passthrough"
I1013 00:12:44.568767       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I1013 00:12:44.568791       1 clientconn.go:948] ClientConn switching balancer to "pick_first"

==> kube-controller-manager [f37f41275a7a] <==
I1013 00:11:41.444749       1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client"
I1013 00:11:41.444830       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
I1013 00:11:41.445577       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I1013 00:11:41.447847       1 controllermanager.go:549] Started "csrsigning"
I1013 00:11:41.448474       1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown"
I1013 00:11:41.448595       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
I1013 00:11:41.448755       1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I1013 00:11:41.576339       1 controllermanager.go:549] Started "csrcleaner"
I1013 00:11:41.576452       1 cleaner.go:83] Starting CSR cleaner controller
E1013 00:11:41.823056       1 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1013 00:11:41.823433       1 controllermanager.go:541] Skipping "service"
I1013 00:11:42.071094       1 controllermanager.go:549] Started "persistentvolume-binder"
I1013 00:11:42.071396       1 pv_controller_base.go:303] Starting persistent volume controller
I1013 00:11:42.071487       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I1013 00:11:42.074438       1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1013 00:11:42.083941       1 shared_informer.go:247] Caches are synced for namespace 
W1013 00:11:42.101766       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1013 00:11:42.125194       1 shared_informer.go:247] Caches are synced for HPA 
I1013 00:11:42.126206       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I1013 00:11:42.129197       1 shared_informer.go:247] Caches are synced for service account 
I1013 00:11:42.133664       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
I1013 00:11:42.133841       1 shared_informer.go:247] Caches are synced for endpoint_slice 
I1013 00:11:42.134209       1 shared_informer.go:247] Caches are synced for taint 
I1013 00:11:42.134482       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
W1013 00:11:42.134652       1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1013 00:11:42.134778       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1013 00:11:42.135568       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
I1013 00:11:42.136221       1 taint_manager.go:187] Starting NoExecuteTaintManager
I1013 00:11:42.138823       1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1013 00:11:42.143123       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
I1013 00:11:42.144143       1 shared_informer.go:247] Caches are synced for job 
I1013 00:11:42.145031       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
I1013 00:11:42.148338       1 shared_informer.go:247] Caches are synced for PVC protection 
I1013 00:11:42.148720       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
I1013 00:11:42.171673       1 shared_informer.go:247] Caches are synced for persistent volume 
I1013 00:11:42.171908       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
I1013 00:11:42.172297       1 shared_informer.go:247] Caches are synced for expand 
I1013 00:11:42.173143       1 shared_informer.go:247] Caches are synced for TTL 
I1013 00:11:42.174004       1 shared_informer.go:247] Caches are synced for GC 
I1013 00:11:42.174194       1 shared_informer.go:247] Caches are synced for endpoint 
I1013 00:11:42.181505       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
I1013 00:11:42.200485       1 shared_informer.go:247] Caches are synced for PV protection 
I1013 00:11:42.202910       1 shared_informer.go:247] Caches are synced for stateful set 
I1013 00:11:42.218335       1 shared_informer.go:247] Caches are synced for ReplicationController 
I1013 00:11:42.222575       1 shared_informer.go:247] Caches are synced for daemon sets 
I1013 00:11:42.230706       1 shared_informer.go:247] Caches are synced for attach detach 
I1013 00:11:42.249885       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vphc2"
I1013 00:11:42.271464       1 shared_informer.go:247] Caches are synced for ReplicaSet 
I1013 00:11:42.276621       1 shared_informer.go:247] Caches are synced for disruption 
I1013 00:11:42.276642       1 disruption.go:339] Sending events to api server.
I1013 00:11:42.321342       1 shared_informer.go:247] Caches are synced for deployment 
I1013 00:11:42.329506       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1"
I1013 00:11:42.352806       1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-tzb5n"
I1013 00:11:42.374662       1 shared_informer.go:247] Caches are synced for resource quota 
I1013 00:11:42.382228       1 shared_informer.go:247] Caches are synced for resource quota 
I1013 00:11:42.453978       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1013 00:11:42.754162       1 shared_informer.go:247] Caches are synced for garbage collector 
I1013 00:11:42.770665       1 shared_informer.go:247] Caches are synced for garbage collector 
I1013 00:11:42.770822       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1013 00:11:57.136340       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [0da1c0edd95b] <==
I1013 00:11:44.220008       1 node.go:136] Successfully retrieved node IP: 192.168.39.64
I1013 00:11:44.220077       1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.39.64), assume IPv4 operation
W1013 00:11:44.288348       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1013 00:11:44.288500       1 server_others.go:186] Using iptables Proxier.
W1013 00:11:44.288511       1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1013 00:11:44.288536       1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1013 00:11:44.288809       1 server.go:650] Version: v1.19.2
I1013 00:11:44.289429       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1013 00:11:44.289462       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 00:11:44.289522       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 00:11:44.289567       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 00:11:44.290058       1 config.go:315] Starting service config controller
I1013 00:11:44.290073       1 shared_informer.go:240] Waiting for caches to sync for service config
I1013 00:11:44.290094       1 config.go:224] Starting endpoint slice config controller
I1013 00:11:44.290102       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1013 00:11:44.390541       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I1013 00:11:44.390683       1 shared_informer.go:247] Caches are synced for service config 

==> kube-scheduler [f37a7b1cb35f] <==
I1013 00:11:28.336102       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:11:28.336145       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:11:28.911705       1 serving.go:331] Generated self-signed cert in-memory
W1013 00:11:32.743710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1013 00:11:32.743728       1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1013 00:11:32.743739       1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous.
W1013 00:11:32.743744       1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1013 00:11:32.764759       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:11:32.764912       1 registry.go:173] Registering SelectorSpread plugin
I1013 00:11:32.767600       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1013 00:11:32.767694       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1013 00:11:32.767718       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1013 00:11:32.767635       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
E1013 00:11:32.773892       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1013 00:11:32.774113       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1013 00:11:32.774125       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1013 00:11:32.774342       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1013 00:11:32.774436       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1013 00:11:32.774508       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1013 00:11:32.774537       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1013 00:11:32.774636       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1013 00:11:32.774741       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1013 00:11:32.774787       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1013 00:11:32.774834       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1013 00:11:32.774880       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1013 00:11:32.774984       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1013 00:11:33.594955       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1013 00:11:33.615330       1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1013 00:11:33.636742       1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1013 00:11:33.644669       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1013 00:11:33.861643       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1013 00:11:33.872651       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1013 00:11:33.894281       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1013 00:11:33.969492       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1013 00:11:33.982220       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1013 00:11:34.068963       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1013 00:11:34.127758       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1013 00:11:34.147195       1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I1013 00:11:36.667903       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 

==> kubelet <==
-- Logs begin at Tue 2020-10-13 00:11:02 UTC, end at Tue 2020-10-13 00:13:20 UTC. --
Oct 13 00:11:36 minikube kubelet[3737]: I1013 00:11:36.303654    3737 kubelet.go:273] Watching apiserver
Oct 13 00:11:42 minikube kubelet[3737]: E1013 00:11:42.430404    3737 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Oct 13 00:11:42 minikube kubelet[3737]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.442513    3737 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.442881    3737 server.go:1147] Started kubelet
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.446439    3737 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.447940    3737 volume_manager.go:265] Starting Kubelet Volume Manager
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.460432    3737 server.go:152] Starting to listen on 0.0.0.0:10250
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.461540    3737 server.go:424] Adding debug handlers to kubelet server.
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.450329    3737 desired_state_of_world_populator.go:139] Desired state populator starts to run
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.511829    3737 status_manager.go:158] Starting to sync pod status with apiserver
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.511866    3737 kubelet.go:1741] Starting kubelet main sync loop.
Oct 13 00:11:42 minikube kubelet[3737]: E1013 00:11:42.511911    3737 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.579471    3737 kubelet_node_status.go:70] Attempting to register node minikube
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.599795    3737 kubelet_node_status.go:108] Node minikube was previously registered
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.599949    3737 kubelet_node_status.go:73] Successfully registered node minikube
Oct 13 00:11:42 minikube kubelet[3737]: E1013 00:11:42.612652    3737 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651737    3737 cpu_manager.go:184] [cpumanager] starting with none policy
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651753    3737 cpu_manager.go:185] [cpumanager] reconciling every 10s
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651774    3737 state_mem.go:36] [cpumanager] initializing new in-memory state store
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651934    3737 state_mem.go:88] [cpumanager] updated default cpuset: ""
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651945    3737 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.651963    3737 policy_none.go:43] [cpumanager] none policy: Start
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.653032    3737 plugin_manager.go:114] Starting Kubelet Plugin Manager
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.813114    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.826803    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.834261    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.844007    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.853864    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874071    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fccfb6ca22cdf1270a7d17c472216ebc-ca-certs") pod "kube-apiserver-minikube" (UID: "fccfb6ca22cdf1270a7d17c472216ebc")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874209    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/fccfb6ca22cdf1270a7d17c472216ebc-k8s-certs") pod "kube-apiserver-minikube" (UID: "fccfb6ca22cdf1270a7d17c472216ebc")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874287    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fccfb6ca22cdf1270a7d17c472216ebc-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "fccfb6ca22cdf1270a7d17c472216ebc")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874441    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/d421d4b6a0d0e042995d6d88d0637437-ca-certs") pod "kube-controller-manager-minikube" (UID: "d421d4b6a0d0e042995d6d88d0637437")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874488    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/d421d4b6a0d0e042995d6d88d0637437-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "d421d4b6a0d0e042995d6d88d0637437")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.874939    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/a64ea6c1de3b86ac595118eab4d95046-etcd-certs") pod "etcd-minikube" (UID: "a64ea6c1de3b86ac595118eab4d95046")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.875101    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/a64ea6c1de3b86ac595118eab4d95046-etcd-data") pod "etcd-minikube" (UID: "a64ea6c1de3b86ac595118eab4d95046")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.975880    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/92b141df-cec1-48b8-8f24-37a4f989576e-kube-proxy") pod "kube-proxy-vphc2" (UID: "92b141df-cec1-48b8-8f24-37a4f989576e")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.976125    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/92b141df-cec1-48b8-8f24-37a4f989576e-lib-modules") pod "kube-proxy-vphc2" (UID: "92b141df-cec1-48b8-8f24-37a4f989576e")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.976335    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/d421d4b6a0d0e042995d6d88d0637437-k8s-certs") pod "kube-controller-manager-minikube" (UID: "d421d4b6a0d0e042995d6d88d0637437")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.976487    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-rws7f" (UniqueName: "kubernetes.io/secret/92b141df-cec1-48b8-8f24-37a4f989576e-kube-proxy-token-rws7f") pod "kube-proxy-vphc2" (UID: "92b141df-cec1-48b8-8f24-37a4f989576e")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.976881    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/d421d4b6a0d0e042995d6d88d0637437-kubeconfig") pod "kube-controller-manager-minikube" (UID: "d421d4b6a0d0e042995d6d88d0637437")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.976979    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/92b141df-cec1-48b8-8f24-37a4f989576e-xtables-lock") pod "kube-proxy-vphc2" (UID: "92b141df-cec1-48b8-8f24-37a4f989576e")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.977146    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/d421d4b6a0d0e042995d6d88d0637437-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "d421d4b6a0d0e042995d6d88d0637437")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.977237    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff7d12f9e4f14e202a85a7c5534a3129-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff7d12f9e4f14e202a85a7c5534a3129")
Oct 13 00:11:42 minikube kubelet[3737]: I1013 00:11:42.977302    3737 reconciler.go:157] Reconciler: start to sync state
Oct 13 00:11:43 minikube kubelet[3737]: E1013 00:11:43.612633    3737 kuberuntime_manager.go:940] PodSandboxStatus of sandbox "a0dc3b54bcbb7ff759849ea9a79fdb9dfa57e636eae6edba95411337f63e6f71" for pod "kube-proxy-vphc2_kube-system(92b141df-cec1-48b8-8f24-37a4f989576e)" error: rpc error: code = Unknown desc = Error: No such container: a0dc3b54bcbb7ff759849ea9a79fdb9dfa57e636eae6edba95411337f63e6f71
Oct 13 00:11:52 minikube kubelet[3737]: I1013 00:11:52.772820    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:52 minikube kubelet[3737]: I1013 00:11:52.922916    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-wfzlp" (UniqueName: "kubernetes.io/secret/af9391a0-70bb-4aa6-9cd3-77a128ca7818-storage-provisioner-token-wfzlp") pod "storage-provisioner" (UID: "af9391a0-70bb-4aa6-9cd3-77a128ca7818")
Oct 13 00:11:52 minikube kubelet[3737]: I1013 00:11:52.923910    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/af9391a0-70bb-4aa6-9cd3-77a128ca7818-tmp") pod "storage-provisioner" (UID: "af9391a0-70bb-4aa6-9cd3-77a128ca7818")
Oct 13 00:11:56 minikube kubelet[3737]: I1013 00:11:56.898633    3737 topology_manager.go:233] [topologymanager] Topology Admit Handler
Oct 13 00:11:56 minikube kubelet[3737]: E1013 00:11:56.907056    3737 reflector.go:127] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
Oct 13 00:11:56 minikube kubelet[3737]: E1013 00:11:56.907553    3737 reflector.go:127] object-"kube-system"/"coredns-token-c2rl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-c2rl8" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'minikube' and this object
Oct 13 00:11:57 minikube kubelet[3737]: I1013 00:11:57.043822    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-c2rl8" (UniqueName: "kubernetes.io/secret/f0f431d0-5fb4-45db-8878-769c2a94c9f6-coredns-token-c2rl8") pod "coredns-f9fd979d6-tzb5n" (UID: "f0f431d0-5fb4-45db-8878-769c2a94c9f6")
Oct 13 00:11:57 minikube kubelet[3737]: I1013 00:11:57.044396    3737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f0f431d0-5fb4-45db-8878-769c2a94c9f6-config-volume") pod "coredns-f9fd979d6-tzb5n" (UID: "f0f431d0-5fb4-45db-8878-769c2a94c9f6")
Oct 13 00:11:58 minikube kubelet[3737]: E1013 00:11:58.145601    3737 secret.go:195] Couldn't get secret kube-system/coredns-token-c2rl8: failed to sync secret cache: timed out waiting for the condition
Oct 13 00:11:58 minikube kubelet[3737]: E1013 00:11:58.146707    3737 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/f0f431d0-5fb4-45db-8878-769c2a94c9f6-coredns-token-c2rl8 podName:f0f431d0-5fb4-45db-8878-769c2a94c9f6 nodeName:}" failed. No retries permitted until 2020-10-13 00:11:58.646513989 +0000 UTC m=+22.644431668 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-c2rl8\" (UniqueName: \"kubernetes.io/secret/f0f431d0-5fb4-45db-8878-769c2a94c9f6-coredns-token-c2rl8\") pod \"coredns-f9fd979d6-tzb5n\" (UID: \"f0f431d0-5fb4-45db-8878-769c2a94c9f6\") : failed to sync secret cache: timed out waiting for the condition"
Oct 13 00:11:58 minikube kubelet[3737]: E1013 00:11:58.145642    3737 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Oct 13 00:11:58 minikube kubelet[3737]: E1013 00:11:58.147036    3737 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/f0f431d0-5fb4-45db-8878-769c2a94c9f6-config-volume podName:f0f431d0-5fb4-45db-8878-769c2a94c9f6 nodeName:}" failed. No retries permitted until 2020-10-13 00:11:58.646950263 +0000 UTC m=+22.644867924 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0f431d0-5fb4-45db-8878-769c2a94c9f6-config-volume\") pod \"coredns-f9fd979d6-tzb5n\" (UID: \"f0f431d0-5fb4-45db-8878-769c2a94c9f6\") : failed to sync configmap cache: timed out waiting for the condition"
Oct 13 00:11:59 minikube kubelet[3737]: W1013 00:11:59.401221    3737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-tzb5n through plugin: invalid network status for
Oct 13 00:11:59 minikube kubelet[3737]: W1013 00:11:59.867468    3737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-tzb5n through plugin: invalid network status for

==> storage-provisioner [b4f7beb7f7a7] <==
I1013 00:11:53.886189       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I1013 00:11:53.893022       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1013 00:11:53.893270       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_4acc05df-3732-440a-8502-b99d9e2ef7e7!
I1013 00:11:53.893883       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99f4c873-9f0b-48d9-aad8-b76cfbd172e6", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_4acc05df-3732-440a-8502-b99d9e2ef7e7 became leader
I1013 00:11:53.993971       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_4acc05df-3732-440a-8502-b99d9e2ef7e7!
medyagh commented 3 years ago

@darkn3rd

Despite the error message, your cluster seems to be healthy:

๐Ÿ„  Done! kubectl is now configured to use "minikube" by default
E1012 17:11:42.656583  101069 start.go:240] kubectl info: exec: exit status 1

That is an odd error message. Do you mind checking whether you have "kubectl" installed?

Do you also mind sharing the output of "minikube status"?

And the output of this command?

minikube kubectl -- get pods -A
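
For completeness, checking both in one go could look like this (a sketch; it only assumes kubectl is expected on $PATH):

which kubectl              # confirm a kubectl binary is found on PATH
kubectl version --client   # confirm the client binary actually runs
minikube status            # report host, kubelet, and apiserver state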

/triage needs-information
/kind support

medyagh commented 3 years ago

Also, not sure if you implied this: do you NOT get an error without disabling AppArmor, i.e., without running the following?

sudo systemctl stop apparmor && sudo systemctl disable apparmor
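
For reference, reverting that to retest could look like this (a sketch, assuming systemd manages the AppArmor service):

sudo systemctl enable apparmor   # re-enable the unit at boot
sudo systemctl start apparmor    # start it again now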
darkn3rd commented 3 years ago

@medyagh When I run dmesg -w during minikube start, I see AppArmor deny messages for libvirt components. For this reason, I disabled it and restarted to rule out AppArmor as a possible cause.
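
For reference, watching for those denials can be reproduced with something like this (a sketch, assuming kernel messages reach dmesg and the apparmor-utils package is installed):

sudo dmesg -w | grep -i apparmor   # stream kernel messages, keep only AppArmor entries
sudo aa-status                     # summarize loaded profiles and their enforcement modes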

Also, here is the output of the requested command:

$ minikube kubectl -- get pods -A
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 41.01 MiB / 41.01 MiB [--------------] 100.00% 138.66 MiB p/s 0s
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-f9fd979d6-kk6tn            1/1     Running   0          62s
kube-system   etcd-minikube                      0/1     Running   0          62s
kube-system   kube-apiserver-minikube            1/1     Running   0          62s
kube-system   kube-controller-manager-minikube   0/1     Running   0          62s
kube-system   kube-proxy-gl2b7                   1/1     Running   0          62s
kube-system   kube-scheduler-minikube            0/1     Running   0          62s
kube-system   storage-provisioner                1/1     Running   1          68s
darkn3rd commented 3 years ago

I had some bizarre version of kubectl and have no idea where it came from:

Client Version: version.Info{Major:"0", Minor:"10+", GitVersion:"v0.10.0-dirty", GitCommit:"71e26cbeb9c0bf1b6cc4c29b1f9d930e53513911", GitTreeState:"dirty"}

So I replaced it:

# resolve the latest stable release tag
LATEST_KUBE=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)
# download the matching kubectl binary, make it executable, and install it
curl -sOL https://storage.googleapis.com/kubernetes-release/release/$LATEST_KUBE/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:32:58Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
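
Since a stale binary earlier in $PATH was the likely culprit, one way to confirm which copy now wins the lookup (a sketch, assuming a bash shell):

type -a kubectl            # list every kubectl on PATH, in resolution order
hash -r                    # drop bash's cached command locations
kubectl version --client   # re-check the client version being resolved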

And now it seems I am no longer getting this error. I guess we can close this issue.