kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

EOF accessing docker-env endpoint #7126

Closed. peterlin741 closed this issue 4 years ago.

peterlin741 commented 4 years ago

If I attempt to run an application (Cloud Run) that starts a gcloud-managed version of minikube (v1.7.3) while a local version of minikube (v1.8.2) is running and set as the current context, the sample fails and corrupts the state of Docker. (Other scenarios may fail as well.)

Afterwards, if I switch to a different context (a GKE cluster, for example) and/or run minikube delete and remove my local version of minikube from the kubectl context, the gcloud-managed version of minikube sometimes still fails to start through IntelliJ when running the sample (what a normal user would try).

However, after deleting both minikube contexts, running minikube delete --all --purge, and restarting Docker, rerunning the sample locally works.
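For reference, the full reset sequence just described can be sketched as the following shell commands. The context names and the Docker Desktop restart step are assumptions for this particular macOS setup, not a general recipe:

```shell
# Remove both minikube contexts from kubeconfig
# (context names assumed from this setup)
kubectl config delete-context minikube || true
kubectl config delete-context cloud-code-minikube || true
kubectl config unset current-context

# Delete all minikube profiles and purge ~/.minikube entirely
minikube delete --all --purge

# Restart Docker Desktop (macOS; adjust for other platforms)
osascript -e 'quit app "Docker"'
open -a Docker
```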

The exact command to reproduce the issue is minikube start --profile cloud-code-minikube --keep-context true --wait false --vm-driver docker --interactive false --alsologtostderr -v=8, which is run from my application.

Here are some of the Docker errors:

time="2020-03-20T17:30:31-04:00" level=warning msg="error checking cache, caching may not work as expected: getting imageID for java-cloud-run-local:latest: inspecting image: error during connect: Get https://127.0.0.1:32769/v1.24/images/java-cloud-run-local:latest/json: EOF"
time="2020-03-20T17:30:35-04:00" level=fatal msg="exiting dev mode because first build failed: build failed: building [java-cloud-run-local]: build artifact: docker build: error during connect: Post https://127.0.0.1:32769/v1.24/build?buildargs=null&cachefrom=null&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=null&memory=0&memswap=0&networkmode=&rm=0&shmsize=0&t=java-cloud-run-local%3Alatest&target=&ulimits=null&version=: EOF"
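One way to check whether the cluster's Docker endpoint is reachable at all, independent of the sample application, is to point the local docker CLI at it via minikube docker-env. This is a diagnostic sketch using the profile name from the logs above; the port is assigned dynamically by the docker driver:

```shell
# Export DOCKER_HOST/DOCKER_CERT_PATH etc. for the cluster's daemon
eval "$(minikube -p cloud-code-minikube docker-env)"

# Probe the daemon; an EOF here reproduces the error shown above
docker version
```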

The full output of the command that failed (mixed in with application specific logs):

Checking status of cloud-code-minikube Minikube cluster...
Status of Minikube api-server is STOPPED
Starting Minikube with cloud-code-minikube profile...
/Users/petlin/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/minikube start --profile cloud-code-minikube --keep-context true --wait false --vm-driver docker --interactive false --alsologtostderr -v=8
* [cloud-code-minikube] minikube v1.7.3 on Darwin 10.15.3
* Using the docker (experimental) driver based on user configuration
* Reconfiguring existing host ...
* Starting existing docker VM for "cloud-code-minikube" ...
* Preparing Kubernetes v1.17.3 on Docker 19.03.2 ...
  - kubeadm.pod-network-cidr=10.244.0.0/16
* Launching Kubernetes ... 
* Enabling addons: default-storageclass, storage-provisioner
* To connect to this cluster, use: kubectl --context=cloud-code-minikube
I0320 16:50:48.578664   37383 start.go:249] hostinfo: {"hostname":"skellige.roam.corp.google.com","uptime":15037,"bootTime":1584722411,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.15.3","kernelVersion":"19.3.0","virtualizationSystem":"","virtualizationRole":"","hostid":"a8dde75c-cd97-3c37-b35d-1070cc50d2ce"}
W0320 16:50:48.578779   37383 start.go:257] gopshost.Virtualization returned error: not implemented yet
I0320 16:50:48.579480   37383 driver.go:211] Setting default libvirt URI to qemu:///system
I0320 16:50:48.661138   37383 start.go:296] selected driver: docker
I0320 16:50:48.661170   37383 start.go:472] validating driver "docker" against &{Name:cloud-code-minikube KeepContext:true EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.7.3.iso Memory:2000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader:<nil> DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.17.3 ClusterName:cloud-code-minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false} Nodes:[{Name:cloud-code-minikube IP:172.17.0.2 Port:8443 KubernetesVersion:v1.17.3 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true]}
I0320 16:50:48.661253   37383 start.go:478] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0320 16:50:48.661693   37383 start.go:860] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0320 16:50:48.661922   37383 profile.go:100] Saving config to /Users/petlin/.minikube/profiles/cloud-code-minikube/config.json ...
I0320 16:50:48.662014   37383 cache.go:91] acquiring lock: {Name:mkdc20a250d444a26b273d9a14b2e1efe5cadd15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.662014   37383 cache.go:91] acquiring lock: {Name:mkab2150517e839ea105bbd638c14fb6957eb81b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.662100   37383 cache.go:91] acquiring lock: {Name:mkc7af618bbfed8a8f955a720e4b153a9f8d8a01 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.662100   37383 cache.go:91] acquiring lock: {Name:mk24b54bbc72cb52587def613da08724a7c3b6d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.662114   37383 cache.go:91] acquiring lock: {Name:mkcfa7b06299cce3471e9ad719734c5c2c7e9112 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663413   37383 cache.go:91] acquiring lock: {Name:mkee4de3a1925b104c78f0f6280d225a4ae891ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663415   37383 cache.go:91] acquiring lock: {Name:mkfe4673b23b985c476c721dee8046c05e60f547 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663309   37383 cache.go:91] acquiring lock: {Name:mk5f82f3edc043040fc3d1b0a4d9a34cc977f901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663491   37383 cache.go:91] acquiring lock: {Name:mk1caf020aec135110e9c5080de8c29296ca3590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663622   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 exists
I0320 16:50:48.663638   37383 cache.go:80] cache image "k8s.gcr.io/kube-proxy:v1.17.3" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3" took 1.611396ms
I0320 16:50:48.663650   37383 cache.go:65] save to tar file k8s.gcr.io/kube-proxy:v1.17.3 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.3 succeeded
I0320 16:50:48.663680   37383 cache.go:99] /Users/petlin/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 exists
I0320 16:50:48.663690   37383 cache.go:80] cache image "kubernetesui/metrics-scraper:v1.0.2" -> "/Users/petlin/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2" took 1.699105ms
I0320 16:50:48.663698   37383 cache.go:65] save to tar file kubernetesui/metrics-scraper:v1.0.2 -> /Users/petlin/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 succeeded
I0320 16:50:48.663711   37383 cache.go:99] /Users/petlin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta8 exists
I0320 16:50:48.663725   37383 cache.go:80] cache image "kubernetesui/dashboard:v2.0.0-beta8" -> "/Users/petlin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta8" took 419.781µs
I0320 16:50:48.663737   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 exists
I0320 16:50:48.663740   37383 cache.go:99] /Users/petlin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I0320 16:50:48.663736   37383 cache.go:65] save to tar file kubernetesui/dashboard:v2.0.0-beta8 -> /Users/petlin/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta8 succeeded
I0320 16:50:48.663753   37383 cache.go:80] cache image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" -> "/Users/petlin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1" took 1.624351ms
I0320 16:50:48.663753   37383 cache.go:80] cache image "k8s.gcr.io/kube-scheduler:v1.17.3" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3" took 1.758597ms
I0320 16:50:48.663773   37383 cache.go:65] save to tar file k8s.gcr.io/kube-scheduler:v1.17.3 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.3 succeeded
I0320 16:50:48.663771   37383 cache.go:65] save to tar file gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /Users/petlin/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I0320 16:50:48.663780   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 exists
I0320 16:50:48.663790   37383 cache.go:80] cache image "k8s.gcr.io/kube-apiserver:v1.17.3" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3" took 1.752963ms
I0320 16:50:48.663804   37383 cache.go:65] save to tar file k8s.gcr.io/kube-apiserver:v1.17.3 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.3 succeeded
I0320 16:50:48.663790   37383 cache.go:91] acquiring lock: {Name:mk0398aeac1974c1b2f2ac2cbd729f453db73197 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 16:50:48.663968   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 exists
I0320 16:50:48.663970   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/pause_3.1 exists
I0320 16:50:48.663981   37383 cache.go:80] cache image "k8s.gcr.io/kube-controller-manager:v1.17.3" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3" took 1.931249ms
I0320 16:50:48.663990   37383 cache.go:65] save to tar file k8s.gcr.io/kube-controller-manager:v1.17.3 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.3 succeeded
I0320 16:50:48.663990   37383 cache.go:80] cache image "k8s.gcr.io/pause:3.1" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/pause_3.1" took 201.706µs
I0320 16:50:48.664000   37383 cache.go:65] save to tar file k8s.gcr.io/pause:3.1 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/pause_3.1 succeeded
I0320 16:50:48.664029   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
I0320 16:50:48.664048   37383 cache.go:80] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 2.038704ms
I0320 16:50:48.664059   37383 cache.go:65] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0320 16:50:48.664162   37383 cache.go:99] /Users/petlin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 exists
I0320 16:50:48.664202   37383 cache.go:80] cache image "k8s.gcr.io/coredns:1.6.5" -> "/Users/petlin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" took 2.139711ms
I0320 16:50:48.664212   37383 cache.go:65] save to tar file k8s.gcr.io/coredns:1.6.5 -> /Users/petlin/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5 succeeded
I0320 16:50:48.664222   37383 cache.go:72] Successfully saved all images to host disk.
I0320 16:50:48.664352   37383 start.go:244] acquiring machines lock for cloud-code-minikube: {Name:mk0e251c7c3ad68a82faf3ad5ce28ab70fbf9448 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0320 16:50:48.664423   37383 start.go:248] acquired machines lock for "cloud-code-minikube" in 60.446µs
I0320 16:50:48.664475   37383 start.go:84] Skipping create...Using existing machine configuration
I0320 16:50:48.664536   37383 fix.go:61] fixHost starting: cloud-code-minikube
I0320 16:50:49.072803   37383 start.go:182] post-start starting for "cloud-code-minikube" (driver="docker")
I0320 16:50:49.072823   37383 start.go:192] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0320 16:50:49.072834   37383 start.go:215] determining appropriate runner for "docker"
I0320 16:50:49.072840   37383 start.go:226] Returning KICRunner for "docker" driver
I0320 16:50:49.234130   37383 filesync.go:118] Scanning /Users/petlin/.minikube/addons for local assets ...
I0320 16:50:49.234333   37383 filesync.go:118] Scanning /Users/petlin/.minikube/files for local assets ...
I0320 16:50:49.234389   37383 start.go:185] post-start completed in 161.558114ms
I0320 16:50:49.234398   37383 fix.go:141] Configuring auth for driver docker ...
I0320 16:50:49.234437   37383 main.go:110] libmachine: Waiting for SSH to be available...
I0320 16:50:49.234449   37383 main.go:110] libmachine: Getting to WaitForSSH function...
I0320 16:50:49.273333   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:49.273575   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:49.273585   37383 main.go:110] libmachine: About to run SSH command:
exit 0
I0320 16:50:49.276220   37383 main.go:110] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0320 16:50:52.420530   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0320 16:50:52.420548   37383 main.go:110] libmachine: Detecting the provisioner...
I0320 16:50:52.457332   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:52.457541   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:52.457549   37383 main.go:110] libmachine: About to run SSH command:
cat /etc/os-release
I0320 16:50:52.585321   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: NAME="Ubuntu"
VERSION="19.10 (Eoan Ermine)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 19.10"
VERSION_ID="19.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=eoan
UBUNTU_CODENAME=eoan

I0320 16:50:52.585383   37383 main.go:110] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0320 16:50:52.585408   37383 main.go:110] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0320 16:50:52.585418   37383 main.go:110] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0320 16:50:52.586172   37383 main.go:110] libmachine: found compatible host: ubuntu
I0320 16:50:52.586193   37383 main.go:110] libmachine: provisioning hostname "cloud-code-minikube"
I0320 16:50:52.622552   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:52.622742   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:52.622753   37383 main.go:110] libmachine: About to run SSH command:
sudo hostname cloud-code-minikube && echo "cloud-code-minikube" | sudo tee /etc/hostname
I0320 16:50:52.765585   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: cloud-code-minikube

I0320 16:50:52.805800   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:52.805982   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:52.805995   37383 main.go:110] libmachine: About to run SSH command:

        if ! grep -xq '.*\scloud-code-minikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cloud-code-minikube/g' /etc/hosts;
            else 
                echo '127.0.1.1 cloud-code-minikube' | sudo tee -a /etc/hosts; 
            fi
        fi
I0320 16:50:52.933430   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0320 16:50:52.933508   37383 main.go:110] libmachine: set auth options {CertDir:/Users/petlin/.minikube CaCertPath:/Users/petlin/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/petlin/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/petlin/.minikube/machines/server.pem ServerKeyPath:/Users/petlin/.minikube/machines/server-key.pem ClientKeyPath:/Users/petlin/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/petlin/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/petlin/.minikube}
I0320 16:50:52.933517   37383 main.go:110] libmachine: setting up certificates
I0320 16:50:52.933530   37383 main.go:110] libmachine: configureAuth start
I0320 16:50:52.970107   37383 main.go:110] libmachine: copyHostCerts
I0320 16:50:52.970156   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/certs/key.pem -> /Users/petlin/.minikube/key.pem
I0320 16:50:52.970829   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/certs/ca.pem -> /Users/petlin/.minikube/ca.pem
I0320 16:50:52.971105   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/certs/cert.pem -> /Users/petlin/.minikube/cert.pem
I0320 16:50:52.971422   37383 main.go:110] libmachine: generating server cert: /Users/petlin/.minikube/machines/server.pem ca-key=/Users/petlin/.minikube/certs/ca.pem private-key=/Users/petlin/.minikube/certs/ca-key.pem org=petlin.cloud-code-minikube san=[172.17.0.2 localhost 127.0.0.1]
I0320 16:50:53.068666   37383 main.go:110] libmachine: copyRemoteCerts
I0320 16:50:53.147408   37383 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0320 16:50:53.208741   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/machines/server.pem -> /etc/docker/server.pem
I0320 16:50:53.215753   37383 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: source file and destination file are different sizes
I0320 16:50:53.216768   37383 ssh_runner.go:174] Transferring 1135 bytes to /etc/docker/server.pem
I0320 16:50:53.218099   37383 ssh_runner.go:193] server.pem: copied 1135 bytes
I0320 16:50:53.240590   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0320 16:50:53.244737   37383 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: source file and destination file are different sizes
I0320 16:50:53.245512   37383 ssh_runner.go:174] Transferring 1675 bytes to /etc/docker/server-key.pem
I0320 16:50:53.246599   37383 ssh_runner.go:193] server-key.pem: copied 1675 bytes
I0320 16:50:53.265941   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0320 16:50:53.270773   37383 ssh_runner.go:174] Transferring 1038 bytes to /etc/docker/ca.pem
I0320 16:50:53.273279   37383 ssh_runner.go:193] ca.pem: copied 1038 bytes
I0320 16:50:53.291328   37383 main.go:110] libmachine: configureAuth took 357.741035ms
I0320 16:50:53.291593   37383 main.go:110] libmachine: setting minikube options for container-runtime
I0320 16:50:53.328723   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:53.328962   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:53.328971   37383 main.go:110] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0320 16:50:53.457081   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: overlay

I0320 16:50:53.457106   37383 main.go:110] libmachine: root file system type: overlay
I0320 16:50:53.458035   37383 main.go:110] libmachine: Setting Docker configuration on the remote daemon...
I0320 16:50:53.494512   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:53.494686   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:53.494746   37383 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service
I0320 16:50:53.631818   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0320 16:50:53.675372   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:53.675532   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:53.675541   37383 main.go:110] libmachine: About to run SSH command:
sudo systemctl -f enable docker
I0320 16:50:53.871806   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0320 16:50:53.909225   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:53.909474   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:53.909494   37383 main.go:110] libmachine: About to run SSH command:
sudo systemctl daemon-reload
I0320 16:50:54.099587   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0320 16:50:54.136573   37383 main.go:110] libmachine: Using SSH client type: native
I0320 16:50:54.136810   37383 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x13b2a20] 0x13b29f0 <nil>  [] 0s} 127.0.0.1 32770 <nil> <nil>}
I0320 16:50:54.136819   37383 main.go:110] libmachine: About to run SSH command:
sudo systemctl -f restart docker
I0320 16:50:54.620626   37383 main.go:110] libmachine: SSH cmd err, output: <nil>: 
I0320 16:50:54.620645   37383 fix.go:63] fixHost completed within 5.956023039s
I0320 16:50:54.620651   37383 start.go:72] releasing machines lock for "cloud-code-minikube", held for 5.956099177s
I0320 16:50:55.100154   37383 profile.go:100] Saving config to /Users/petlin/.minikube/profiles/cloud-code-minikube/config.json ...
I0320 16:50:56.438172   37383 settings.go:123] acquiring lock: {Name:mkfdc6b5352268f5e506c1dfc9a8b416859f9165 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:50:56.438294   37383 settings.go:131] Updating kubeconfig:  /Users/petlin/.kube/config
I0320 16:50:56.440324   37383 lock.go:35] WriteFile acquiring /Users/petlin/.kube/config: {Name:mka3ea2186823c607d8241c3018ac69cd0e357ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:50:56.440821   37383 cache_images.go:65] LoadImages start: [k8s.gcr.io/kube-proxy:v1.17.3 k8s.gcr.io/kube-scheduler:v1.17.3 k8s.gcr.io/kube-controller-manager:v1.17.3 k8s.gcr.io/kube-apiserver:v1.17.3 k8s.gcr.io/coredns:1.6.5 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/pause:3.1 gcr.io/k8s-minikube/storage-provisioner:v1.8.1 kubernetesui/dashboard:v2.0.0-beta8 kubernetesui/metrics-scraper:v1.0.2]
I0320 16:50:56.444226   37383 image.go:43] couldn't find image digest kubernetesui/metrics-scraper:v1.0.2 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.444262   37383 image.go:43] couldn't find image digest k8s.gcr.io/kube-controller-manager:v1.17.3 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.444267   37383 image.go:73] retrieving image: kubernetesui/metrics-scraper:v1.0.2
I0320 16:50:56.444275   37383 image.go:73] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.3
I0320 16:50:56.445103   37383 image.go:43] couldn't find image digest k8s.gcr.io/kube-apiserver:v1.17.3 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.445125   37383 image.go:73] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.3
I0320 16:50:56.445663   37383 image.go:43] couldn't find image digest kubernetesui/dashboard:v2.0.0-beta8 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.445703   37383 image.go:73] retrieving image: kubernetesui/dashboard:v2.0.0-beta8
I0320 16:50:56.446032   37383 image.go:43] couldn't find image digest k8s.gcr.io/pause:3.1 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.446045   37383 image.go:73] retrieving image: k8s.gcr.io/pause:3.1
I0320 16:50:56.447008   37383 image.go:43] couldn't find image digest k8s.gcr.io/coredns:1.6.5 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.447024   37383 image.go:73] retrieving image: k8s.gcr.io/coredns:1.6.5
I0320 16:50:56.447194   37383 image.go:43] couldn't find image digest gcr.io/k8s-minikube/storage-provisioner:v1.8.1 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.447207   37383 image.go:73] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0320 16:50:56.447924   37383 image.go:43] couldn't find image digest k8s.gcr.io/kube-scheduler:v1.17.3 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.447945   37383 image.go:73] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.3
I0320 16:50:56.449768   37383 image.go:43] couldn't find image digest k8s.gcr.io/kube-proxy:v1.17.3 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.449787   37383 image.go:73] retrieving image: k8s.gcr.io/kube-proxy:v1.17.3
I0320 16:50:56.450670   37383 image.go:43] couldn't find image digest k8s.gcr.io/etcd:3.4.3-0 from local daemon: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40 
I0320 16:50:56.450684   37383 image.go:73] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0320 16:50:56.456181   37383 image.go:81] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.3: Error response from daemon: reference does not exist
I0320 16:50:56.458364   37383 image.go:81] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.3: Error response from daemon: reference does not exist
I0320 16:50:56.458367   37383 image.go:81] daemon lookup for kubernetesui/metrics-scraper:v1.0.2: Error response from daemon: reference does not exist
I0320 16:50:56.458961   37383 image.go:81] daemon lookup for kubernetesui/dashboard:v2.0.0-beta8: Error response from daemon: reference does not exist
I0320 16:50:56.460090   37383 image.go:81] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
I0320 16:50:56.462172   37383 image.go:81] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.3: Error response from daemon: reference does not exist
I0320 16:50:56.462203   37383 image.go:81] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v1.8.1: Error response from daemon: reference does not exist
I0320 16:50:56.463384   37383 image.go:81] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
I0320 16:50:56.463591   37383 image.go:81] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.3: Error response from daemon: reference does not exist
I0320 16:50:56.464433   37383 image.go:81] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
I0320 16:50:58.105579   37383 cache_images.go:98] Successfully loaded all cached images
I0320 16:50:58.105601   37383 cache_images.go:69] LoadImages completed in 1.664740374s
I0320 16:50:58.105689   37383 kubeadm.go:119] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.0.2 APIServerPort:8443 KubernetesVersion:v1.17.3 EtcdDataDir:/var/lib/minikube/etcd ClusterName:cloud-code-minikube NodeName:cloud-code-minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.0.2"]]}] FeatureArgs:map[] NoTaintMaster:true}
I0320 16:50:58.105791   37383 kubeadm.go:123] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "cloud-code-minikube"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "172.17.0.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.17.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12

I0320 16:50:58.373932   37383 kubeadm.go:431] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.17.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=cloud-code-minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.0.2 --pod-manifest-path=/etc/kubernetes/manifests

[Install]
 config:
{KubernetesVersion:v1.17.3 ClusterName:cloud-code-minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false}
I0320 16:50:58.729534   37383 cache_binaries.go:93] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl
I0320 16:50:58.729574   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/cache/linux/v1.17.3/kubectl -> /var/lib/minikube/binaries/v1.17.3/kubectl
I0320 16:50:58.729534   37383 cache_binaries.go:93] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubelet
I0320 16:50:58.729598   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/cache/linux/v1.17.3/kubelet -> /var/lib/minikube/binaries/v1.17.3/kubelet
I0320 16:50:58.729534   37383 cache_binaries.go:93] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubeadm
I0320 16:50:58.729625   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/cache/linux/v1.17.3/kubeadm -> /var/lib/minikube/binaries/v1.17.3/kubeadm
I0320 16:51:04.216364   37383 certs.go:59] Setting up /Users/petlin/.minikube for IP: 172.17.0.2
I0320 16:51:04.216409   37383 certs.go:68] acquiring lock: {Name:mk0460c68c311028d33432ec2e1699fef7f847d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.216684   37383 crypto.go:69] Generating cert /Users/petlin/.minikube/client.crt with IP's: []
I0320 16:51:04.219340   37383 crypto.go:157] Writing cert to /Users/petlin/.minikube/client.crt ...
I0320 16:51:04.219352   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/client.crt: {Name:mk75c89578e70204c6eed4e4af943df971ab7436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.220310   37383 crypto.go:165] Writing key to /Users/petlin/.minikube/client.key ...
I0320 16:51:04.220320   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/client.key: {Name:mka966bc7f13e98493edabd23fa33e3e5ae16c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.220520   37383 crypto.go:69] Generating cert /Users/petlin/.minikube/apiserver.crt with IP's: [172.17.0.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0320 16:51:04.223268   37383 crypto.go:157] Writing cert to /Users/petlin/.minikube/apiserver.crt ...
I0320 16:51:04.223277   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/apiserver.crt: {Name:mka1b95666febf32893e1c569816017541e3efa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.223464   37383 crypto.go:165] Writing key to /Users/petlin/.minikube/apiserver.key ...
I0320 16:51:04.223471   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/apiserver.key: {Name:mke574df3139a3973e657f213aeff93604637987 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.223615   37383 crypto.go:69] Generating cert /Users/petlin/.minikube/proxy-client.crt with IP's: []
I0320 16:51:04.226083   37383 crypto.go:157] Writing cert to /Users/petlin/.minikube/proxy-client.crt ...
I0320 16:51:04.226090   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/proxy-client.crt: {Name:mkc1d1d710b93b0319dc5e29efbae1d4f4c33161 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.226265   37383 crypto.go:165] Writing key to /Users/petlin/.minikube/proxy-client.key ...
I0320 16:51:04.226271   37383 lock.go:35] WriteFile acquiring /Users/petlin/.minikube/proxy-client.key: {Name:mkcde71162eb13642c452bbed01ee3c8f925e7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0320 16:51:04.226431   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0320 16:51:04.226462   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0320 16:51:04.226482   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0320 16:51:04.226503   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0320 16:51:04.226522   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0320 16:51:04.226542   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0320 16:51:04.226560   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0320 16:51:04.226578   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0320 16:51:04.227580   37383 vm_assets.go:90] NewFileAsset: /Users/petlin/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0320 16:51:06.536892   37383 kubeadm.go:299] restartCluster start
I0320 16:51:06.653778   37383 kubeadm.go:139] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
stdout:

stderr:
I0320 16:51:08.659700   37383 kic_runner.go:117] Done: [docker exec --privileged cloud-code-minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.17.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml]: (1.800773851s)
I0320 16:51:09.056584   37383 kverify.go:42] waiting for apiserver process to appear ...
I0320 16:51:09.173180   37383 kverify.go:56] duration metric: took 116.597531ms to wait for apiserver process to appear ...
I0320 16:51:09.212837   37383 kapi.go:58] client config for cloud-code-minikube: &rest.Config{Host:"https://127.0.0.1:32768", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/petlin/.minikube/client.crt", KeyFile:"/Users/petlin/.minikube/client.key", CAFile:"/Users/petlin/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20ca940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I0320 16:51:09.217153   37383 kverify.go:72] waiting for kube-system pods to appear ...
I0320 16:51:10.637574   37383 kverify.go:84] 9 kube-system pods found
I0320 16:51:10.637646   37383 kverify.go:93] duration metric: took 1.420448499s to wait for pod list to return data ...
I0320 16:51:11.233599   37383 rbac.go:81] apiserver oom_adj: -16
I0320 16:51:11.233621   37383 kubeadm.go:303] restartCluster took 4.696615377s
I0320 16:51:11.233648   37383 addons.go:271] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
I0320 16:51:11.234283   37383 addons.go:60] IsEnabled "default-storageclass" = false (listed in config=false)
I0320 16:51:11.234419   37383 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false)
I0320 16:51:11.234503   37383 addons.go:60] IsEnabled "istio" = false (listed in config=false)
I0320 16:51:11.234590   37383 addons.go:60] IsEnabled "logviewer" = false (listed in config=false)
I0320 16:51:11.234659   37383 addons.go:60] IsEnabled "gvisor" = false (listed in config=false)
I0320 16:51:11.234748   37383 addons.go:60] IsEnabled "storage-provisioner-gluster" = false (listed in config=false)
I0320 16:51:11.234836   37383 addons.go:60] IsEnabled "ingress" = false (listed in config=false)
I0320 16:51:11.234910   37383 addons.go:60] IsEnabled "metrics-server" = false (listed in config=false)
I0320 16:51:11.234971   37383 addons.go:60] IsEnabled "registry-creds" = false (listed in config=false)
I0320 16:51:11.235041   37383 addons.go:60] IsEnabled "dashboard" = false (listed in config=false)
I0320 16:51:11.235110   37383 addons.go:60] IsEnabled "istio-provisioner" = false (listed in config=false)
I0320 16:51:11.235167   37383 addons.go:60] IsEnabled "nvidia-driver-installer" = false (listed in config=false)
I0320 16:51:11.235227   37383 addons.go:60] IsEnabled "nvidia-gpu-device-plugin" = false (listed in config=false)
I0320 16:51:11.235292   37383 addons.go:60] IsEnabled "ingress-dns" = false (listed in config=false)
I0320 16:51:11.235354   37383 addons.go:60] IsEnabled "efk" = false (listed in config=false)
I0320 16:51:11.235412   37383 addons.go:60] IsEnabled "registry" = false (listed in config=false)
I0320 16:51:11.235480   37383 addons.go:60] IsEnabled "freshpod" = false (listed in config=false)
I0320 16:51:11.235574   37383 addons.go:60] IsEnabled "helm-tiller" = false (listed in config=false)
I0320 16:51:11.235652   37383 addons.go:45] Setting default-storageclass=true in profile "cloud-code-minikube"
I0320 16:51:11.235745   37383 addons.go:225] enableOrDisableStorageClasses default-storageclass=true on "cloud-code-minikube"
! Enabling 'default-storageclass' returned an error: running callbacks: [Error getting storagev1 interface Error creating new client from kubeConfig.ClientConfig(): no Auth Provider found for name "gcp" : Error creating new client from kubeConfig.ClientConfig(): no Auth Provider found for name "gcp"]
I0320 16:51:11.236708   37383 addons.go:45] Setting storage-provisioner=true in profile "cloud-code-minikube"
I0320 16:51:11.236833   37383 addons.go:105] Setting addon storage-provisioner=true in "cloud-code-minikube"
I0320 16:51:11.236903   37383 addons.go:60] IsEnabled "storage-provisioner" = false (listed in config=false)
W0320 16:51:11.236910   37383 addons.go:120] addon storage-provisioner should already be in state true
I0320 16:51:11.236975   37383 status.go:65] Checking if "cloud-code-minikube" exists ...
I0320 16:51:11.283350   37383 addons.go:193] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0320 16:51:11.441739   37383 addons.go:214] Running: /usr/bin/sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0320 16:51:12.195472   37383 addons.go:219] output:
-- stdout --
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
pod/storage-provisioner configured

-- /stdout --
I0320 16:51:12.195496   37383 addons.go:71] Writing out "cloud-code-minikube" config to set storage-provisioner=true...
I0320 16:51:12.195969   37383 addons.go:273] enableAddons completed in 962.301178ms
I0320 16:51:12.247014   37383 start.go:426] kubectl: 1.17.0, cluster: 1.17.3 (minor skew: 0)

Minikube started successfully
Starting Cloud Run dev session...
/Users/petlin/Library/Application Support/cloud-code/bin/versions/7f464904bdb09d835e156f2b0d56a7b52c4a42a3812696c6b16df1eb7511f50b/skaffold dev --filename /var/folders/fm/df9xg3fn0gsgb5ptll9j732h00pwwy/T/cloud-code-temp-skaffold18149706552506538321.tmp --rpc-port 50051 --port-forward=true --status-check=true --enable-rpc=true --kube-context cloud-code-minikube --minikube-profile cloud-code-minikube
Listing files to watch...
 - java-cloud-run-local
Generating tags...
 - java-cloud-run-local -> java-cloud-run-local:latest
Checking cache...
 - java-cloud-run-local: Error checking cache. Rebuilding.
Found [cloud-code-minikube] context, using local docker daemon.
Building [java-cloud-run-local]...

time="2020-03-20T16:51:14-04:00" level=warning msg="error checking cache, caching may not work as expected: getting imageID for java-cloud-run-local:latest: inspecting image: error during connect: Get https://127.0.0.1:32769/v1.24/images/java-cloud-run-local:latest/json: EOF"
time="2020-03-20T16:51:18-04:00" level=fatal msg="exiting dev mode because first build failed: build failed: building [java-cloud-run-local]: build artifact: docker build: error during connect: Post https://127.0.0.1:32769/v1.24/build?buildargs=null&cachefrom=null&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=null&memory=0&memswap=0&networkmode=&rm=0&shmsize=0&t=java-cloud-run-local%3Alatest&target=&ulimits=null&version=: EOF"

Failed to start Cloud Run dev session
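For reference, the stale daemon endpoint can be probed outside of skaffold. These are hypothetical diagnostic commands (the `cloud-code-minikube` profile name comes from the start command above; whether port 32769 is still forwarded after the state corruption is exactly what this checks):

```shell
# Print the docker-env that skaffold consumes for this profile
minikube -p cloud-code-minikube docker-env

# Point the Docker CLI at that daemon and test connectivity;
# an "error during connect: ... EOF" here reproduces the skaffold failure
eval "$(minikube -p cloud-code-minikube docker-env)"
docker version
```

If `docker version` succeeds against the forwarded port, the problem is more likely on the skaffold/client side; if it fails with the same EOF, the forwarded Docker port inside the kic container is dead even though `minikube start` reported success.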

The output of the `minikube logs` command:

==> Docker <==
-- Logs begin at Fri 2020-03-20 21:47:23 UTC, end at Fri 2020-03-20 22:17:49 UTC. --
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.944995605Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945030404Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945096631Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945199599Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945487518Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945542072Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945594643Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945635214Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945670913Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945705110Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945739058Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945776647Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945811844Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945845565Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945884386Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945932926Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.945973814Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.946008555Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.946042711Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.946218004Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.946278872Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.946315070Z" level=info msg="containerd successfully booted in 0.004134s"
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.954446875Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.954558769Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.954617548Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.954668522Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.955652502Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.955734343Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.955789795Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0  <nil>}] <nil>}" module=grpc
Mar 20 21:47:36 minikube dockerd[2122]: time="2020-03-20T21:47:36.955829007Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424091107Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424223416Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424275420Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424330178Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424387953Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424435888Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.424654080Z" level=info msg="Loading containers: start."
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.518225021Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.552968946Z" level=info msg="Loading containers: done."
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.879911690Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.880294321Z" level=info msg="Daemon has completed initialization"
Mar 20 21:47:37 minikube systemd[1]: Started Docker Application Container Engine.
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.908845078Z" level=info msg="API listen on /var/run/docker.sock"
Mar 20 21:47:37 minikube dockerd[2122]: time="2020-03-20T21:47:37.908998896Z" level=info msg="API listen on [::]:2376"
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.283691123Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1145d28be9f378761688b5e810ad26c35fa47c8ed7f13ac79e271dd983b510fa/shim.sock" debug=false pid=2989
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.296765162Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bc99a35300281c005f187b0634da15755ee48b004725c7381ccb6133abc7564f/shim.sock" debug=false pid=2996
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.301279951Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d9cb9cb49f71dc4046671f547013a20f05e440ecc108ad3acff2f66f2f7fd29a/shim.sock" debug=false pid=3003
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.354417293Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/28e4727b769bbf4d87c18fe87d805130d318b06089d858ae7c811b0d846c77d3/shim.sock" debug=false pid=3045
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.572434211Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d109590011eb0b263d4dd0ccdd991f9db187ee672be4603213fcc4789719f99e/shim.sock" debug=false pid=3161
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.585282893Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6c18e5221d8db0f2bc05306e5ffe95648c4b6bfe6cd3b07e7836310fde2dc8c9/shim.sock" debug=false pid=3177
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.596509359Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2e10cf8029978cf81941d403378aedc4c7a0c76c7797c6b25c4a913575e0d8d0/shim.sock" debug=false pid=3187
Mar 20 21:47:49 minikube dockerd[2122]: time="2020-03-20T21:47:49.604843870Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2445450da9e866c14fa59e37f8404f7a41ac8df287ddd32d02e3186639e78765/shim.sock" debug=false pid=3190
Mar 20 21:48:05 minikube dockerd[2122]: time="2020-03-20T21:48:05.569829158Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1f1de65e350db5b58a4460de2c2a2699835798790fc627d59cd3d09327493ce9/shim.sock" debug=false pid=3800
Mar 20 21:48:05 minikube dockerd[2122]: time="2020-03-20T21:48:05.730101513Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a660401e61f3180079969730a039c747789ab8fdf9aa85cc21062bbdc65e5cd1/shim.sock" debug=false pid=3842
Mar 20 21:48:08 minikube dockerd[2122]: time="2020-03-20T21:48:08.409653551Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a0c8ebf1ca92f6711f56a8af19cdfd74e42937e0e145f1857063184fb8797510/shim.sock" debug=false pid=3966
Mar 20 21:48:08 minikube dockerd[2122]: time="2020-03-20T21:48:08.560860300Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12838fde2ad1f745792019b386e179e289d8fbe42ee87b04583f915711671953/shim.sock" debug=false pid=4001
Mar 20 21:48:10 minikube dockerd[2122]: time="2020-03-20T21:48:10.929309442Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/acf52a0976695ead23168985bdedfee7ed78ce3153b373a53cbcf9798e6fccc3/shim.sock" debug=false pid=4065
Mar 20 21:48:10 minikube dockerd[2122]: time="2020-03-20T21:48:10.929964715Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f40908cb98ffa2315159c410766b001dcbd0674120a309e144864c21fd02dc30/shim.sock" debug=false pid=4066
Mar 20 21:48:11 minikube dockerd[2122]: time="2020-03-20T21:48:11.143459621Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a4bbc5941fe20e0042ec57887fe7358444572b0cf7b829b1b96fa5f39b30267/shim.sock" debug=false pid=4169
Mar 20 21:48:11 minikube dockerd[2122]: time="2020-03-20T21:48:11.210362633Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4f8b7bb708823ea99fb7188e441acf71f36f1e9a2f5ea9b9f833e949172ce410/shim.sock" debug=false pid=4195

==> container status <==
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
4f8b7bb708823       70f311871ae12       29 minutes ago      Running             coredns                   0                   acf52a0976695
8a4bbc5941fe2       70f311871ae12       29 minutes ago      Running             coredns                   0                   f40908cb98ffa
12838fde2ad1f       4689081edb103       29 minutes ago      Running             storage-provisioner       0                   a0c8ebf1ca92f
a660401e61f31       ae853e93800dc       29 minutes ago      Running             kube-proxy                0                   1f1de65e350db
2445450da9e86       b0f1517c1f4bb       30 minutes ago      Running             kube-controller-manager   0                   d9cb9cb49f71d
2e10cf8029978       d109c0821a2b9       30 minutes ago      Running             kube-scheduler            0                   28e4727b769bb
d109590011eb0       303ce5db0e90d       30 minutes ago      Running             etcd                      0                   bc99a35300281
6c18e5221d8db       90d27391b7808       30 minutes ago      Running             kube-apiserver            0                   1145d28be9f37

==> coredns [4f8b7bb70882] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> coredns [8a4bbc5941fe] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

==> dmesg <==
[Mar20 21:47] ERROR: earlyprintk= earlyser already used
[  +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[  +0.100677] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177)
[  +3.314891] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.000001] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.007405] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +1.887060] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[  +0.004621] systemd-fstab-generator[1097]: Ignoring "noauto" for root device
[  +0.004270] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.864796] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.560438] vboxguest: loading out-of-tree module taints kernel.
[  +0.003217] vboxguest: PCI device not found, probably running on physical hardware.
[  +1.447328] systemd-fstab-generator[1883]: Ignoring "noauto" for root device
[  +0.124852] systemd-fstab-generator[1899]: Ignoring "noauto" for root device
[  +0.122382] systemd-fstab-generator[1915]: Ignoring "noauto" for root device
[ +11.095362] kauditd_printk_skb: 65 callbacks suppressed
[  +0.739283] systemd-fstab-generator[2323]: Ignoring "noauto" for root device
[  +1.725582] systemd-fstab-generator[2532]: Ignoring "noauto" for root device
[  +8.768574] kauditd_printk_skb: 107 callbacks suppressed
[  +7.504436] systemd-fstab-generator[3531]: Ignoring "noauto" for root device
[Mar20 21:48] kauditd_printk_skb: 32 callbacks suppressed
[  +8.786075] kauditd_printk_skb: 44 callbacks suppressed
[Mar20 21:49] NFSD: Unable to end grace period: -110

==> kernel <==
 22:17:49 up 30 min,  0 users,  load average: 0.28, 0.21, 0.20
Linux minikube 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [6c18e5221d8d] <==
W0320 21:47:51.697671       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0320 21:47:51.700663       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0320 21:47:51.710838       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0320 21:47:51.750463       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0320 21:47:51.750502       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0320 21:47:51.756152       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0320 21:47:51.756181       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0320 21:47:51.757460       1 client.go:361] parsed scheme: "endpoint"
I0320 21:47:51.757494       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0320 21:47:51.764131       1 client.go:361] parsed scheme: "endpoint"
I0320 21:47:51.764229       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0320 21:47:53.319674       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0320 21:47:53.319791       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0320 21:47:53.320343       1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0320 21:47:53.320873       1 secure_serving.go:178] Serving securely on [::]:8443
I0320 21:47:53.320897       1 controller.go:81] Starting OpenAPI AggregationController
I0320 21:47:53.320904       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0320 21:47:53.321960       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0320 21:47:53.321985       1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0320 21:47:53.322074       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0320 21:47:53.322079       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0320 21:47:53.322091       1 autoregister_controller.go:140] Starting autoregister controller
I0320 21:47:53.322094       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0320 21:47:53.322200       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0320 21:47:53.322211       1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0320 21:47:53.322435       1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0320 21:47:53.322458       1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0320 21:47:53.322858       1 available_controller.go:386] Starting AvailableConditionController
I0320 21:47:53.322883       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0320 21:47:53.322906       1 crd_finalizer.go:263] Starting CRDFinalizer
I0320 21:47:53.323429       1 controller.go:85] Starting OpenAPI controller
I0320 21:47:53.323463       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0320 21:47:53.323474       1 naming_controller.go:288] Starting NamingConditionController
I0320 21:47:53.323483       1 establishing_controller.go:73] Starting EstablishingController
I0320 21:47:53.323515       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0320 21:47:53.323526       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
E0320 21:47:53.342699       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.11, ResourceVersion: 0, AdditionalErrorMsg: 
I0320 21:47:53.424583       1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I0320 21:47:53.424672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0320 21:47:53.424680       1 cache.go:39] Caches are synced for autoregister controller
I0320 21:47:53.424693       1 shared_informer.go:204] Caches are synced for crd-autoregister 
I0320 21:47:53.424700       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0320 21:47:54.320476       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0320 21:47:54.320746       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0320 21:47:54.321135       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0320 21:47:54.345192       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0320 21:47:54.348603       1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0320 21:47:54.348614       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0320 21:47:54.633102       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0320 21:47:54.661241       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0320 21:47:54.709085       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.64.11]
I0320 21:47:54.709713       1 controller.go:606] quota admission added evaluator for: endpoints
I0320 21:47:55.526265       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0320 21:47:56.291641       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0320 21:47:56.302551       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0320 21:47:56.520554       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0320 21:48:04.783183       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0320 21:48:05.207641       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
E0320 22:01:25.808898       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0320 22:09:06.864784       1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [2445450da9e8] <==
I0320 21:48:04.182772       1 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0320 21:48:04.182791       1 controllermanager.go:533] Started "nodelifecycle"
I0320 21:48:04.182881       1 node_lifecycle_controller.go:554] Starting node controller
I0320 21:48:04.182887       1 shared_informer.go:197] Waiting for caches to sync for taint
I0320 21:48:04.302009       1 controllermanager.go:533] Started "daemonset"
I0320 21:48:04.302115       1 daemon_controller.go:255] Starting daemon sets controller
I0320 21:48:04.302142       1 shared_informer.go:197] Waiting for caches to sync for daemon sets
I0320 21:48:04.450890       1 controllermanager.go:533] Started "csrcleaner"
I0320 21:48:04.451071       1 cleaner.go:81] Starting CSR cleaner controller
I0320 21:48:04.702921       1 controllermanager.go:533] Started "tokencleaner"
W0320 21:48:04.703049       1 controllermanager.go:525] Skipping "ttl-after-finished"
I0320 21:48:04.703876       1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0320 21:48:04.704680       1 tokencleaner.go:117] Starting token cleaner controller
I0320 21:48:04.704825       1 shared_informer.go:197] Waiting for caches to sync for token_cleaner
I0320 21:48:04.704998       1 shared_informer.go:204] Caches are synced for token_cleaner 
I0320 21:48:04.748590       1 shared_informer.go:204] Caches are synced for ReplicationController 
I0320 21:48:04.750553       1 shared_informer.go:204] Caches are synced for service account 
I0320 21:48:04.753589       1 shared_informer.go:204] Caches are synced for endpoint 
I0320 21:48:04.753905       1 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I0320 21:48:04.754523       1 shared_informer.go:204] Caches are synced for PVC protection 
I0320 21:48:04.755897       1 shared_informer.go:204] Caches are synced for stateful set 
I0320 21:48:04.767435       1 shared_informer.go:204] Caches are synced for ReplicaSet 
I0320 21:48:04.778308       1 shared_informer.go:204] Caches are synced for deployment 
I0320 21:48:04.782799       1 shared_informer.go:204] Caches are synced for disruption 
I0320 21:48:04.785718       1 disruption.go:338] Sending events to api server.
I0320 21:48:04.787151       1 shared_informer.go:204] Caches are synced for namespace 
I0320 21:48:04.797243       1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ddf9fa26-765d-42d1-af0e-13c2abdc91e4", APIVersion:"apps/v1", ResourceVersion:"174", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0320 21:48:04.800030       1 shared_informer.go:204] Caches are synced for certificate-csrsigning 
I0320 21:48:04.804624       1 shared_informer.go:204] Caches are synced for bootstrap_signer 
I0320 21:48:04.805322       1 shared_informer.go:204] Caches are synced for job 
I0320 21:48:04.805406       1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I0320 21:48:04.825741       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"4609d4d0-2e46-4acd-8444-960dd1f63bf6", APIVersion:"apps/v1", ResourceVersion:"317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-mgf57
E0320 21:48:04.864916       1 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0320 21:48:04.866929       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0320 21:48:04.867469       1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"4609d4d0-2e46-4acd-8444-960dd1f63bf6", APIVersion:"apps/v1", ResourceVersion:"317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-l4wd7
E0320 21:48:04.885160       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0320 21:48:04.902412       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0320 21:48:04.953841       1 shared_informer.go:204] Caches are synced for PV protection 
W0320 21:48:05.106122       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
I0320 21:48:05.151238       1 shared_informer.go:204] Caches are synced for expand 
I0320 21:48:05.179156       1 shared_informer.go:204] Caches are synced for TTL 
I0320 21:48:05.183320       1 shared_informer.go:204] Caches are synced for taint 
I0320 21:48:05.183484       1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
W0320 21:48:05.183640       1 node_lifecycle_controller.go:1058] Missing timestamp for Node m01. Assuming now as a timestamp.
I0320 21:48:05.183755       1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0320 21:48:05.183949       1 taint_manager.go:186] Starting NoExecuteTaintManager
I0320 21:48:05.184679       1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"c44f99a4-dee9-4051-a5d7-b59fef6f1b1d", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
I0320 21:48:05.203222       1 shared_informer.go:204] Caches are synced for GC 
I0320 21:48:05.203333       1 shared_informer.go:204] Caches are synced for daemon sets 
I0320 21:48:05.203461       1 shared_informer.go:204] Caches are synced for persistent volume 
I0320 21:48:05.212128       1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"97019129-eda5-4b0e-9151-5698acfcaa32", APIVersion:"apps/v1", ResourceVersion:"181", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-gmwnm
I0320 21:48:05.227267       1 shared_informer.go:204] Caches are synced for attach detach 
I0320 21:48:05.259233       1 shared_informer.go:204] Caches are synced for resource quota 
I0320 21:48:05.301859       1 shared_informer.go:204] Caches are synced for HPA 
I0320 21:48:05.304234       1 shared_informer.go:204] Caches are synced for resource quota 
I0320 21:48:05.354783       1 shared_informer.go:204] Caches are synced for garbage collector 
I0320 21:48:05.354815       1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0320 21:48:05.652102       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0320 21:48:05.652386       1 shared_informer.go:204] Caches are synced for garbage collector 
I0320 21:48:10.184613       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.

==> kube-proxy [a660401e61f3] <==
W0320 21:48:05.868162       1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0320 21:48:05.874273       1 node.go:135] Successfully retrieved node IP: 192.168.64.11
I0320 21:48:05.874572       1 server_others.go:145] Using iptables Proxier.
W0320 21:48:05.874762       1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0320 21:48:05.875033       1 server.go:571] Version: v1.17.3
I0320 21:48:05.875349       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0320 21:48:05.875431       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0320 21:48:05.875809       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0320 21:48:05.879714       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0320 21:48:05.879881       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0320 21:48:05.880749       1 config.go:313] Starting service config controller
I0320 21:48:05.880840       1 shared_informer.go:197] Waiting for caches to sync for service config
I0320 21:48:05.880939       1 config.go:131] Starting endpoints config controller
I0320 21:48:05.881092       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0320 21:48:05.981860       1 shared_informer.go:204] Caches are synced for service config 
I0320 21:48:05.982965       1 shared_informer.go:204] Caches are synced for endpoints config 

==> kube-scheduler [2e10cf802997] <==
I0320 21:47:50.583118       1 serving.go:312] Generated self-signed cert in-memory
W0320 21:47:50.946658       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0320 21:47:50.946700       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0320 21:47:53.374940       1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0320 21:47:53.375194       1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0320 21:47:53.375247       1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0320 21:47:53.375335       1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0320 21:47:53.412210       1 authorization.go:47] Authorization is disabled
W0320 21:47:53.412393       1 authentication.go:92] Authentication is disabled
I0320 21:47:53.412494       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0320 21:47:53.414193       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0320 21:47:53.415605       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0320 21:47:53.415639       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0320 21:47:53.415657       1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0320 21:47:53.429494       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0320 21:47:53.429640       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0320 21:47:53.429867       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0320 21:47:53.430054       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0320 21:47:53.430211       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0320 21:47:53.431982       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0320 21:47:53.432601       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0320 21:47:53.432791       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0320 21:47:53.432965       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0320 21:47:53.434380       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0320 21:47:53.434478       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0320 21:47:53.434668       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0320 21:47:54.430552       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0320 21:47:54.435381       1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0320 21:47:54.438646       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0320 21:47:54.439667       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0320 21:47:54.440913       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0320 21:47:54.442323       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0320 21:47:54.443835       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0320 21:47:54.444957       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0320 21:47:54.447682       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0320 21:47:54.449682       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0320 21:47:54.450672       1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0320 21:47:54.452978       1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0320 21:47:55.516228       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0320 21:47:55.517244       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-scheduler...
I0320 21:47:55.528723       1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0320 21:47:59.217625       1 factory.go:494] pod is already present in the activeQ
E0320 21:48:04.859228       1 factory.go:494] pod is already present in the activeQ
E0320 21:48:04.881214       1 factory.go:494] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Fri 2020-03-20 21:47:23 UTC, end at Fri 2020-03-20 22:17:50 UTC. --
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.451229    3540 docker_service.go:255] Docker cri networking managed by kubernetes.io/no-op
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.458838    3540 docker_service.go:260] Docker Info: &{ID:5WF6:UPKL:RUS7:KHL3:QFXD:3SJE:EMTD:PZFZ:2KWW:RKI3:SGBS:24HH Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:76 SystemTime:2020-03-20T21:47:56.452183367Z LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:4.19.94 OperatingSystem:Buildroot 2019.02.9 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0005ced20 NCPU:2 MemTotal:4031479808 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=hyperkit] ExperimentalBuild:false ServerVersion:19.03.6 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:35bd7a5f69c13e1563af8a93431411cd9ecf5021 Expected:35bd7a5f69c13e1563af8a93431411cd9ecf5021} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[]}
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.458918    3540 docker_service.go:273] Setting cgroupDriver to systemd
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471581    3540 remote_runtime.go:59] parsed scheme: ""
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471595    3540 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471618    3540 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471623    3540 clientconn.go:577] ClientConn switching balancer to "pick_first"
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471661    3540 remote_image.go:50] parsed scheme: ""
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471667    3540 remote_image.go:50] scheme "" not registered, fallback to default scheme
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471673    3540 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0  <nil>}] <nil>}
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.471677    3540 clientconn.go:577] ClientConn switching balancer to "pick_first"
Mar 20 21:47:56 minikube kubelet[3540]: E0320 21:47:56.472161    3540 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Mar 20 21:47:56 minikube kubelet[3540]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.478847    3540 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.6, apiVersion: 1.40.0
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.487162    3540 server.go:1113] Started kubelet
Mar 20 21:47:56 minikube kubelet[3540]: E0320 21:47:56.488438    3540 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.490152    3540 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.491249    3540 server.go:144] Starting to listen on 0.0.0.0:10250
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.491954    3540 server.go:384] Adding debug handlers to kubelet server.
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.494703    3540 volume_manager.go:265] Starting Kubelet Volume Manager
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.495870    3540 desired_state_of_world_populator.go:138] Desired state populator starts to run
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.550331    3540 status_manager.go:157] Starting to sync pod status with apiserver
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.550502    3540 kubelet.go:1820] Starting kubelet main sync loop.
Mar 20 21:47:56 minikube kubelet[3540]: E0320 21:47:56.550574    3540 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.596491    3540 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.635828    3540 kubelet_node_status.go:70] Attempting to register node m01
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.643519    3540 kubelet_node_status.go:112] Node m01 was previously registered
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.643687    3540 kubelet_node_status.go:73] Successfully registered node m01
Mar 20 21:47:56 minikube kubelet[3540]: E0320 21:47:56.653127    3540 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.725905    3540 cpu_manager.go:173] [cpumanager] starting with none policy
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.726076    3540 cpu_manager.go:174] [cpumanager] reconciling every 10s
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.726120    3540 policy_none.go:43] [cpumanager] none policy: Start
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.727751    3540 plugin_manager.go:114] Starting Kubelet Plugin Manager
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900389    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-usr-share-ca-certificates") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900437    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/b5032b175b132285b71376d3543957ce-etcd-certs") pod "etcd-m01" (UID: "b5032b175b132285b71376d3543957ce")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900453    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/b5032b175b132285b71376d3543957ce-etcd-data") pod "etcd-m01" (UID: "b5032b175b132285b71376d3543957ce")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900470    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fd3a612c2f3f021dc3e874aaa46310a1-ca-certs") pod "kube-apiserver-m01" (UID: "fd3a612c2f3f021dc3e874aaa46310a1")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900486    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-ca-certs") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900499    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-kubeconfig") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900512    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e3025acd90e7465e66fa19c71b916366-kubeconfig") pod "kube-scheduler-m01" (UID: "e3025acd90e7465e66fa19c71b916366")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900596    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/fd3a612c2f3f021dc3e874aaa46310a1-k8s-certs") pod "kube-apiserver-m01" (UID: "fd3a612c2f3f021dc3e874aaa46310a1")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900622    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/fd3a612c2f3f021dc3e874aaa46310a1-usr-share-ca-certificates") pod "kube-apiserver-m01" (UID: "fd3a612c2f3f021dc3e874aaa46310a1")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900645    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-flexvolume-dir") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900675    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/67b7e5352c5d7693f9bfac40cd9df88f-k8s-certs") pod "kube-controller-manager-m01" (UID: "67b7e5352c5d7693f9bfac40cd9df88f")
Mar 20 21:47:56 minikube kubelet[3540]: I0320 21:47:56.900695    3540 reconciler.go:156] Reconciler: start to sync state
Mar 20 21:48:05 minikube kubelet[3540]: I0320 21:48:05.386290    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/becea2c7-cc17-4e33-9cde-acdd73a28f64-kube-proxy") pod "kube-proxy-gmwnm" (UID: "becea2c7-cc17-4e33-9cde-acdd73a28f64")
Mar 20 21:48:05 minikube kubelet[3540]: I0320 21:48:05.386320    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/becea2c7-cc17-4e33-9cde-acdd73a28f64-xtables-lock") pod "kube-proxy-gmwnm" (UID: "becea2c7-cc17-4e33-9cde-acdd73a28f64")
Mar 20 21:48:05 minikube kubelet[3540]: I0320 21:48:05.386334    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/becea2c7-cc17-4e33-9cde-acdd73a28f64-lib-modules") pod "kube-proxy-gmwnm" (UID: "becea2c7-cc17-4e33-9cde-acdd73a28f64")
Mar 20 21:48:05 minikube kubelet[3540]: I0320 21:48:05.386347    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-njdvc" (UniqueName: "kubernetes.io/secret/becea2c7-cc17-4e33-9cde-acdd73a28f64-kube-proxy-token-njdvc") pod "kube-proxy-gmwnm" (UID: "becea2c7-cc17-4e33-9cde-acdd73a28f64")
Mar 20 21:48:05 minikube kubelet[3540]: W0320 21:48:05.672722    3540 pod_container_deletor.go:75] Container "1f1de65e350db5b58a4460de2c2a2699835798790fc627d59cd3d09327493ce9" not found in pod's containers
Mar 20 21:48:07 minikube kubelet[3540]: I0320 21:48:07.915911    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-6sqxt" (UniqueName: "kubernetes.io/secret/319c35be-72d8-44c8-8776-b622ebbc8dc4-storage-provisioner-token-6sqxt") pod "storage-provisioner" (UID: "319c35be-72d8-44c8-8776-b622ebbc8dc4")
Mar 20 21:48:07 minikube kubelet[3540]: I0320 21:48:07.915978    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/319c35be-72d8-44c8-8776-b622ebbc8dc4-tmp") pod "storage-provisioner" (UID: "319c35be-72d8-44c8-8776-b622ebbc8dc4")
Mar 20 21:48:10 minikube kubelet[3540]: I0320 21:48:10.543215    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qzzr5" (UniqueName: "kubernetes.io/secret/669468d9-5a17-438d-809f-54ef92614b91-coredns-token-qzzr5") pod "coredns-6955765f44-mgf57" (UID: "669468d9-5a17-438d-809f-54ef92614b91")
Mar 20 21:48:10 minikube kubelet[3540]: I0320 21:48:10.543321    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/86049321-0eff-401f-b982-2eeb60a87997-config-volume") pod "coredns-6955765f44-l4wd7" (UID: "86049321-0eff-401f-b982-2eeb60a87997")
Mar 20 21:48:10 minikube kubelet[3540]: I0320 21:48:10.543360    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qzzr5" (UniqueName: "kubernetes.io/secret/86049321-0eff-401f-b982-2eeb60a87997-coredns-token-qzzr5") pod "coredns-6955765f44-l4wd7" (UID: "86049321-0eff-401f-b982-2eeb60a87997")
Mar 20 21:48:10 minikube kubelet[3540]: I0320 21:48:10.543395    3540 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/669468d9-5a17-438d-809f-54ef92614b91-config-volume") pod "coredns-6955765f44-mgf57" (UID: "669468d9-5a17-438d-809f-54ef92614b91")
Mar 20 21:48:11 minikube kubelet[3540]: W0320 21:48:11.101230    3540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-l4wd7 through plugin: invalid network status for
Mar 20 21:48:11 minikube kubelet[3540]: W0320 21:48:11.149232    3540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-mgf57 through plugin: invalid network status for
Mar 20 21:48:11 minikube kubelet[3540]: W0320 21:48:11.718946    3540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-l4wd7 through plugin: invalid network status for
Mar 20 21:48:11 minikube kubelet[3540]: W0320 21:48:11.736254    3540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-mgf57 through plugin: invalid network status for

==> storage-provisioner [12838fde2ad1] <==

The operating system version: macOS 10.15.3

tstromberg commented 4 years ago

As best I can tell, minikube and Kubernetes are running just fine, though the Docker errors are interesting. I'm having a hard time following how to reproduce the error, though.

> If I attempt to run an application (cloud run) that starts a gcloud managed version of minikube (v1.7.3), while a local version of minikube (v1.8.2) is running and set as the current context, the sample fails and corrupts the state of docker.

Are you seeing something else other than docker errors?

My best guess is that something is caching the Docker IP and port returned by minikube, while some external event changes them. If you see the Docker error again, can you share the output of:
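To make that hypothesis concrete, here is a minimal sketch of extracting the endpoint a shell has cached from `minikube docker-env` output, so it can be compared against the port the running instance actually exposes. The sample output below is illustrative, not taken from this report:

```shell
# Sample of what `minikube docker-env` prints (illustrative values):
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://127.0.0.1:32769"
export DOCKER_CERT_PATH="/Users/me/.minikube/certs"'

# Pull out the DOCKER_HOST value; if this differs from the port a
# freshly started minikube reports, the client is holding a stale
# endpoint and requests to it will fail with EOF.
cached_host=$(printf '%s\n' "$env_output" | sed -n 's/^export DOCKER_HOST="\(.*\)"$/\1/p')
echo "$cached_host"
```

Re-running `eval "$(minikube -p <profile> docker-env)"` after the profile restarts refreshes the cached value.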

Thanks for the report!

peterlin741 commented 4 years ago

Will retest when the new version of gcloud is released (pointing at the latest minikube).

tstromberg commented 4 years ago

Have you bumped into this again?

peterlin741 commented 4 years ago

I haven't seen this exact issue recently (managed version on v1.8.2), although whenever minikube gets stuck, restarting docker and running minikube delete fixes it.

Our team will be doing further testing this week, so I'll comment here if the issue shows up again when using the latest version of minikube.
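If the failure does reappear, a quick way to distinguish a dead endpoint from a slow one is to probe the Docker API port directly. This is a hypothetical check, not something minikube provides; port 32769 is the one from the EOF errors above and will differ per machine:

```shell
# Probe the Docker API endpoint the failing build was using. curl
# exit code 52 ("empty reply from server") is the same symptom as
# the EOF errors in the report; exit code 7 means nothing is
# listening on the port at all.
endpoint="https://127.0.0.1:32769"
curl -ks --max-time 2 "$endpoint/v1.24/version" >/dev/null
case $? in
  0)  echo "endpoint responded" ;;
  52) echo "empty reply (EOF) from endpoint" ;;
  *)  echo "endpoint unreachable" ;;
esac
```

An "empty reply" result while the container is still running would point at a wedged daemon socket rather than a stale port.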

tstromberg commented 4 years ago

@peterlin741 - "whenever minikube gets stuck" sounds bad. Do you mind opening an issue about that? It shouldn't happen regularly at all. I would love to get --alsologtostderr output from a case where it occurs.

We've talked about offering a flag to automatically restart stuck Docker daemons, but it seems a bit invasive to do by default.

peterlin741 commented 4 years ago

> @peterlin741 - "whenever minikube gets stuck" sounds bad. Do you mind opening an issue about that? It shouldn't happen regularly at all. I would love to get --alsologtostderr output from a case where it occurs.
>
> We've talked about offering a flag to automatically restart stuck Docker daemons, but it seems a bit invasive to do by default.

Sure, I can open an issue if I see it again and include the logs.