kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

podman: provision: open id_rsa: no such file or directory (Fedora 31) #7877

Closed: mazzystr closed this issue 4 years ago

mazzystr commented 4 years ago

Hello, I'd like some assistance with running minikube with --driver=podman/none. These features are particularly interesting to us as community managers of KubeVirt: they would let us easily stand up Kubernetes clusters with KubeVirt and create more realistic demos. We have been struggling with nested-virtualization limitations.

Steps to reproduce the issue:

  1. minikube start --container-runtime=cri-o --network-plugin=cni --enable-default-cni --driver=podman --alsologtostderr

Begin logs (first a failed none driver attempt, then the podman attempt shown by cat log)....

# minikube start --container-runtime=cri-o --network-plugin=cni --enable-default-cni --driver=none --alsologtostderr
I0423 15:31:06.946615   46416 notify.go:125] Checking for updates...
I0423 15:31:07.194223   46416 start.go:262] hostinfo: {"hostname":"cube0","uptime":101328,"bootTime":1587579739,"procs":201,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"31","kernelVersion":"5.5.17-200.fc31.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"41fabaa5-f442-11e6-9c43-bc00002a0000"}
I0423 15:31:07.195294   46416 start.go:272] virtualization: kvm host
😄  minikube v1.9.2 on Fedora 31
I0423 15:31:07.195504   46416 driver.go:245] Setting default libvirt URI to qemu:///system
✨  Using the none driver based on user configuration
I0423 15:31:07.195837   46416 start.go:310] selected driver: none
I0423 15:31:07.195863   46416 start.go:656] validating driver "none" against <nil>
I0423 15:31:07.195888   46416 start.go:662] status for none: {Installed:false Healthy:false Error:exec: "docker": executable file not found in $PATH Fix:Install docker Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/none/}

❗  'none' driver reported an issue: exec: "docker": executable file not found in $PATH
💡  Suggestion: Install docker
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/none/

💣  none does not appear to be installed
[root@cube0 ~]# cat log
I0423 15:11:42.834119   45663 notify.go:125] Checking for updates...
I0423 15:11:42.987857   45663 start.go:262] hostinfo: {"hostname":"cube0","uptime":100163,"bootTime":1587579739,"procs":218,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"31","kernelVersion":"5.5.17-200.fc31.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"41fabaa5-f442-11e6-9c43-bc00002a0000"}
I0423 15:11:42.988908   45663 start.go:272] virtualization: kvm host
* minikube v1.9.2 on Fedora 31
I0423 15:11:42.990711   45663 driver.go:245] Setting default libvirt URI to qemu:///system
* Using the podman (experimental) driver based on existing profile
I0423 15:11:43.114732   45663 start.go:310] selected driver: podman
I0423 15:11:43.114738   45663 start.go:656] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0423 15:11:43.114807   45663 start.go:662] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
W0423 15:11:43.194260   45663 start.go:993] Unable to query memory limits: get podman system info: exit status 125
I0423 15:11:43.194518   45663 start.go:1004] Using suggested 2200MB memory alloc based on sys=-1MB, container=-1MB
I0423 15:11:43.194613   45663 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node m01 in cluster minikube
* Pulling base image ...
I0423 15:11:43.194703   45663 cache.go:104] Beginning downloading kic artifacts
I0423 15:11:43.194712   45663 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime cri-o
I0423 15:11:43.194721   45663 preload.go:90] Container runtime isn't docker, skipping preload
I0423 15:11:43.194756   45663 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0423 15:11:43.194775   45663 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0423 15:11:43.194810   45663 profile.go:138] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0423 15:11:43.194911   45663 cache.go:92] acquiring lock: {Name:mk168fab812ead9a0f93a7ad5f3036835fc98487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.194936   45663 cache.go:92] acquiring lock: {Name:mk6c329bd4433c492bb9d263ba52abe0aea26cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195049   45663 cache.go:92] acquiring lock: {Name:mka633424a601eb932313bb30ba72be3580830e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195075   45663 cache.go:100] /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 exists
I0423 15:11:43.195090   45663 cache.go:81] cache image "kubernetesui/metrics-scraper:v1.0.2" -> "/root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2" took 188.896µs
I0423 15:11:43.195102   45663 cache.go:66] save to tar file kubernetesui/metrics-scraper:v1.0.2 -> /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 succeeded
I0423 15:11:43.195106   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
I0423 15:11:43.195255   45663 cache.go:81] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 127.274µs
I0423 15:11:43.195278   45663 cache.go:92] acquiring lock: {Name:mk9476e32141ededef90a8e7a94bff4bfaae8f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195293   45663 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
I0423 15:11:43.195313   45663 cache.go:92] acquiring lock: {Name:mk1c7eedd589b573c53fbb9d40315048d2adbe66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195385   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
I0423 15:11:43.195406   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
I0423 15:11:43.195398   45663 cache.go:81] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 134.29µs
I0423 15:11:43.195421   45663 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
I0423 15:11:43.195424   45663 cache.go:81] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 499.484µs
I0423 15:11:43.195433   45663 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
I0423 15:11:43.195435   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
I0423 15:11:43.195433   45663 cache.go:92] acquiring lock: {Name:mkdf267729a9e25d7bf5de5666d3628a22a503b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195449   45663 cache.go:81] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 339.347µs
I0423 15:11:43.195456   45663 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0423 15:11:43.195446   45663 cache.go:92] acquiring lock: {Name:mke662c7b488e76a9a92a462a5b2771572d13f6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195470   45663 cache.go:92] acquiring lock: {Name:mkc807c44b369e90343e9488225483f5aa6c2012 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195498   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
I0423 15:11:43.195509   45663 cache.go:81] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 81.394µs
I0423 15:11:43.195515   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
I0423 15:11:43.195516   45663 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
I0423 15:11:43.195527   45663 cache.go:81] cache image "k8s.gcr.io/coredns:1.6.7" -> "/root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 87.519µs
I0423 15:11:43.195532   45663 cache.go:100] /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I0423 15:11:43.195534   45663 cache.go:66] save to tar file k8s.gcr.io/coredns:1.6.7 -> /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
I0423 15:11:43.195531   45663 cache.go:92] acquiring lock: {Name:mk29fd902d86257218148d97897c3b5ba6ce3f62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195545   45663 cache.go:81] cache image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" -> "/root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1" took 80.426µs
I0423 15:11:43.195558   45663 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I0423 15:11:43.195548   45663 cache.go:92] acquiring lock: {Name:mk357d5208d2273c11ae55464def14a90c05f5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0423 15:11:43.195588   45663 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0423 15:11:43.195598   45663 cache.go:81] cache image "k8s.gcr.io/pause:3.2" -> "/root/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 71.842µs
I0423 15:11:43.195610   45663 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /root/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0423 15:11:43.195616   45663 cache.go:100] /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 exists
I0423 15:11:43.195637   45663 cache.go:81] cache image "kubernetesui/dashboard:v2.0.0-rc6" -> "/root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6" took 92.961µs
I0423 15:11:43.195644   45663 cache.go:66] save to tar file kubernetesui/dashboard:v2.0.0-rc6 -> /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 succeeded
I0423 15:11:43.195659   45663 cache.go:73] Successfully saved all images to host disk.
E0423 15:11:43.466465   45663 cache.go:114] Error downloading kic artifacts:  error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0423 15:11:43.466910   45663 start.go:260] acquiring machines lock for minikube: {Name:mka00e65579c2b557a802898fd1cf03ec4ab30a1 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0423 15:11:43.467105   45663 start.go:264] acquired machines lock for "minikube" in 149.948µs
I0423 15:11:43.467149   45663 start.go:90] Skipping create...Using existing machine configuration
I0423 15:11:43.467161   45663 fix.go:53] fixHost starting: m01
I0423 15:11:43.467665   45663 oci.go:250] executing with [podman inspect -f {{.State.Status}} minikube] timeout: 19s
I0423 15:11:43.559205   45663 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0423 15:11:43.559231   45663 fix.go:130] unexpected machine state, will restart: <nil>
* Updating the running podman "minikube" container ...
I0423 15:11:43.559300   45663 machine.go:86] provisioning docker machine ...
I0423 15:11:43.559313   45663 ubuntu.go:166] provisioning hostname "minikube"
I0423 15:11:43.622640   45663 main.go:110] libmachine: Using SSH client type: native
I0423 15:11:43.622675   45663 main.go:110] libmachine: <nil>
I0423 15:11:43.622682   45663 machine.go:89] provisioned docker machine in 63.377464ms
I0423 15:11:43.622697   45663 fix.go:55] fixHost completed within 155.5398ms
I0423 15:11:43.622702   45663 start.go:77] releasing machines lock for "minikube", held for 155.575813ms
! StartHost failed, but will try again: provision: Error getting config for native Go SSH: open /root/.minikube/machines/minikube/id_rsa: no such file or directory
I0423 15:11:48.623782   45663 start.go:260] acquiring machines lock for minikube: {Name:mka00e65579c2b557a802898fd1cf03ec4ab30a1 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0423 15:11:48.624083   45663 start.go:264] acquired machines lock for "minikube" in 222.425µs
I0423 15:11:48.624131   45663 start.go:90] Skipping create...Using existing machine configuration
I0423 15:11:48.624144   45663 fix.go:53] fixHost starting: m01
I0423 15:11:48.624693   45663 oci.go:250] executing with [podman inspect -f {{.State.Status}} minikube] timeout: 19s
I0423 15:11:48.691685   45663 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0423 15:11:48.691707   45663 fix.go:130] unexpected machine state, will restart: <nil>
* Updating the running podman "minikube" container ...
I0423 15:11:48.691763   45663 machine.go:86] provisioning docker machine ...
I0423 15:11:48.691777   45663 ubuntu.go:166] provisioning hostname "minikube"
I0423 15:11:48.755633   45663 main.go:110] libmachine: Using SSH client type: native
I0423 15:11:48.755664   45663 main.go:110] libmachine: <nil>
I0423 15:11:48.755671   45663 machine.go:89] provisioned docker machine in 63.902359ms
I0423 15:11:48.755686   45663 fix.go:55] fixHost completed within 131.546365ms
I0423 15:11:48.755696   45663 start.go:77] releasing machines lock for "minikube", held for 131.5904ms
W0423 15:11:48.755783   45663 exit.go:101] Failed to start podman container. "minikube start" may fix it.: provision: Error getting config for native Go SSH: open /root/.minikube/machines/minikube/id_rsa: no such file or directory
*
X Failed to start podman container. "minikube start" may fix it.: provision: Error getting config for native Go SSH: open /root/.minikube/machines/minikube/id_rsa: no such file or directory
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
#
tstromberg commented 4 years ago

Hey @mazzystr - We'd love to help with your use case! A few things to clarify:

minikube seems a little confused about the podman host here, but this may be a red herring:

I0423 15:11:43.467149   45663 start.go:90] Skipping create...Using existing machine configuration
I0423 15:11:43.467161   45663 fix.go:53] fixHost starting: m01
I0423 15:11:43.467665   45663 oci.go:250] executing with [podman inspect -f {{.State.Status}} minikube] timeout: 19s
I0423 15:11:43.559205   45663 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0423 15:11:43.559231   45663 fix.go:130] unexpected machine state, will restart: <nil>
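If you want to see what minikube is holding onto there, inspecting the container and the saved profile should show the stale state (paths taken from your logs; minikube delete wipes the profile so the next start provisions from scratch):

# podman inspect -f '{{.State.Status}}' minikube
# cat /root/.minikube/profiles/minikube/config.json
# minikube delete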

This is the bad news:

StartHost failed, but will try again: provision: Error getting config for native Go SSH: open /root/.minikube/machines/minikube/id_rsa: no such file or directory

Does /root/.minikube/machines/minikube/id_rsa exist on this system?
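A quick way to check (assuming the root-owned minikube home shown in your logs):

# ls -l /root/.minikube/machines/minikube/id_rsa /root/.minikube/machines/minikube/id_rsa.pub

If those files are missing, the native Go SSH provisioner fails with exactly the "open ... no such file or directory" error above.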

The error message is being raised here:

https://github.com/machine-drivers/machine/blob/41eb826190d8c1e80b1b18e252f14c4996ac9a52/libmachine/ssh/client.go#L121

by way of:

https://github.com/kubernetes/minikube/blob/d845c5de3ebc94c9ec0a6f3d7837bb73bd0d83e0/pkg/provision/ubuntu.go#L167
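If the key does exist, a manual SSH test along these lines exercises the same path outside of minikube (the port is a placeholder; minikube maps the container's SSH to a random localhost port, printed in the debug logs):

# ssh -i /root/.minikube/machines/minikube/id_rsa -p <mapped-port> docker@127.0.0.1 true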

I honestly can't make heads or tails of it, so I'm going to try to fire up a Fedora 31 image of my own to see how it looks. Thank you for the bug report!

mazzystr commented 4 years ago

Oh for God's sake... I've been reimaging after every install attempt and forgetting to recreate an SSH key. Doing so gets me further along, but it still fails (log below).
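Concretely, on the reimaged host that meant something like this (exact invocation from memory; adjust paths to taste):

# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa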

# minikube start --container-runtime=cri-o --network-plugin=cni --enable-default-cni --driver=podman --alsologtostderr --v=5
I0424 08:50:16.420649   58924 notify.go:125] Checking for updates...
I0424 08:50:16.749939   58924 start.go:262] hostinfo: {"hostname":"cube0","uptime":163677,"bootTime":1587579739,"procs":211,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"31","kernelVersion":"5.5.17-200.fc31.x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"41fabaa5-f442-11e6-9c43-bc00002a0000"}
I0424 08:50:16.750949   58924 start.go:272] virtualization: kvm host
😄  minikube v1.9.2 on Fedora 31
I0424 08:50:16.751159   58924 driver.go:245] Setting default libvirt URI to qemu:///system
✨  Using the podman (experimental) driver based on user configuration
I0424 08:50:16.890809   58924 start.go:310] selected driver: podman
I0424 08:50:16.890816   58924 start.go:656] validating driver "podman" against <nil>
I0424 08:50:16.890824   58924 start.go:662] status for podman: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
W0424 08:50:16.966156   58924 start.go:993] Unable to query memory limits: get podman system info: exit status 125
I0424 08:50:16.966395   58924 start.go:1004] Using suggested 2200MB memory alloc based on sys=-1MB, container=-1MB
I0424 08:50:16.966484   58924 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
👍  Starting control plane node m01 in cluster minikube
🚜  Pulling base image ...
I0424 08:50:16.966575   58924 cache.go:104] Beginning downloading kic artifacts
I0424 08:50:16.966587   58924 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime cri-o
I0424 08:50:16.966595   58924 preload.go:90] Container runtime isn't docker, skipping preload
I0424 08:50:16.966648   58924 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0424 08:50:16.966668   58924 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0424 08:50:16.966792   58924 profile.go:138] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0424 08:50:16.966877   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/config.json: {Name:mk270d1b5db5965f2dc9e9e25770a63417031943 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:16.967023   58924 cache.go:92] acquiring lock: {Name:mk168fab812ead9a0f93a7ad5f3036835fc98487 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967190   58924 cache.go:100] /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 exists
I0424 08:50:16.967189   58924 cache.go:92] acquiring lock: {Name:mke662c7b488e76a9a92a462a5b2771572d13f6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967210   58924 cache.go:81] cache image "kubernetesui/metrics-scraper:v1.0.2" -> "/root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2" took 193.672µs
I0424 08:50:16.967220   58924 cache.go:66] save to tar file kubernetesui/metrics-scraper:v1.0.2 -> /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 succeeded
I0424 08:50:16.967233   58924 cache.go:92] acquiring lock: {Name:mk1c7eedd589b573c53fbb9d40315048d2adbe66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967267   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 exists
I0424 08:50:16.967279   58924 cache.go:81] cache image "k8s.gcr.io/coredns:1.6.7" -> "/root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7" took 98.249µs
I0424 08:50:16.967288   58924 cache.go:66] save to tar file k8s.gcr.io/coredns:1.6.7 -> /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 succeeded
I0424 08:50:16.967302   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 exists
I0424 08:50:16.967300   58924 cache.go:92] acquiring lock: {Name:mka633424a601eb932313bb30ba72be3580830e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967314   58924 cache.go:81] cache image "k8s.gcr.io/etcd:3.4.3-0" -> "/root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" took 86.721µs
I0424 08:50:16.967322   58924 cache.go:66] save to tar file k8s.gcr.io/etcd:3.4.3-0 -> /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 succeeded
I0424 08:50:16.967332   58924 cache.go:92] acquiring lock: {Name:mk29fd902d86257218148d97897c3b5ba6ce3f62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967365   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 exists
I0424 08:50:16.967376   58924 cache.go:81] cache image "k8s.gcr.io/kube-proxy:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0" took 81.406µs
I0424 08:50:16.967384   58924 cache.go:66] save to tar file k8s.gcr.io/kube-proxy:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 succeeded
I0424 08:50:16.967394   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
I0424 08:50:16.967400   58924 cache.go:92] acquiring lock: {Name:mk6c329bd4433c492bb9d263ba52abe0aea26cca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967412   58924 cache.go:81] cache image "k8s.gcr.io/pause:3.2" -> "/root/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 77.367µs
I0424 08:50:16.967420   58924 cache.go:66] save to tar file k8s.gcr.io/pause:3.2 -> /root/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
I0424 08:50:16.967431   58924 cache.go:92] acquiring lock: {Name:mkc807c44b369e90343e9488225483f5aa6c2012 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967464   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 exists
I0424 08:50:16.967475   58924 cache.go:81] cache image "k8s.gcr.io/kube-scheduler:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0" took 83.446µs
I0424 08:50:16.967483   58924 cache.go:66] save to tar file k8s.gcr.io/kube-scheduler:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 succeeded
I0424 08:50:16.967493   58924 cache.go:100] /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 exists
I0424 08:50:16.967493   58924 cache.go:92] acquiring lock: {Name:mk9476e32141ededef90a8e7a94bff4bfaae8f32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967505   58924 cache.go:81] cache image "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" -> "/root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1" took 78.091µs
I0424 08:50:16.967513   58924 cache.go:66] save to tar file gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 succeeded
I0424 08:50:16.967525   58924 cache.go:92] acquiring lock: {Name:mk357d5208d2273c11ae55464def14a90c05f5e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967569   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 exists
I0424 08:50:16.967579   58924 cache.go:81] cache image "k8s.gcr.io/kube-controller-manager:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0" took 90.573µs
I0424 08:50:16.967586   58924 cache.go:100] /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 exists
I0424 08:50:16.967598   58924 cache.go:81] cache image "kubernetesui/dashboard:v2.0.0-rc6" -> "/root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6" took 76.794µs
I0424 08:50:16.967607   58924 cache.go:66] save to tar file kubernetesui/dashboard:v2.0.0-rc6 -> /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 succeeded
I0424 08:50:16.967587   58924 cache.go:66] save to tar file k8s.gcr.io/kube-controller-manager:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 succeeded
I0424 08:50:16.967595   58924 cache.go:92] acquiring lock: {Name:mkdf267729a9e25d7bf5de5666d3628a22a503b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0424 08:50:16.967679   58924 cache.go:100] /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 exists
I0424 08:50:16.967692   58924 cache.go:81] cache image "k8s.gcr.io/kube-apiserver:v1.18.0" -> "/root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0" took 100.185µs
I0424 08:50:16.967704   58924 cache.go:66] save to tar file k8s.gcr.io/kube-apiserver:v1.18.0 -> /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 succeeded
I0424 08:50:16.967713   58924 cache.go:73] Successfully saved all images to host disk.
E0424 08:50:17.457163   58924 cache.go:114] Error downloading kic artifacts:  error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:17.457526   58924 start.go:260] acquiring machines lock for minikube: {Name:mka00e65579c2b557a802898fd1cf03ec4ab30a1 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0424 08:50:17.457804   58924 start.go:264] acquired machines lock for "minikube" in 227.316µs
I0424 08:50:17.457862   58924 start.go:86] Provisioning new machine with config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]} {Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}
I0424 08:50:17.457983   58924 start.go:107] createHost starting for "m01" (driver="podman")
I0424 08:50:17.540274   58924 start.go:143] libmachine.API.Create for "minikube" (driver="podman")
I0424 08:50:17.540305   58924 client.go:169] LocalClient.Create starting
I0424 08:50:17.540336   58924 main.go:110] libmachine: Reading certificate data from /root/.minikube/certs/ca.pem
I0424 08:50:17.540367   58924 main.go:110] libmachine: Decoding PEM data...
I0424 08:50:17.540385   58924 main.go:110] libmachine: Parsing certificate...
I0424 08:50:17.540478   58924 main.go:110] libmachine: Reading certificate data from /root/.minikube/certs/cert.pem
I0424 08:50:17.540499   58924 main.go:110] libmachine: Decoding PEM data...
I0424 08:50:17.540511   58924 main.go:110] libmachine: Parsing certificate...
I0424 08:50:17.540830   58924 oci.go:250] executing with [podman ps -a --format {{.Names}}] timeout: 30s
I0424 08:50:18.378786   58924 oci.go:250] executing with [podman inspect minikube --format={{.State.Status}}] timeout: 19s
I0424 08:50:18.473808   58924 oci.go:160] the created container "minikube" has a running status.
I0424 08:50:18.473833   58924 kic.go:142] Creating ssh key for kic: /root/.minikube/machines/minikube/id_rsa...
I0424 08:50:18.820388   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0424 08:50:19.206814   58924 kic_runner.go:91] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0424 08:50:19.504330   58924 preload.go:81] Checking if preload exists for k8s version  and runtime
I0424 08:50:19.504398   58924 preload.go:90] Container runtime isn't docker, skipping preload
I0424 08:50:19.504446   58924 oci.go:250] executing with [podman inspect -f {{.State.Status}} minikube] timeout: 19s
I0424 08:50:19.618653   58924 machine.go:86] provisioning docker machine ...
I0424 08:50:19.618710   58924 ubuntu.go:166] provisioning hostname "minikube"
I0424 08:50:19.717597   58924 main.go:110] libmachine: Using SSH client type: native
I0424 08:50:19.717813   58924 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 44027 <nil> <nil>}
I0424 08:50:19.717833   58924 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0424 08:50:19.849490   58924 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0424 08:50:19.938436   58924 main.go:110] libmachine: Using SSH client type: native
I0424 08:50:19.938590   58924 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 44027 <nil> <nil>}
I0424 08:50:19.938611   58924 main.go:110] libmachine: About to run SSH command:

        if ! grep -xq '.*\sminikube' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
            else
                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
            fi
        fi
I0424 08:50:20.068447   58924 main.go:110] libmachine: SSH cmd err, output: <nil>:
I0424 08:50:20.068517   58924 ubuntu.go:172] set auth options {CertDir:/root/.minikube CaCertPath:/root/.minikube/certs/ca.pem CaPrivateKeyPath:/root/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/root/.minikube/machines/server.pem ServerKeyPath:/root/.minikube/machines/server-key.pem ClientKeyPath:/root/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/root/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/root/.minikube}
I0424 08:50:20.068587   58924 ubuntu.go:174] setting up certificates
I0424 08:50:20.068678   58924 provision.go:83] configureAuth start
I0424 08:50:20.157010   58924 provision.go:132] copyHostCerts
I0424 08:50:20.157039   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/certs/ca.pem -> /root/.minikube/ca.pem
I0424 08:50:20.157212   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/certs/cert.pem -> /root/.minikube/cert.pem
I0424 08:50:20.157315   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/certs/key.pem -> /root/.minikube/key.pem
I0424 08:50:20.157404   58924 provision.go:106] generating server cert: /root/.minikube/machines/server.pem ca-key=/root/.minikube/certs/ca.pem private-key=/root/.minikube/certs/ca-key.pem org=root.minikube san=[10.88.0.6 localhost 127.0.0.1]
I0424 08:50:20.307269   58924 provision.go:160] copyRemoteCerts
I0424 08:50:20.398422   58924 ssh_runner.go:101] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0424 08:50:20.458922   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0424 08:50:20.465931   58924 ssh_runner.go:155] Checked if /etc/docker/ca.pem exists, but got error: Process exited with status 1
I0424 08:50:20.466221   58924 ssh_runner.go:174] Transferring 1029 bytes to /etc/docker/ca.pem
I0424 08:50:20.467367   58924 ssh_runner.go:193] ca.pem: copied 1029 bytes
I0424 08:50:20.511857   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/machines/server.pem -> /etc/docker/server.pem
I0424 08:50:20.518798   58924 ssh_runner.go:155] Checked if /etc/docker/server.pem exists, but got error: Process exited with status 1
I0424 08:50:20.519114   58924 ssh_runner.go:174] Transferring 1115 bytes to /etc/docker/server.pem
I0424 08:50:20.520184   58924 ssh_runner.go:193] server.pem: copied 1115 bytes
I0424 08:50:20.555506   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0424 08:50:20.559230   58924 ssh_runner.go:155] Checked if /etc/docker/server-key.pem exists, but got error: Process exited with status 1
I0424 08:50:20.559487   58924 ssh_runner.go:174] Transferring 1679 bytes to /etc/docker/server-key.pem
I0424 08:50:20.560052   58924 ssh_runner.go:193] server-key.pem: copied 1679 bytes
I0424 08:50:20.583417   58924 provision.go:86] configureAuth took 514.714394ms
I0424 08:50:20.583452   58924 ubuntu.go:190] setting minikube options for container-runtime
I0424 08:50:20.656635   58924 main.go:110] libmachine: Using SSH client type: native
I0424 08:50:20.656770   58924 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7bf5d0] 0x7bf5a0 <nil>  [] 0s} 127.0.0.1 44027 <nil> <nil>}
I0424 08:50:20.656789   58924 main.go:110] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
I0424 08:50:20.786812   58924 main.go:110] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

I0424 08:50:20.786901   58924 machine.go:89] provisioned docker machine in 1.168180091s
I0424 08:50:20.786930   58924 client.go:172] LocalClient.Create took 3.246615587s
I0424 08:50:20.786966   58924 start.go:148] libmachine.API.Create for "minikube" took 3.246688022s
I0424 08:50:20.786983   58924 start.go:189] post-start starting for "minikube" (driver="podman")
I0424 08:50:20.786996   58924 start.go:199] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0424 08:50:20.787022   58924 start.go:223] determining appropriate runner for "podman"
I0424 08:50:20.787039   58924 start.go:234] Returning KICRunner for "podman" driver
I0424 08:50:20.787159   58924 kic_runner.go:91] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0424 08:50:21.079104   58924 filesync.go:118] Scanning /root/.minikube/addons for local assets ...
I0424 08:50:21.079250   58924 filesync.go:118] Scanning /root/.minikube/files for local assets ...
I0424 08:50:21.079307   58924 start.go:192] post-start completed in 292.310047ms
I0424 08:50:21.079928   58924 start.go:110] createHost completed in 3.621921544s
I0424 08:50:21.079958   58924 start.go:77] releasing machines lock for "minikube", held for 3.62211763s
I0424 08:50:21.219841   58924 profile.go:138] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0424 08:50:21.220050   58924 kic_runner.go:91] Run: curl -sS -m 2 https://k8s.gcr.io/
I0424 08:50:21.220294   58924 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0424 08:50:21.610766   58924 kic_runner.go:91] Run: sudo systemctl stop -f containerd
I0424 08:50:21.925061   58924 kic_runner.go:91] Run: sudo systemctl is-active --quiet service containerd
I0424 08:50:22.234472   58924 kic_runner.go:91] Run: sudo systemctl is-active --quiet service docker
I0424 08:50:22.559945   58924 kic_runner.go:91] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0424 08:50:22.879688   58924 kic_runner.go:91] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.2"|' -i /etc/crio/crio.conf"
I0424 08:50:23.178977   58924 kic_runner.go:91] Run: sudo sysctl net.netfilter.nf_conntrack_count
I0424 08:50:23.481921   58924 kic_runner.go:91] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0424 08:50:23.810113   58924 kic_runner.go:91] Run: sudo systemctl restart crio
I0424 08:50:24.517382   58924 kic_runner.go:91] Run: crio --version
🎁  Preparing Kubernetes v1.18.0 on CRI-O 1.17.0 ...
I0424 08:50:24.926743   58924 certs.go:51] Setting up /root/.minikube/profiles/minikube for IP: 10.88.0.6
I0424 08:50:24.926830   58924 certs.go:169] skipping minikubeCA CA generation: /root/.minikube/ca.key
I0424 08:50:24.926834   58924 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime cri-o
I0424 08:50:24.926879   58924 certs.go:169] skipping proxyClientCA CA generation: /root/.minikube/proxy-client-ca.key
I0424 08:50:24.926897   58924 preload.go:90] Container runtime isn't docker, skipping preload
I0424 08:50:24.926985   58924 certs.go:267] generating minikube-user signed cert: /root/.minikube/profiles/minikube/client.key
I0424 08:50:24.927016   58924 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/client.crt with IP's: []
I0424 08:50:24.927027   58924 kic_runner.go:91] Run: docker images --format {{.Repository}}:{{.Tag}}
I0424 08:50:25.113804   58924 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/client.crt ...
I0424 08:50:25.113823   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/client.crt: {Name:mk09878e812b07af637940656ec44996daba95aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.114481   58924 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/client.key ...
I0424 08:50:25.114497   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/client.key: {Name:mkf3b978f9858871583d8228f83a87a85b7d106f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.114629   58924 certs.go:267] generating minikube signed cert: /root/.minikube/profiles/minikube/apiserver.key.aebf7175
I0424 08:50:25.114638   58924 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/apiserver.crt.aebf7175 with IP's: [10.88.0.6 10.96.0.1 127.0.0.1 10.0.0.1]
I0424 08:50:25.206573   58924 cache_images.go:73] LoadImages start: [k8s.gcr.io/kube-proxy:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/coredns:1.6.7 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/pause:3.2 gcr.io/k8s-minikube/storage-provisioner:v1.8.1 kubernetesui/dashboard:v2.0.0-rc6 kubernetesui/metrics-scraper:v1.0.2]
I0424 08:50:25.206960   58924 image.go:53] couldn't find image digest k8s.gcr.io/coredns:1.6.7 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.206986   58924 image.go:112] retrieving image: k8s.gcr.io/coredns:1.6.7
I0424 08:50:25.207018   58924 image.go:53] couldn't find image digest k8s.gcr.io/pause:3.2 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207122   58924 image.go:112] retrieving image: k8s.gcr.io/pause:3.2
I0424 08:50:25.207222   58924 image.go:53] couldn't find image digest k8s.gcr.io/kube-controller-manager:v1.18.0 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207238   58924 image.go:112] retrieving image: k8s.gcr.io/kube-controller-manager:v1.18.0
I0424 08:50:25.207286   58924 image.go:53] couldn't find image digest k8s.gcr.io/etcd:3.4.3-0 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207304   58924 image.go:112] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0424 08:50:25.207347   58924 image.go:120] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.18.0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207407   58924 image.go:120] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207458   58924 image.go:120] daemon lookup for k8s.gcr.io/pause:3.2: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207594   58924 image.go:53] couldn't find image digest k8s.gcr.io/kube-apiserver:v1.18.0 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207609   58924 image.go:112] retrieving image: k8s.gcr.io/kube-apiserver:v1.18.0
I0424 08:50:25.207712   58924 image.go:53] couldn't find image digest gcr.io/k8s-minikube/storage-provisioner:v1.8.1 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207765   58924 image.go:112] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0424 08:50:25.207726   58924 image.go:120] daemon lookup for k8s.gcr.io/kube-apiserver:v1.18.0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207107   58924 image.go:120] daemon lookup for k8s.gcr.io/coredns:1.6.7: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208125   58924 image.go:120] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v1.8.1: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207144   58924 image.go:53] couldn't find image digest k8s.gcr.io/kube-proxy:v1.18.0 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208164   58924 image.go:112] retrieving image: k8s.gcr.io/kube-proxy:v1.18.0
I0424 08:50:25.207780   58924 image.go:53] couldn't find image digest k8s.gcr.io/kube-scheduler:v1.18.0 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208223   58924 image.go:112] retrieving image: k8s.gcr.io/kube-scheduler:v1.18.0
I0424 08:50:25.206961   58924 image.go:53] couldn't find image digest kubernetesui/metrics-scraper:v1.0.2 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208299   58924 image.go:120] daemon lookup for k8s.gcr.io/kube-proxy:v1.18.0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208313   58924 image.go:112] retrieving image: kubernetesui/metrics-scraper:v1.0.2
I0424 08:50:25.208336   58924 image.go:120] daemon lookup for k8s.gcr.io/kube-scheduler:v1.18.0: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.207970   58924 image.go:53] couldn't find image digest kubernetesui/dashboard:v2.0.0-rc6 from local daemon: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208455   58924 image.go:112] retrieving image: kubernetesui/dashboard:v2.0.0-rc6
I0424 08:50:25.208461   58924 image.go:120] daemon lookup for kubernetesui/metrics-scraper:v1.0.2: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.208574   58924 image.go:120] daemon lookup for kubernetesui/dashboard:v2.0.0-rc6: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0424 08:50:25.400550   58924 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/apiserver.crt.aebf7175 ...
I0424 08:50:25.400583   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.crt.aebf7175: {Name:mk035ccaed47ef5ebb1beba7e7d9ea863e3fd018 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.400891   58924 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/apiserver.key.aebf7175 ...
I0424 08:50:25.400916   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/apiserver.key.aebf7175: {Name:mk1abfd1ff4ed0d63a8afa33dfbd4b6f988b94dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.401091   58924 certs.go:278] copying /root/.minikube/profiles/minikube/apiserver.crt.aebf7175 -> /root/.minikube/profiles/minikube/apiserver.crt
I0424 08:50:25.401183   58924 certs.go:282] copying /root/.minikube/profiles/minikube/apiserver.key.aebf7175 -> /root/.minikube/profiles/minikube/apiserver.key
I0424 08:50:25.401269   58924 certs.go:267] generating aggregator signed cert: /root/.minikube/profiles/minikube/proxy-client.key
I0424 08:50:25.401277   58924 crypto.go:69] Generating cert /root/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0424 08:50:25.555631   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0424 08:50:25.697099   58924 crypto.go:157] Writing cert to /root/.minikube/profiles/minikube/proxy-client.crt ...
I0424 08:50:25.697123   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcab3ddb18cd096d978df14d87a44e804896057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.697358   58924 crypto.go:165] Writing key to /root/.minikube/profiles/minikube/proxy-client.key ...
I0424 08:50:25.697370   58924 lock.go:35] WriteFile acquiring /root/.minikube/profiles/minikube/proxy-client.key: {Name:mkaff5bf6f623f02423597918f5f33c2a99a3db1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0424 08:50:25.697506   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0424 08:50:25.697525   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0424 08:50:25.697539   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0424 08:50:25.697551   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0424 08:50:25.697569   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0424 08:50:25.697582   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0424 08:50:25.697596   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0424 08:50:25.697607   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0424 08:50:25.697662   58924 certs.go:330] found cert: ca-key.pem (1679 bytes)
I0424 08:50:25.697706   58924 certs.go:330] found cert: ca.pem (1029 bytes)
I0424 08:50:25.697731   58924 certs.go:330] found cert: cert.pem (1070 bytes)
I0424 08:50:25.697759   58924 certs.go:330] found cert: key.pem (1679 bytes)
I0424 08:50:25.697787   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0424 08:50:25.698493   58924 certs.go:120] copying: /var/lib/minikube/certs/apiserver.crt
I0424 08:50:25.711726   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/kube-proxy:v1.18.0
I0424 08:50:25.737366   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/kube-scheduler:v1.18.0
I0424 08:50:25.745435   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/coredns:1.6.7
I0424 08:50:25.749033   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/kube-controller-manager:v1.18.0
I0424 08:50:25.751632   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/kube-apiserver:v1.18.0
I0424 08:50:25.761347   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/etcd:3.4.3-0
I0424 08:50:25.783861   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} k8s.gcr.io/pause:3.2
I0424 08:50:25.866586   58924 cache_images.go:104] "k8s.gcr.io/kube-apiserver:v1.18.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.18.0" does not exist at hash "74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c" in container runtime
I0424 08:50:25.866608   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0
I0424 08:50:25.866720   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.18.0 -> /var/lib/minikube/images/kube-apiserver_v1.18.0
I0424 08:50:25.878809   58924 cache_images.go:104] "k8s.gcr.io/kube-proxy:v1.18.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.18.0" does not exist at hash "43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3" in container runtime
I0424 08:50:25.878831   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0
I0424 08:50:25.878852   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.18.0 -> /var/lib/minikube/images/kube-proxy_v1.18.0
I0424 08:50:25.901147   58924 cache_images.go:104] "k8s.gcr.io/kube-scheduler:v1.18.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.18.0" does not exist at hash "a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466" in container runtime
I0424 08:50:25.901290   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0
I0424 08:50:25.901341   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.18.0 -> /var/lib/minikube/images/kube-scheduler_v1.18.0
I0424 08:50:25.915506   58924 cache_images.go:104] "k8s.gcr.io/coredns:1.6.7" needs transfer: "k8s.gcr.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
I0424 08:50:25.915548   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
I0424 08:50:25.915571   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7 -> /var/lib/minikube/images/coredns_1.6.7
I0424 08:50:25.953601   58924 cache_images.go:104] "k8s.gcr.io/kube-controller-manager:v1.18.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.18.0" does not exist at hash "d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e" in container runtime
I0424 08:50:25.953766   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0
I0424 08:50:25.953793   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 -> /var/lib/minikube/images/kube-controller-manager_v1.18.0
I0424 08:50:25.980292   58924 cache_images.go:104] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
I0424 08:50:25.980403   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/pause_3.2
I0424 08:50:25.980462   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/pause_3.2 -> /var/lib/minikube/images/pause_3.2
I0424 08:50:26.024606   58924 cache_images.go:104] "k8s.gcr.io/etcd:3.4.3-0" needs transfer: "k8s.gcr.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
I0424 08:50:26.024658   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
I0424 08:50:26.024684   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 -> /var/lib/minikube/images/etcd_3.4.3-0
I0424 08:50:26.163080   58924 certs.go:120] copying: /var/lib/minikube/certs/apiserver.key
I0424 08:50:26.280540   58924 cache_images.go:104] "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" does not exist at hash "4689081edb103a9e8174bf23a255bfbe0b2d9ed82edc907abab6989d1c60f02c" in container runtime
I0424 08:50:26.280685   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I0424 08:50:26.280743   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1 -> /var/lib/minikube/images/storage-provisioner_v1.8.1
I0424 08:50:26.633465   58924 crio.go:159] Loading image: /var/lib/minikube/images/coredns_1.6.7
I0424 08:50:26.633610   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/coredns_1.6.7
I0424 08:50:26.799781   58924 crio.go:159] Loading image: /var/lib/minikube/images/pause_3.2
I0424 08:50:26.799922   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/pause_3.2
I0424 08:50:26.874761   58924 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.crt
I0424 08:50:27.049767   58924 crio.go:159] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0
I0424 08:50:27.049910   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.18.0
I0424 08:50:27.056429   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} kubernetesui/metrics-scraper:v1.0.2
I0424 08:50:27.151166   58924 cache_images.go:104] "kubernetesui/metrics-scraper:v1.0.2" needs transfer: "kubernetesui/metrics-scraper:v1.0.2" does not exist at hash "3b08661dc379d9f80155be9d658f71578988640357ebae1aab287d6954c723d1" in container runtime
I0424 08:50:27.151188   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2
I0424 08:50:27.151209   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 -> /var/lib/minikube/images/metrics-scraper_v1.0.2
I0424 08:50:27.201326   58924 crio.go:159] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.0
I0424 08:50:27.201444   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.18.0
I0424 08:50:27.233939   58924 certs.go:120] copying: /var/lib/minikube/certs/proxy-client.key
I0424 08:50:27.240430   58924 kic_runner.go:91] Run: sudo podman inspect --format {{.Id}} kubernetesui/dashboard:v2.0.0-rc6
I0424 08:50:27.346977   58924 crio.go:159] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.0
I0424 08:50:27.347041   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.18.0
I0424 08:50:27.429094   58924 crio.go:159] Loading image: /var/lib/minikube/images/storage-provisioner_v1.8.1
I0424 08:50:27.429243   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v1.8.1
I0424 08:50:27.451347   58924 cache_images.go:104] "kubernetesui/dashboard:v2.0.0-rc6" needs transfer: "kubernetesui/dashboard:v2.0.0-rc6" does not exist at hash "cdc71b5a8a0eeb73b47a23d067d8345d8bea4932028fed34509db9a7266f2080" in container runtime
I0424 08:50:27.451370   58924 cache_images.go:236] Loading image from cache: /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6
I0424 08:50:27.451387   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 -> /var/lib/minikube/images/dashboard_v2.0.0-rc6
I0424 08:50:27.532677   58924 crio.go:159] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0
I0424 08:50:27.532732   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0
I0424 08:50:27.576069   58924 certs.go:120] copying: /var/lib/minikube/certs/ca.crt
I0424 08:50:28.090982   58924 certs.go:120] copying: /var/lib/minikube/certs/ca.key
I0424 08:50:28.384742   58924 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.crt
I0424 08:50:28.677969   58924 certs.go:120] copying: /var/lib/minikube/certs/proxy-client-ca.key
I0424 08:50:28.980496   58924 certs.go:120] copying: /usr/share/ca-certificates/minikubeCA.pem
I0424 08:50:29.278133   58924 certs.go:120] copying: /var/lib/minikube/kubeconfig
I0424 08:50:29.572595   58924 kic_runner.go:91] Run: openssl version
I0424 08:50:29.802115   58924 kic_runner.go:91] Run: sudo /bin/bash -c "test -f /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0424 08:50:30.245352   58924 kic_runner.go:91] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0424 08:50:30.634981   58924 certs.go:370] hashing: -rw-r--r--. 1 root root 1066 Apr 23 21:25 /usr/share/ca-certificates/minikubeCA.pem
I0424 08:50:30.635042   58924 kic_runner.go:91] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0424 08:50:31.051587   58924 kic_runner.go:91] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0424 08:50:33.789431   58924 kic_runner.go:118] Done: [podman exec --privileged minikube sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0]: (6.25667014s)
I0424 08:50:33.789497   58924 cache_images.go:258] Transferred and loaded /root/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.18.0 from cache
I0424 08:50:33.789530   58924 crio.go:159] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
I0424 08:50:33.789664   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.4.3-0
I0424 08:50:41.928140   58924 kic_runner.go:118] Done: [podman exec --privileged minikube sudo podman load -i /var/lib/minikube/images/etcd_3.4.3-0]: (8.138439932s)
I0424 08:50:41.928205   58924 cache_images.go:258] Transferred and loaded /root/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0 from cache
I0424 08:50:41.928251   58924 crio.go:159] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.2
I0424 08:50:41.928369   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.2
I0424 08:50:44.296910   58924 kic_runner.go:118] Done: [podman exec --privileged minikube sudo podman load -i /var/lib/minikube/images/metrics-scraper_v1.0.2]: (2.368502669s)
I0424 08:50:44.296981   58924 cache_images.go:258] Transferred and loaded /root/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2 from cache
I0424 08:50:44.297010   58924 crio.go:159] Loading image: /var/lib/minikube/images/dashboard_v2.0.0-rc6
I0424 08:50:44.297127   58924 kic_runner.go:91] Run: sudo podman load -i /var/lib/minikube/images/dashboard_v2.0.0-rc6
I0424 08:50:50.745533   58924 kic_runner.go:118] Done: [podman exec --privileged minikube sudo podman load -i /var/lib/minikube/images/dashboard_v2.0.0-rc6]: (6.448383747s)
I0424 08:50:50.745560   58924 cache_images.go:258] Transferred and loaded /root/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6 from cache
I0424 08:50:50.745574   58924 cache_images.go:77] LoadImages completed in 25.538981192s
❌  Unable to load cached images: loading cached images: CRI-O load /var/lib/minikube/images/coredns_1.6.7: crio load image: sudo podman load -i /var/lib/minikube/images/coredns_1.6.7: exit status 255
stdout:

stderr:
Error: can only create exec sessions on running containers: container state improper

I0424 08:50:50.745664   58924 kubeadm.go:125] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:10.88.0.6 APIServerPort:8443 KubernetesVersion:v1.18.0 EtcdDataDir:/var/lib/minikube/etcd ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.88.0.6"]]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:10.88.0.6 ControlPlaneAddress:10.88.0.6}
I0424 08:50:50.745736   58924 kubeadm.go:129] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.88.0.6
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 10.88.0.6
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "10.88.0.6"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: 10.88.0.6:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: 10.88.0.6:10249

I0424 08:50:50.745801   58924 kic_runner.go:91] Run: crio config
I0424 08:50:51.094642   58924 kubeadm.go:671] kubelet [Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --fail-swap-on=false --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=10.88.0.6 --pod-manifest-path=/etc/kubernetes/manifests --runtime-request-timeout=15m

[Install]
 config:
{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:}
I0424 08:50:51.094810   58924 kic_runner.go:91] Run: sudo ls /var/lib/minikube/binaries/v1.18.0
I0424 08:50:51.388136   58924 binaries.go:45] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.18.0: exit status 2
stdout:

stderr:
ls: cannot access '/var/lib/minikube/binaries/v1.18.0': No such file or directory
Error: exec session exited with non-zero exit code 2: OCI runtime error

Initiating transfer...
I0424 08:50:51.388293   58924 kic_runner.go:91] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.18.0
I0424 08:50:51.727983   58924 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet"
W0424 08:50:52.040177   58924 binaries.go:55] unable to stop kubelet: /bin/bash -c "pgrep kubelet && sudo systemctl stop kubelet": exit status 1
stdout:

stderr:
Error: exec session exited with non-zero exit code 1: OCI runtime error
I0424 08:50:52.040320   58924 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl.sha256
I0424 08:50:52.040338   58924 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubelet?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubelet.sha256
I0424 08:50:52.040339   58924 binary.go:57] Not caching binary, using https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubeadm?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubeadm.sha256
I0424 08:50:52.040378   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/linux/v1.18.0/kubelet -> /var/lib/minikube/binaries/v1.18.0/kubelet
I0424 08:50:52.040347   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/linux/v1.18.0/kubectl -> /var/lib/minikube/binaries/v1.18.0/kubectl
I0424 08:50:52.040400   58924 vm_assets.go:90] NewFileAsset: /root/.minikube/cache/linux/v1.18.0/kubeadm -> /var/lib/minikube/binaries/v1.18.0/kubeadm
I0424 08:50:52.844472   58924 kic_runner.go:91] Run: sudo mkdir -p /var/tmp/minikube /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/cni/net.d
I0424 08:50:54.543369   58924 kic_runner.go:91] Run: /bin/bash -c "pgrep kubelet && diff -u /lib/systemd/system/kubelet.service /lib/systemd/system/kubelet.service.new && diff -u /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new"
I0424 08:50:54.851868   58924 kic_runner.go:91] Run: /bin/bash -c "sudo cp /lib/systemd/system/kubelet.service.new /lib/systemd/system/kubelet.service && sudo cp /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.new /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && sudo systemctl daemon-reload && sudo systemctl restart kubelet"
I0424 08:50:55.244672   58924 kubeadm.go:278] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:2200 CPUs:2 DiskSize:20000 Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri-o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP:10.88.0.6 Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0424 08:50:55.244828   58924 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0424 08:50:55.244954   58924 kic_runner.go:91] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0424 08:50:55.544358   58924 cri.go:76] found id: ""
I0424 08:50:55.544509   58924 kic_runner.go:91] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0424 08:50:55.857173   58924 kic_runner.go:91] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0424 08:50:56.133932   58924 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0424 08:50:56.134072   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0424 08:50:56.466400   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0424 08:50:56.762083   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0424 08:50:57.062750   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0424 08:50:57.403902   58924 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
💥  initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.88.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.88.0.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'

stderr:
W0424 15:50:57.578582     657 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0424 15:51:24.238394     657 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0424 15:51:24.239321     657 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Error: exec session exited with non-zero exit code 1: OCI runtime error

I0424 08:53:19.532671   58924 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --force"
I0424 08:53:25.133914   58924 kic_runner.go:118] Done: [podman exec --privileged minikube /bin/bash -c sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm reset --force]: (5.601223689s)
I0424 08:53:25.133946   58924 kubelet.go:43] stopping kubelet ...
I0424 08:53:25.133998   58924 kic_runner.go:91] Run: sudo systemctl stop -f kubelet.service
I0424 08:53:25.393960   58924 kic_runner.go:91] Run: sudo systemctl show -p SubState kubelet
I0424 08:53:25.714770   58924 cri.go:41] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
I0424 08:53:25.714921   58924 kic_runner.go:91] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0424 08:53:26.019052   58924 cri.go:76] found id: ""
I0424 08:53:26.019126   58924 kubeadm.go:214] ignoring SystemVerification for kubeadm because of either driver or kubernetes version
I0424 08:53:26.019281   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/admin.conf || sudo rm -f /etc/kubernetes/admin.conf"
I0424 08:53:26.352987   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/kubelet.conf || sudo rm -f /etc/kubernetes/kubelet.conf"
I0424 08:53:26.676792   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/controller-manager.conf || sudo rm -f /etc/kubernetes/controller-manager.conf"
I0424 08:53:26.986816   58924 kic_runner.go:91] Run: sudo /bin/bash -c "grep https://10.88.0.6:8443 /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf"
I0424 08:53:27.290491   58924 kic_runner.go:91] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"

I0424 08:55:23.700706   58924 kubeadm.go:280] StartCluster complete in 4m28.456047386s
I0424 08:55:23.700805   58924 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0424 08:55:23.700857   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0424 08:55:23.980170   58924 cri.go:76] found id: ""
I0424 08:55:23.980224   58924 logs.go:203] 0 containers: []
W0424 08:55:23.980252   58924 logs.go:205] No container was found matching "kube-apiserver"
I0424 08:55:23.980276   58924 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0424 08:55:23.980563   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=etcd
I0424 08:55:24.276333   58924 cri.go:76] found id: ""
I0424 08:55:24.276385   58924 logs.go:203] 0 containers: []
W0424 08:55:24.276407   58924 logs.go:205] No container was found matching "etcd"
I0424 08:55:24.276434   58924 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0424 08:55:24.276831   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=coredns
I0424 08:55:24.589982   58924 cri.go:76] found id: ""
I0424 08:55:24.590032   58924 logs.go:203] 0 containers: []
W0424 08:55:24.590078   58924 logs.go:205] No container was found matching "coredns"
I0424 08:55:24.590095   58924 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0424 08:55:24.590217   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0424 08:55:24.899673   58924 cri.go:76] found id: ""
I0424 08:55:24.899723   58924 logs.go:203] 0 containers: []
W0424 08:55:24.899751   58924 logs.go:205] No container was found matching "kube-scheduler"
I0424 08:55:24.899794   58924 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0424 08:55:24.899931   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0424 08:55:25.206693   58924 cri.go:76] found id: ""
I0424 08:55:25.206939   58924 logs.go:203] 0 containers: []
W0424 08:55:25.206978   58924 logs.go:205] No container was found matching "kube-proxy"
I0424 08:55:25.207005   58924 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I0424 08:55:25.207156   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0424 08:55:25.523832   58924 cri.go:76] found id: ""
I0424 08:55:25.523880   58924 logs.go:203] 0 containers: []
W0424 08:55:25.523923   58924 logs.go:205] No container was found matching "kubernetes-dashboard"
I0424 08:55:25.523946   58924 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I0424 08:55:25.524078   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0424 08:55:25.874154   58924 cri.go:76] found id: ""
I0424 08:55:25.874196   58924 logs.go:203] 0 containers: []
W0424 08:55:25.874246   58924 logs.go:205] No container was found matching "storage-provisioner"
I0424 08:55:25.874275   58924 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0424 08:55:25.874406   58924 kic_runner.go:91] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0424 08:55:26.213256   58924 cri.go:76] found id: ""
I0424 08:55:26.213303   58924 logs.go:203] 0 containers: []
W0424 08:55:26.213366   58924 logs.go:205] No container was found matching "kube-controller-manager"
I0424 08:55:26.213404   58924 logs.go:117] Gathering logs for kubelet ...
I0424 08:55:26.213553   58924 kic_runner.go:91] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0424 08:55:26.579354   58924 logs.go:117] Gathering logs for dmesg ...
I0424 08:55:26.579439   58924 kic_runner.go:91] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0424 08:55:26.855586   58924 logs.go:117] Gathering logs for describe nodes ...
I0424 08:55:26.855700   58924 kic_runner.go:91] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
W0424 08:55:27.214900   58924 logs.go:124] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": exit status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
Error: exec session exited with non-zero exit code 1: OCI runtime error
 output:
** stderr **
The connection to the server localhost:8443 was refused - did you specify the right host or port?
Error: exec session exited with non-zero exit code 1: OCI runtime error

** /stderr **
I0424 08:55:27.214958   58924 logs.go:117] Gathering logs for CRI-O ...
I0424 08:55:27.215055   58924 kic_runner.go:91] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0424 08:55:27.529125   58924 logs.go:117] Gathering logs for container status ...
I0424 08:55:27.529233   58924 kic_runner.go:91] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
W0424 08:55:27.834793   58924 exit.go:101] Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'

stderr:
W0424 15:53:27.524037    1867 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0424 15:53:28.402482    1867 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0424 15:53:28.403422    1867 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Error: exec session exited with non-zero exit code 1: OCI runtime error

💣  Error starting cluster: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.18.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": exit status 1
stdout:
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'

stderr:
W0424 15:53:27.524037    1867 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
W0424 15:53:28.402482    1867 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W0424 15:53:28.403422    1867 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Error: exec session exited with non-zero exit code 1: OCI runtime error

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
# podman ps
CONTAINER ID  IMAGE                               COMMAND  CREATED        STATUS            PORTS                                                                          NAMES
df3a18a84299  gcr.io/k8s-minikube/kicbase:v0.0.8           4 minutes ago  Up 4 minutes ago  127.0.0.1:44027->22/tcp, 127.0.0.1:43031->2376/tcp, 127.0.0.1:39825->8443/tcp  minikube

[root@cube0 ~]# kubectl get nodes
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

docker@minikube:~$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
root           1       0  0 15:50 ?        00:00:00 /sbin/init
root          32       1  0 15:50 ?        00:00:00 /lib/systemd/systemd-journald
root          45       1  0 15:50 ?        00:00:00 /usr/sbin/sshd -D
root         189       1  2 15:50 ?        00:00:11 /usr/bin/crio
root        2493      45  0 15:55 ?        00:00:00 sshd: docker [priv]
docker      2495    2493  0 15:55 ?        00:00:00 sshd: docker@pts/1
docker      2496    2495  0 15:55 pts/1    00:00:00 -bash
root        3101       1  2 15:57 ?        00:00:00 /var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-c
docker      3117    2496  0 15:57 pts/1    00:00:00 ps -ef
mazzystr commented 4 years ago

Re: --driver=none ... ah, Docker isn't installed! I am now able to get --driver=none to work!

# minikube start --container-runtime=cri-o --network-plugin=cni --enable-default-cni --driver=none --alsologtostderr --v=5
...
blah blah blah
...
🏄  Done! kubectl is now configured to use "minikube"
I0424 10:56:43.330388    8668 start.go:454] kubectl: 1.18.1, cluster: 1.18.0 (minor skew: 0)

Docker has a write-up on Fedora installation and falling back to cgroups v1 here. This is critical to getting minikube to come up. Can we link to that? And how exactly is Docker being used here? Everything appears to be running under cri-o according to crictl ps (docker ps shows nothing relevant).
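
For anyone landing here, the usual way to fall back to cgroups v1 on Fedora 31 is a kernel boot argument; a minimal sketch (assuming grubby is available, as it is on stock Fedora):

# boot back into cgroups v1; takes effect after a reboot
$ sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
$ sudo reboot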

Nevertheless, this is a big step forward. I'm now able to deploy much larger virtual machines via the KubeVirt interface.

afbjorklund commented 4 years ago

Fedora is well aware that it is no longer compatible with Docker or Kubernetes... https://fedoraproject.org/wiki/Changes/CGroupsV2

They want to lead the way to cgroups v2 (https://medium.com/nttlabs/cgroup-v2-596d035be4d7), even though it's not yet supported in Kubernetes.

https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/20191118-cgroups-v2.md

afbjorklund commented 4 years ago

And how exactly is Docker being used here?

This is a bug in the minikube registry, unfortunately: #5549

It's not used at all, so any "docker" binary will do (even true). We have found that the "podman-docker" package can segfault.

Hopefully it can be fixed, so that this workaround is not needed.
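
A minimal sketch of that workaround (the /usr/local/bin path is just an example; any executable named "docker" on $PATH should satisfy the check):

# satisfy minikube's "docker" lookup with a no-op binary
$ sudo ln -s "$(command -v true)" /usr/local/bin/docker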

afbjorklund commented 4 years ago

There is a new version available now that works OK on Fedora 32 - see https://github.com/kubernetes/minikube/issues/7885#issuecomment-619363657

The code is in PR #7631, but you can also use the regular docker driver instead.

You do this by installing "moby-engine" for docker, rather than "podman-docker".

In that case, you also need to start the systemd service and add yourself to the docker group, as shown below.
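
On Fedora 32 that would look roughly like this (a sketch, not an official install guide):

$ sudo dnf install -y moby-engine
$ sudo systemctl enable --now docker
# re-login afterwards for the group change to take effect
$ sudo usermod -aG docker $USER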

medyagh commented 4 years ago

@mazzystr do you mind trying with the latest version of minikube? We made a lot of improvements to the podman driver.

mazzystr commented 4 years ago

Success!

$ sudo -k -n podman version --format {{.Version}}
1.8.2
[crc@cube0 ~]$ minikube start --driver=podman --container-runtime=cri-o
😄  minikube v1.10.1 on Fedora 32
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=3900MB) ...
🎁  Preparing Kubernetes v1.18.2 on CRI-O 1.17.3 ...
    > kubelet.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubectl.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 65 B / 65 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm: 37.97 MiB / 37.97 MiB [---------------] 100.00% 73.55 MiB p/s 1s
    > kubectl: 41.99 MiB / 41.99 MiB [---------------] 100.00% 27.78 MiB p/s 2s
    > kubelet: 108.03 MiB / 108.03 MiB [-------------] 100.00% 60.02 MiB p/s 2s
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m23s   v1.18.2
$ kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
namespace/kubevirt created
customresourcedefinition.apiextensions.k8s.io/kubevirts.kubevirt.io created
priorityclass.scheduling.k8s.io/kubevirt-cluster-critical created
clusterrole.rbac.authorization.k8s.io/kubevirt.io:operator created
serviceaccount/kubevirt-operator created
role.rbac.authorization.k8s.io/kubevirt-operator created
rolebinding.rbac.authorization.k8s.io/kubevirt-operator-rolebinding created
clusterrole.rbac.authorization.k8s.io/kubevirt-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubevirt-operator created
deployment.apps/virt-operator created

$ kubectl get pods -n kubevirt
NAME                           READY   STATUS    RESTARTS   AGE
virt-operator-57c84c94-8wncw   1/1     Running   0          110s
virt-operator-57c84c94-nzklb   1/1     Running   0          110s
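
Note: ${KUBEVIRT_VERSION} above is assumed to have been exported beforehand; for example, with a hypothetical release tag:

# substitute a real release from https://github.com/kubevirt/kubevirt/releases
$ export KUBEVIRT_VERSION=v0.28.0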