esentai8 closed this issue 3 years ago
Interesting - I have not seen this before. Could you include the output of:

```
minikube start --alsologtostderr -v=1
```
Hello Thomas, thank you for responding to my request. Below is the output of the command you suggested
```
sudo minikube start --alsologtostderr -v=1
I1212 14:09:45.331492 15696 out.go:185] Setting OutFile to fd 1 ...
I1212 14:09:45.331717 15696 out.go:237] isatty.IsTerminal(1) = true
I1212 14:09:45.331726 15696 out.go:198] Setting ErrFile to fd 2...
I1212 14:09:45.331733 15696 out.go:237] isatty.IsTerminal(2) = true
I1212 14:09:45.331822 15696 root.go:279] Updating PATH: /root/.minikube/bin
W1212 14:09:45.331922 15696 root.go:254] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
I1212 14:09:45.332129 15696 out.go:192] Setting JSON to false
I1212 14:09:45.351630 15696 start.go:103] hostinfo: {"hostname":"ubuntu-XPS-13-9350","uptime":6114,"bootTime":1607794071,"procs":382,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-56-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"4c4c4544-0052-3810-804b-b2c04f484332"}
I1212 14:09:45.352184 15696 start.go:113] virtualization: kvm host
I1212 14:09:45.364448 15696 out.go:110] 😄 minikube v1.15.1 on Ubuntu 20.04
😄 minikube v1.15.1 on Ubuntu 20.04
I1212 14:09:45.364695 15696 notify.go:126] Checking for updates...
I1212 14:09:45.364775 15696 driver.go:302] Setting default libvirt URI to qemu:///system
I1212 14:09:45.364837 15696 global.go:102] Querying for installed drivers using PATH=/root/.minikube/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
I1212 14:09:45.364916 15696 global.go:110] kvm2 priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I1212 14:09:45.365017 15696 global.go:110] none priority: 3, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:
W1212 14:09:45.560009 15696 out.go:146] ❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges. ❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges. I1212 14:09:45.570729 15696 out.go:110]
```
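The `DRV_AS_ROOT` exit above is minikube refusing to use the docker driver under `sudo`. The usual remedy is to run `minikube start` as a regular user who is in the `docker` group. As a minimal shell sketch (an illustration of the same root check, not minikube's actual code):

```shell
# The docker driver refuses to run as root (DRV_AS_ROOT).
# id -u prints 0 for root; anything else is a regular user.
if [ "$(id -u)" -eq 0 ]; then
  echo "running as root: switch to a regular user before using the docker driver"
else
  echo "not root: the docker driver can be used from this account"
fi
```

If the regular user lacks access to the Docker daemon, `sudo usermod -aG docker $USER` followed by a re-login typically grants it.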
@tstromberg I have the same issue. When I run `minikube start --alsologtostderr -v=1`, I get this:
I1218 16:45:36.478340 108597 out.go:221] Setting OutFile to fd 1 ...
I1218 16:45:36.478619 108597 out.go:273] isatty.IsTerminal(1) = true
I1218 16:45:36.478628 108597 out.go:234] Setting ErrFile to fd 2...
I1218 16:45:36.478636 108597 out.go:273] isatty.IsTerminal(2) = true
I1218 16:45:36.478721 108597 root.go:280] Updating PATH: /home/jesper/.minikube/bin
I1218 16:45:36.478935 108597 out.go:228] Setting JSON to false
I1218 16:45:36.497040 108597 start.go:104] hostinfo: {"hostname":"jesper-XPS-13-9370","uptime":9029,"bootTime":1608297307,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-58-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"945a23ed-32ff-4159-beb6-3a48ef730dc2"}
I1218 16:45:36.497529 108597 start.go:114] virtualization: kvm host
I1218 16:45:36.508906 108597 out.go:119] 😄 minikube v1.16.0 on Ubuntu 20.04
😄 minikube v1.16.0 on Ubuntu 20.04
I1218 16:45:36.509030 108597 driver.go:303] Setting default libvirt URI to qemu:///system
I1218 16:45:36.509034 108597 notify.go:126] Checking for updates...
I1218 16:45:36.509064 108597 global.go:102] Querying for installed drivers using PATH=/home/jesper/.minikube/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I1218 16:45:36.509107 108597 global.go:110] vmware priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I1218 16:45:36.582121 108597 docker.go:117] docker version: linux-19.03.11
I1218 16:45:36.582212 108597 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I1218 16:45:36.656809 108597 info.go:253] docker info: {ID:36CE:MTYI:5KCZ:USNE:V46F:DJZI:K3DZ:RMAM:OJLE:3IYN:EVS5:ID7D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:41 SystemTime:2020-12-18 16:45:36.645724373 +0100 CET LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-58-generic OperatingSystem:Ubuntu Core 16 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16493830144 GenericResources:<nil> DockerRootDir:/var/snap/docker/common/var-lib-docker HTTPProxy: HTTPSProxy: NoProxy: Name:jesper-XPS-13-9370 Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID: Expected:} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap 
limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1218 16:45:36.656885 108597 docker.go:147] overlay module found
I1218 16:45:36.656894 108597 global.go:110] docker priority: 8, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1218 16:45:36.656934 108597 global.go:110] kvm2 priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I1218 16:45:36.660281 108597 global.go:110] none priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Fix: Doc:}
I1218 16:45:36.660324 108597 global.go:110] podman priority: 2, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/}
I1218 16:45:36.660372 108597 global.go:110] virtualbox priority: 5, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I1218 16:45:36.660392 108597 driver.go:249] "docker" has a higher priority (8) than "" (0)
I1218 16:45:36.660414 108597 driver.go:274] Picked: docker
I1218 16:45:36.660435 108597 driver.go:275] Alternatives: []
I1218 16:45:36.660445 108597 driver.go:276] Rejects: [vmware kvm2 none podman virtualbox]
I1218 16:45:36.666785 108597 out.go:119] ✨ Automatically selected the docker driver
✨ Automatically selected the docker driver
I1218 16:45:36.666816 108597 start.go:277] selected driver: docker
I1218 16:45:36.666825 108597 start.go:686] validating driver "docker" against <nil>
I1218 16:45:36.666842 108597 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1218 16:45:36.666919 108597 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I1218 16:45:36.740166 108597 info.go:253] docker info: {ID:36CE:MTYI:5KCZ:USNE:V46F:DJZI:K3DZ:RMAM:OJLE:3IYN:EVS5:ID7D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:41 SystemTime:2020-12-18 16:45:36.729773498 +0100 CET LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-58-generic OperatingSystem:Ubuntu Core 16 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16493830144 GenericResources:<nil> DockerRootDir:/var/snap/docker/common/var-lib-docker HTTPProxy: HTTPSProxy: NoProxy: Name:jesper-XPS-13-9370 Labels:[] ExperimentalBuild:false ServerVersion:19.03.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID: Expected:} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap 
limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1218 16:45:36.740289 108597 start_flags.go:235] no existing cluster config was found, will generate one from the flags
I1218 16:45:36.740832 108597 start_flags.go:253] Using suggested 3900MB memory alloc based on sys=15729MB, container=15729MB
I1218 16:45:36.740936 108597 start_flags.go:648] Wait components to verify : map[apiserver:true system_pods:true]
I1218 16:45:36.740955 108597 cni.go:74] Creating CNI manager for ""
I1218 16:45:36.740962 108597 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I1218 16:45:36.740971 108597 start_flags.go:367] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false}
I1218 16:45:36.750883 108597 out.go:119] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I1218 16:45:36.846402 108597 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 in local docker daemon, skipping pull
I1218 16:45:36.846423 108597 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 exists in daemon, skipping pull
I1218 16:45:36.846440 108597 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1218 16:45:36.846470 108597 preload.go:105] Found local preload: /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I1218 16:45:36.846478 108597 cache.go:54] Caching tarball of preloaded images
I1218 16:45:36.846493 108597 preload.go:131] Found /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1218 16:45:36.846500 108597 cache.go:57] Finished verifying existence of preloaded tar for v1.20.0 on docker
I1218 16:45:36.846736 108597 profile.go:147] Saving config to /home/jesper/.minikube/profiles/minikube/config.json ...
I1218 16:45:36.846756 108597 lock.go:36] WriteFile acquiring /home/jesper/.minikube/profiles/minikube/config.json: {Name:mk0c1b80bff2a3b7cd60451ee57b5e84f05c0781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1218 16:45:36.846912 108597 cache.go:185] Successfully downloaded all kic artifacts
I1218 16:45:36.846934 108597 start.go:314] acquiring machines lock for minikube: {Name:mkb862b1273fae7cf6a3a7abc1e69f9cd73a9445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1218 16:45:36.846974 108597 start.go:318] acquired machines lock for "minikube" in 28.665µs
I1218 16:45:36.846988 108597 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I1218 16:45:36.847036 108597 start.go:127] createHost starting for "" (driver="docker")
I1218 16:45:36.853175 108597 out.go:119] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
I1218 16:45:36.853322 108597 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I1218 16:45:36.853341 108597 client.go:165] LocalClient.Create starting
I1218 16:45:36.853375 108597 main.go:119] libmachine: Reading certificate data from /home/jesper/.minikube/certs/ca.pem
I1218 16:45:36.853402 108597 main.go:119] libmachine: Decoding PEM data...
I1218 16:45:36.853420 108597 main.go:119] libmachine: Parsing certificate...
I1218 16:45:36.853513 108597 main.go:119] libmachine: Reading certificate data from /home/jesper/.minikube/certs/cert.pem
I1218 16:45:36.853533 108597 main.go:119] libmachine: Decoding PEM data...
I1218 16:45:36.853546 108597 main.go:119] libmachine: Parsing certificate...
I1218 16:45:36.853808 108597 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
W1218 16:45:36.918695 108597 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1
I1218 16:45:36.918909 108597 network_create.go:235] running [docker network inspect minikube] to gather additional debugging logs...
I1218 16:45:36.918934 108597 cli_runner.go:111] Run: docker network inspect minikube
W1218 16:45:36.983571 108597 cli_runner.go:149] docker network inspect minikube returned with exit code 1
I1218 16:45:36.983599 108597 network_create.go:238] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I1218 16:45:36.983614 108597 network_create.go:240] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I1218 16:45:36.983673 108597 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I1218 16:45:37.049303 108597 network_create.go:100] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I1218 16:45:37.049388 108597 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I1218 16:45:37.166253 108597 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I1218 16:45:37.166402 108597 cli_runner.go:111] Run: docker ps -a --format {{.Names}}
I1218 16:45:37.232696 108597 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1218 16:45:37.302322 108597 oci.go:102] Successfully created a docker volume minikube
I1218 16:45:37.302393 108597 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -d /var/lib
I1218 16:45:38.094852 108597 oci.go:106] Successfully prepared a docker volume minikube
W1218 16:45:38.094997 108597 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1218 16:45:38.095004 108597 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1218 16:45:38.095165 108597 preload.go:105] Found local preload: /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I1218 16:45:38.095210 108597 kic.go:159] Starting extracting preloaded images to volume ...
I1218 16:45:38.095265 108597 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'"
I1218 16:45:38.095523 108597 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir
I1218 16:45:38.203912 108597 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3900mb --memory-swap=3900mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16
I1218 16:45:38.932135 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Running}}
I1218 16:45:39.007858 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:39.078715 108597 cli_runner.go:111] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1218 16:45:39.277806 108597 oci.go:246] the created container "minikube" has a running status.
I1218 16:45:39.277833 108597 kic.go:190] Creating ssh key for kic: /home/jesper/.minikube/machines/minikube/id_rsa...
I1218 16:45:39.993851 108597 kic_runner.go:187] docker (temp): /home/jesper/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1218 16:45:40.064599 108597 client.go:168] LocalClient.Create took 3.211245702s
I1218 16:45:41.775623 108597 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir: (3.679979943s)
I1218 16:45:41.775692 108597 kic.go:168] duration metric: took 3.680481 seconds to extract preloaded images to volume
I1218 16:45:42.065144 108597 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1218 16:45:42.065364 108597 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1218 16:45:42.141781 108597 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32883 SSHKeyPath:/home/jesper/.minikube/machines/minikube/id_rsa Username:docker}
W1218 16:45:42.172959 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:42.173005 108597 retry.go:31] will retry after 276.165072ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:42.516818 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:42.516895 108597 retry.go:31] will retry after 540.190908ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:43.096182 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:43.096272 108597 retry.go:31] will retry after 655.06503ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:43.790243 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:43.790342 108597 retry.go:31] will retry after 791.196345ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:44.619215 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:44.619311 108597 start.go:258] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:44.619351 108597 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:44.619387 108597 start.go:130] duration metric: createHost completed in 7.772338098s
I1218 16:45:44.619402 108597 start.go:81] releasing machines lock for "minikube", held for 7.772417932s
W1218 16:45:44.619439 108597 start.go:377] error starting host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset145097697 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset145097697: no such file or directory
: exit status 1
I1218 16:45:44.620167 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:44.688844 108597 stop.go:39] StopHost: minikube
W1218 16:45:44.689105 108597 register.go:123] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I1218 16:45:44.706397 108597 out.go:119] ✋ Stopping node "minikube" ...
✋ Stopping node "minikube" ...
I1218 16:45:44.706524 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
W1218 16:45:44.771227 108597 register.go:123] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I1218 16:45:44.780234 108597 out.go:119] 🛑 Powering off "minikube" via SSH ...
🛑 Powering off "minikube" via SSH ...
I1218 16:45:44.780288 108597 cli_runner.go:111] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
I1218 16:45:45.958786 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:46.037377 108597 oci.go:608] container minikube status is Stopped
I1218 16:45:46.037399 108597 oci.go:620] Successfully shutdown container minikube
I1218 16:45:46.037410 108597 stop.go:88] shutdown container: err=<nil>
I1218 16:45:46.037438 108597 main.go:119] libmachine: Stopping "minikube"...
I1218 16:45:46.037505 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:46.102947 108597 stop.go:59] stop err: Machine "minikube" is already stopped.
I1218 16:45:46.102970 108597 stop.go:62] host is already stopped
W1218 16:45:47.103708 108597 register.go:123] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I1218 16:45:47.118229 108597 out.go:119] 🔥 Deleting "minikube" in docker ...
🔥 Deleting "minikube" in docker ...
I1218 16:45:47.118479 108597 cli_runner.go:111] Run: docker container inspect -f {{.Id}} minikube
I1218 16:45:47.187952 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:47.255706 108597 cli_runner.go:111] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
W1218 16:45:47.324960 108597 cli_runner.go:149] docker exec --privileged -t minikube /bin/bash -c "sudo init 0" returned with exit code 1
I1218 16:45:47.325000 108597 oci.go:600] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 9be84ca367664198cef3eddcd83275fd986ae3c6f8762af71f1b121ae1bf2da6 is not running
I1218 16:45:48.325348 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:48.405719 108597 oci.go:608] container minikube status is Stopped
I1218 16:45:48.405740 108597 oci.go:620] Successfully shutdown container minikube
I1218 16:45:48.405783 108597 cli_runner.go:111] Run: docker rm -f -v minikube
I1218 16:45:48.490651 108597 cli_runner.go:111] Run: docker container inspect -f {{.Id}} minikube
W1218 16:45:48.555151 108597 cli_runner.go:149] docker container inspect -f {{.Id}} minikube returned with exit code 1
I1218 16:45:48.555225 108597 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I1218 16:45:48.619897 108597 cli_runner.go:111] Run: docker network rm minikube
W1218 16:45:48.836626 108597 out.go:181] 🤦 StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset145097697 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset145097697: no such file or directory
: exit status 1
🤦 StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset145097697 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset145097697: no such file or directory
: exit status 1
I1218 16:45:48.836653 108597 start.go:392] Will try again in 5 seconds ...
I1218 16:45:53.837128 108597 start.go:314] acquiring machines lock for minikube: {Name:mkb862b1273fae7cf6a3a7abc1e69f9cd73a9445 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1218 16:45:53.837421 108597 start.go:318] acquired machines lock for "minikube" in 198.767µs
I1218 16:45:53.837502 108597 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}
I1218 16:45:53.837664 108597 start.go:127] createHost starting for "" (driver="docker")
I1218 16:45:53.852035 108597 out.go:119] 🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
🔥 Creating docker container (CPUs=2, Memory=3900MB) ...
I1218 16:45:53.852282 108597 start.go:164] libmachine.API.Create for "minikube" (driver="docker")
I1218 16:45:53.852369 108597 client.go:165] LocalClient.Create starting
I1218 16:45:53.852464 108597 main.go:119] libmachine: Reading certificate data from /home/jesper/.minikube/certs/ca.pem
I1218 16:45:53.852566 108597 main.go:119] libmachine: Decoding PEM data...
I1218 16:45:53.852620 108597 main.go:119] libmachine: Parsing certificate...
I1218 16:45:53.852900 108597 main.go:119] libmachine: Reading certificate data from /home/jesper/.minikube/certs/cert.pem
I1218 16:45:53.852971 108597 main.go:119] libmachine: Decoding PEM data...
I1218 16:45:53.853016 108597 main.go:119] libmachine: Parsing certificate...
I1218 16:45:53.853488 108597 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
W1218 16:45:53.924029 108597 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1
I1218 16:45:53.924096 108597 network_create.go:235] running [docker network inspect minikube] to gather additional debugging logs...
I1218 16:45:53.924117 108597 cli_runner.go:111] Run: docker network inspect minikube
W1218 16:45:53.987597 108597 cli_runner.go:149] docker network inspect minikube returned with exit code 1
I1218 16:45:53.987623 108597 network_create.go:238] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I1218 16:45:53.987644 108597 network_create.go:240] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I1218 16:45:53.987711 108597 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{(index .Options "com.docker.network.driver.mtu")}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I1218 16:45:54.052154 108597 network_create.go:100] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I1218 16:45:54.052232 108597 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I1218 16:45:54.159187 108597 kic.go:96] calculated static IP "192.168.49.2" for the "minikube" container
I1218 16:45:54.159294 108597 cli_runner.go:111] Run: docker ps -a --format {{.Names}}
I1218 16:45:54.223305 108597 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1218 16:45:54.288131 108597 oci.go:102] Successfully created a docker volume minikube
I1218 16:45:54.288195 108597 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -d /var/lib
I1218 16:45:55.079988 108597 oci.go:106] Successfully prepared a docker volume minikube
W1218 16:45:55.080128 108597 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1218 16:45:55.080135 108597 preload.go:97] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1218 16:45:55.080253 108597 preload.go:105] Found local preload: /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4
I1218 16:45:55.080277 108597 kic.go:159] Starting extracting preloaded images to volume ...
I1218 16:45:55.080294 108597 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'"
I1218 16:45:55.080404 108597 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir
I1218 16:45:55.193462 108597 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=3900mb --memory-swap=3900mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16
I1218 16:45:55.822048 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Running}}
I1218 16:45:55.898299 108597 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I1218 16:45:55.972481 108597 cli_runner.go:111] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1218 16:45:56.189185 108597 oci.go:246] the created container "minikube" has a running status.
I1218 16:45:56.189218 108597 kic.go:190] Creating ssh key for kic: /home/jesper/.minikube/machines/minikube/id_rsa...
I1218 16:45:56.495117 108597 kic_runner.go:187] docker (temp): /home/jesper/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1218 16:45:56.614693 108597 client.go:168] LocalClient.Create took 2.762288667s
I1218 16:45:58.614928 108597 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1218 16:45:58.614990 108597 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1218 16:45:58.689116 108597 sshutil.go:48] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/home/jesper/.minikube/machines/minikube/id_rsa Username:docker}
W1218 16:45:58.720064 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:58.720108 108597 retry.go:31] will retry after 231.159374ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:58.834905 108597 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jesper/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 -I lz4 -xf /preloaded.tar -C /extractDir: (3.754397717s)
I1218 16:45:58.834975 108597 kic.go:168] duration metric: took 3.754693 seconds to extract preloaded images to volume
W1218 16:45:59.013852 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:59.013925 108597 retry.go:31] will retry after 445.058653ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:59.497677 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:59.497776 108597 retry.go:31] will retry after 318.170823ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:45:59.861155 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:45:59.861226 108597 retry.go:31] will retry after 553.938121ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:46:00.484003 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:46:00.484073 108597 retry.go:31] will retry after 755.539547ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:46:01.305909 108597 sshutil.go:59] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:46:01.306102 108597 start.go:258] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
W1218 16:46:01.306189 108597 start.go:240] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I1218 16:46:01.306268 108597 start.go:130] duration metric: createHost completed in 7.468572403s
I1218 16:46:01.306318 108597 start.go:81] releasing machines lock for "minikube", held for 7.468853894s
W1218 16:46:01.306875 108597 out.go:181] 😿 Failed to start docker container. Running "minikube delete" may fix it: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset696547788 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset696547788: no such file or directory
: exit status 1
😿 Failed to start docker container. Running "minikube delete" may fix it: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset696547788 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset696547788: no such file or directory
: exit status 1
I1218 16:46:01.325289 108597 out.go:119]
W1218 16:46:01.325646 108597 out.go:181] ❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset696547788 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset696547788: no such file or directory
: exit status 1
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset696547788 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset696547788: no such file or directory
: exit status 1
W1218 16:46:01.325839 108597 out.go:181]
W1218 16:46:01.325965 108597 out.go:181] 😿 If the above advice does not help, please let us know:
😿 If the above advice does not help, please let us know:
W1218 16:46:01.326124 108597 out.go:181] 👉 https://github.com/kubernetes/minikube/issues/new/choose
👉 https://github.com/kubernetes/minikube/issues/new/choose
I1218 16:46:01.336313 108597 out.go:119]
@esentai8 I wonder if you installed minikube using snap? We were aware of an issue where minikube installed via snap did not have access to /tmp; @spowelljr's PR fixed it: https://github.com/kubernetes/minikube/pull/10042
Would you mind trying the binary from this URL to see if it fixes your problem too? https://storage.googleapis.com/minikube/latest/minikube-linux-amd64
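For context on why a snap install fails here: minikube writes the generated SSH public key to a file under /tmp and then asks the Docker daemon to copy that file into the container. Under snap's strict confinement, minikube sees a private /tmp, so the host dockerd cannot lstat the same path. A minimal sketch of that step, with illustrative file names and key contents (not minikube's exact internals):

```shell
# Sketch of the failing provisioning step (names and paths are illustrative).
# minikube writes the pub key to a temp file under /tmp...
KEY_TMP="$(mktemp /tmp/tmpf-memory-asset.XXXXXX)"
echo "ssh-rsa AAAA...example docker@minikube" > "$KEY_TMP"

# ...then hands that path to the Docker daemon, roughly:
#   docker cp "$KEY_TMP" minikube:/home/docker/.ssh/authorized_keys
# With a snap-confined minikube, dockerd sees a different /tmp, so the copy
# fails with: lstat /tmp/tmpf-memory-asset...: no such file or directory
ls -l "$KEY_TMP"

rm -f "$KEY_TMP"
```

A quick way to tell whether your minikube came from snap is to check whether `command -v minikube` resolves to a path under /snap.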
@esentai8 I believe we have fixed this at HEAD, and it will be included in minikube 1.17.0.
Cool, thank you! I will try one more time today using 1.17 and will write my feedback here
Had the same log output; for me the issue was caused by Docker. Purging and reinstalling Docker through apt, following the install steps in the official Docker docs, helped.
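For anyone who wants to try the same fix, here is a sketch of that purge-and-reinstall on Debian/Ubuntu. The package names follow the official Docker install docs; verify them for your release before running, and note that purging removes existing images and containers under /var/lib/docker:

```shell
# Remove the current Docker packages (adjust names for legacy
# docker.io / docker-engine installs):
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io

# Reinstall per https://docs.docker.com/engine/install/ubuntu/ (assumes the
# Docker apt repository is already configured):
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io

# Sanity check before retrying `minikube start`:
docker version
```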
@esentai8 Did upgrading to the latest minikube (v1.17.1) help here?
On Arch: minikube 1.17.1, installed from pacman, has this issue.
minikube start --alsologtostderr -v=1
I0215 15:54:50.700144 141728 out.go:229] Setting OutFile to fd 1 ...
I0215 15:54:50.700377 141728 out.go:281] isatty.IsTerminal(1) = true
I0215 15:54:50.700406 141728 out.go:242] Setting ErrFile to fd 2...
I0215 15:54:50.700427 141728 out.go:281] isatty.IsTerminal(2) = true
I0215 15:54:50.700620 141728 root.go:291] Updating PATH: /home/horseinthesky/.minikube/bin
I0215 15:54:50.701366 141728 out.go:236] Setting JSON to false
I0215 15:54:50.702527 141728 start.go:106] hostinfo: {"hostname":"KappaA","uptime":1905885,"bootTime":1611487805,"procs":188,"os":"linux","platform":"arch","platformFamily":"arch","platformVersion":"","kernelVersion":"5.10.9-arch1-1","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"2da4b796-a999-4db9-86bd-766ff5de7268"}
I0215 15:54:50.702628 141728 start.go:116] virtualization: kvm host
I0215 15:54:50.703028 141728 out.go:119] 😄 minikube v1.17.1 on Arch
😄 minikube v1.17.1 on Arch
I0215 15:54:50.703228 141728 driver.go:315] Setting default libvirt URI to qemu:///system
I0215 15:54:50.703280 141728 global.go:102] Querying for installed drivers using PATH=/home/horseinthesky/.minikube/bin:/home/horseinthesky/.pyenv/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/horseinthesky/.fzf/bin:/home/horseinthesky/.local/bin
I0215 15:54:50.703322 141728 global.go:110] ssh priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:54:50.703484 141728 notify.go:126] Checking for updates...
I0215 15:54:50.703539 141728 global.go:110] virtualbox priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/}
I0215 15:54:50.703666 141728 global.go:110] vmware priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0215 15:54:50.759516 141728 docker.go:115] docker version: linux-20.10.2
I0215 15:54:50.759626 141728 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I0215 15:54:50.898121 141728 info.go:253] docker info: {ID:VA7H:JSAZ:6GVA:ROAA:X5OG:4J5F:5VRP:KOWP:D7F4:V5VC:4FMM:BV6D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-02-15 15:54:50.814460676 +0300 MSK LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.9-arch1-1 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16784654336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:KappaA Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b.m Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:<nil>}}
I0215 15:54:50.898278 141728 docker.go:145] overlay module found
I0215 15:54:50.898298 141728 global.go:110] docker priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:54:50.898366 141728 global.go:110] kvm2 priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "virsh": executable file not found in $PATH Reason: Fix:Install libvirt Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/}
I0215 15:54:50.910660 141728 global.go:110] none priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:54:51.043460 141728 podman.go:118] podman version: 3.0.0
I0215 15:54:51.043531 141728 global.go:110] podman priority: 3, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:54:51.043575 141728 driver.go:261] "docker" has a higher priority (9) than "" (0)
I0215 15:54:51.043590 141728 driver.go:257] not recommending "ssh" due to priority: 4
I0215 15:54:51.043601 141728 driver.go:257] not recommending "none" due to priority: 4
I0215 15:54:51.043612 141728 driver.go:257] not recommending "podman" due to priority: 3
I0215 15:54:51.043636 141728 driver.go:286] Picked: docker
I0215 15:54:51.043666 141728 driver.go:287] Alternatives: [ssh none podman (experimental)]
I0215 15:54:51.043685 141728 driver.go:288] Rejects: [virtualbox vmware kvm2]
I0215 15:54:51.043884 141728 out.go:119] ✨ Automatically selected the docker driver. Other choices: ssh, none, podman (experimental)
✨ Automatically selected the docker driver. Other choices: ssh, none, podman (experimental)
I0215 15:54:51.043904 141728 start.go:279] selected driver: docker
I0215 15:54:51.043914 141728 start.go:702] validating driver "docker" against <nil>
I0215 15:54:51.043936 141728 start.go:713] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:54:51.044026 141728 cli_runner.go:111] Run: docker system info --format "{{json .}}"
I0215 15:54:51.162330 141728 info.go:253] docker info: {ID:VA7H:JSAZ:6GVA:ROAA:X5OG:4J5F:5VRP:KOWP:D7F4:V5VC:4FMM:BV6D Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:40 SystemTime:2021-02-15 15:54:51.081326301 +0300 MSK LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.10.9-arch1-1 OperatingSystem:Arch Linux OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:4 MemTotal:16784654336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:KappaA Labels:[] ExperimentalBuild:false ServerVersion:20.10.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b.m Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b.m} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: 
Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/lib/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Experimental:true Name:buildx Path:/usr/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-tp-docker]] Warnings:<nil>}}
I0215 15:54:51.162486 141728 start_flags.go:249] no existing cluster config was found, will generate one from the flags
I0215 15:54:51.162989 141728 start_flags.go:267] Using suggested 4000MB memory alloc based on sys=16007MB, container=16007MB
I0215 15:54:51.163160 141728 start_flags.go:671] Wait components to verify : map[apiserver:true system_pods:true]
I0215 15:54:51.163189 141728 cni.go:74] Creating CNI manager for ""
I0215 15:54:51.163206 141728 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0215 15:54:51.163219 141728 start_flags.go:390] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
I0215 15:54:51.163562 141728 out.go:119] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I0215 15:54:51.203791 141728 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
I0215 15:54:51.203829 141728 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping pull
I0215 15:54:51.203844 141728 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0215 15:54:51.203895 141728 preload.go:105] Found local preload: /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4
I0215 15:54:51.203910 141728 cache.go:54] Caching tarball of preloaded images
I0215 15:54:51.203926 141728 preload.go:131] Found /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0215 15:54:51.203938 141728 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
I0215 15:54:51.204421 141728 profile.go:148] Saving config to /home/horseinthesky/.minikube/profiles/minikube/config.json ...
I0215 15:54:51.204466 141728 lock.go:36] WriteFile acquiring /home/horseinthesky/.minikube/profiles/minikube/config.json: {Name:mk5cb7d5c4de4d8ca19444cc45be9caa15c4f0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0215 15:54:51.204865 141728 cache.go:185] Successfully downloaded all kic artifacts
I0215 15:54:51.204895 141728 start.go:313] acquiring machines lock for minikube: {Name:mk2ca001533c4eb542be71a7f8bf1a970f80797f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0215 15:54:51.205006 141728 start.go:317] acquired machines lock for "minikube" in 90.343µs
I0215 15:54:51.205035 141728 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0215 15:54:51.205113 141728 start.go:126] createHost starting for "" (driver="docker")
I0215 15:54:51.205419 141728 out.go:140] 🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
I0215 15:54:51.205673 141728 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0215 15:54:51.205708 141728 client.go:168] LocalClient.Create starting
I0215 15:54:51.205796 141728 main.go:119] libmachine: Reading certificate data from /home/horseinthesky/.minikube/certs/ca.pem
I0215 15:54:51.205846 141728 main.go:119] libmachine: Decoding PEM data...
I0215 15:54:51.205872 141728 main.go:119] libmachine: Parsing certificate...
I0215 15:54:51.206256 141728 main.go:119] libmachine: Reading certificate data from /home/horseinthesky/.minikube/certs/cert.pem
I0215 15:54:51.206301 141728 main.go:119] libmachine: Decoding PEM data...
I0215 15:54:51.206324 141728 main.go:119] libmachine: Parsing certificate...
I0215 15:54:51.206732 141728 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
W0215 15:54:51.246123 141728 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1
I0215 15:54:51.246368 141728 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0215 15:54:51.246418 141728 cli_runner.go:111] Run: docker network inspect minikube
W0215 15:54:51.284560 141728 cli_runner.go:149] docker network inspect minikube returned with exit code 1
I0215 15:54:51.284609 141728 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0215 15:54:51.284632 141728 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0215 15:54:51.284693 141728 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I0215 15:54:51.323531 141728 network_create.go:104] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0215 15:54:51.323646 141728 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0215 15:54:51.739199 141728 kic.go:100] calculated static IP "192.168.49.2" for the "minikube" container
I0215 15:54:51.739292 141728 cli_runner.go:111] Run: docker ps -a --format {{.Names}}
I0215 15:54:51.777114 141728 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0215 15:54:51.818736 141728 oci.go:102] Successfully created a docker volume minikube
I0215 15:54:51.818826 141728 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
I0215 15:54:52.443204 141728 oci.go:106] Successfully prepared a docker volume minikube
W0215 15:54:52.443279 141728 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0215 15:54:52.443292 141728 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0215 15:54:52.443338 141728 preload.go:105] Found local preload: /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4
I0215 15:54:52.443345 141728 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'"
I0215 15:54:52.443351 141728 kic.go:163] Starting extracting preloaded images to volume ...
I0215 15:54:52.443407 141728 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir
I0215 15:54:52.587290 141728 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
W0215 15:54:52.803994 141728 cli_runner.go:149] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e returned with exit code 125
I0215 15:54:52.804082 141728 client.go:171] LocalClient.Create took 1.598363034s
I0215 15:54:54.804420 141728 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0215 15:54:54.804535 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:54:54.848772 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:54:54.848907 141728 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:54:55.125578 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:54:55.166461 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:54:55.166590 141728 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:54:55.707173 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:54:55.752964 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:54:55.753072 141728 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:54:56.408378 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:54:56.449776 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:54:56.449904 141728 retry.go:31] will retry after 791.196345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:54:57.241399 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:54:57.283801 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0215 15:54:57.284095 141728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
W0215 15:54:57.284293 141728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:54:57.284331 141728 start.go:129] duration metric: createHost completed in 6.079203484s
I0215 15:54:57.284352 141728 start.go:80] releasing machines lock for "minikube", held for 6.079323429s
W0215 15:54:57.284383 141728 start.go:377] error starting host: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (fb5c024ea6c348ce7a077e187d50a60395704595150ee87a157271808775fb66): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49222 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-77354a4a7612: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
I0215 15:54:57.284919 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
W0215 15:54:57.326228 141728 start.go:382] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0215 15:54:57.326466 141728 out.go:181] 🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (fb5c024ea6c348ce7a077e187d50a60395704595150ee87a157271808775fb66): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49222 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-77354a4a7612: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
🤦 StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (fb5c024ea6c348ce7a077e187d50a60395704595150ee87a157271808775fb66): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49222 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-77354a4a7612: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
I0215 15:54:57.326954 141728 start.go:392] Will try again in 5 seconds ...
I0215 15:54:58.883795 141728 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir: (6.440331377s)
I0215 15:54:58.883836 141728 kic.go:172] duration metric: took 6.440482 seconds to extract preloaded images to volume
I0215 15:55:02.327139 141728 start.go:313] acquiring machines lock for minikube: {Name:mk2ca001533c4eb542be71a7f8bf1a970f80797f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0215 15:55:02.327321 141728 start.go:317] acquired machines lock for "minikube" in 124.632ยตs
I0215 15:55:02.327375 141728 start.go:93] Skipping create...Using existing machine configuration
I0215 15:55:02.327395 141728 fix.go:54] fixHost starting:
I0215 15:55:02.327748 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:02.367882 141728 fix.go:107] recreateIfNeeded on minikube: state= err=<nil>
I0215 15:55:02.367943 141728 fix.go:112] machineExists: false. err=machine does not exist
I0215 15:55:02.368132 141728 out.go:119] 🤷 docker "minikube" container is missing, will recreate.
🤷 docker "minikube" container is missing, will recreate.
I0215 15:55:02.368153 141728 delete.go:124] DEMOLISHING minikube ...
I0215 15:55:02.368236 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:02.407846 141728 stop.go:79] host is in state
I0215 15:55:02.407961 141728 main.go:119] libmachine: Stopping "minikube"...
I0215 15:55:02.408038 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:02.447816 141728 kic_runner.go:94] Run: systemctl --version
I0215 15:55:02.447848 141728 kic_runner.go:115] Args: [docker exec --privileged minikube systemctl --version]
I0215 15:55:02.490640 141728 kic_runner.go:94] Run: sudo service kubelet stop
I0215 15:55:02.490676 141728 kic_runner.go:115] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0215 15:55:02.532473 141728 openrc.go:141] stop output:
** stderr **
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
** /stderr **
W0215 15:55:02.532515 141728 kic.go:421] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
I0215 15:55:02.532578 141728 kic_runner.go:94] Run: sudo service kubelet stop
I0215 15:55:02.532595 141728 kic_runner.go:115] Args: [docker exec --privileged minikube sudo service kubelet stop]
I0215 15:55:02.573498 141728 openrc.go:141] stop output:
** stderr **
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
** /stderr **
W0215 15:55:02.573533 141728 kic.go:423] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 1
stdout:
stderr:
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
I0215 15:55:02.573625 141728 kic_runner.go:94] Run: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
I0215 15:55:02.573642 141728 kic_runner.go:115] Args: [docker exec --privileged minikube docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}]
I0215 15:55:02.614394 141728 kic.go:434] unable list containers : docker: docker ps -a --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}: exit status 1
stdout:
stderr:
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
I0215 15:55:02.614432 141728 kic.go:444] successfully stopped kubernetes!
I0215 15:55:02.614509 141728 kic_runner.go:94] Run: pgrep kube-apiserver
I0215 15:55:02.614535 141728 kic_runner.go:115] Args: [docker exec --privileged minikube pgrep kube-apiserver]
I0215 15:55:02.693520 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:05.733671 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:08.774948 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:11.815837 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:14.868233 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:17.908067 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:20.947699 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:23.988198 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:27.030032 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:30.071887 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:33.111911 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:36.151990 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:39.192138 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:42.247640 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:45.287801 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:48.329939 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:51.371511 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:54.413004 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:55:57.452706 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:00.492242 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:03.531632 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:06.573247 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:09.611685 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:12.656459 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:15.698042 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:18.737119 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:21.775737 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:24.817667 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:27.860540 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:30.900689 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:33.942138 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:36.983690 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:40.024177 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:43.064350 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:46.104045 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:49.147793 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:52.187863 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:55.228114 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:56:58.268758 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:01.312797 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:04.353035 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:07.393737 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:10.434750 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:13.475380 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:16.531328 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:19.571359 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:22.611261 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:25.652069 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:28.693753 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:31.732792 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:34.772867 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:37.815143 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:40.857220 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:43.898159 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:46.937568 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:49.978898 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:53.021424 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:56.073195 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:57:59.113384 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:02.153166 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:05.193011 141728 stop.go:59] stop err: Maximum number of retries (60) exceeded
I0215 15:58:05.193072 141728 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
I0215 15:58:05.193599 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
W0215 15:58:05.236766 141728 delete.go:135] deletehost failed: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0215 15:58:05.236872 141728 cli_runner.go:111] Run: docker container inspect -f {{.Id}} minikube
I0215 15:58:05.277447 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:05.317933 141728 cli_runner.go:111] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
W0215 15:58:05.358877 141728 cli_runner.go:149] docker exec --privileged -t minikube /bin/bash -c "sudo init 0" returned with exit code 1
I0215 15:58:05.359098 141728 oci.go:600] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 060bdb7dba4d2e1c05863cd7f92c96acbd63e826f01fad741f51382a97db23e0 is not running
I0215 15:58:06.359676 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:06.399836 141728 oci.go:614] temporary error: container minikube status is but expect it to be exited
I0215 15:58:06.399877 141728 oci.go:620] Successfully shutdown container minikube
I0215 15:58:06.399932 141728 cli_runner.go:111] Run: docker rm -f -v minikube
I0215 15:58:06.443593 141728 cli_runner.go:111] Run: docker container inspect -f {{.Id}} minikube
W0215 15:58:06.481343 141728 cli_runner.go:149] docker container inspect -f {{.Id}} minikube returned with exit code 1
I0215 15:58:06.481460 141728 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I0215 15:58:06.520279 141728 cli_runner.go:111] Run: docker network rm minikube
W0215 15:58:06.955971 141728 delete.go:139] delete failed (probably ok) <nil>
I0215 15:58:06.956012 141728 fix.go:119] Sleeping 1 second for extra luck!
I0215 15:58:07.956178 141728 start.go:126] createHost starting for "" (driver="docker")
I0215 15:58:07.956523 141728 out.go:140] 🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
I0215 15:58:07.956721 141728 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0215 15:58:07.956794 141728 client.go:168] LocalClient.Create starting
I0215 15:58:07.956926 141728 main.go:119] libmachine: Reading certificate data from /home/horseinthesky/.minikube/certs/ca.pem
I0215 15:58:07.957012 141728 main.go:119] libmachine: Decoding PEM data...
I0215 15:58:07.957056 141728 main.go:119] libmachine: Parsing certificate...
I0215 15:58:07.957292 141728 main.go:119] libmachine: Reading certificate data from /home/horseinthesky/.minikube/certs/cert.pem
I0215 15:58:07.957347 141728 main.go:119] libmachine: Decoding PEM data...
I0215 15:58:07.957383 141728 main.go:119] libmachine: Parsing certificate...
I0215 15:58:07.957840 141728 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
W0215 15:58:07.998799 141728 cli_runner.go:149] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}" returned with exit code 1
I0215 15:58:07.998913 141728 network_create.go:249] running [docker network inspect minikube] to gather additional debugging logs...
I0215 15:58:07.998960 141728 cli_runner.go:111] Run: docker network inspect minikube
W0215 15:58:08.038150 141728 cli_runner.go:149] docker network inspect minikube returned with exit code 1
I0215 15:58:08.038200 141728 network_create.go:252] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I0215 15:58:08.038225 141728 network_create.go:254] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I0215 15:58:08.038306 141728 cli_runner.go:111] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I0215 15:58:08.078419 141728 network_create.go:104] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ...
I0215 15:58:08.078522 141728 cli_runner.go:111] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0215 15:58:08.499231 141728 kic.go:100] calculated static IP "192.168.49.2" for the "minikube" container
I0215 15:58:08.499329 141728 cli_runner.go:111] Run: docker ps -a --format {{.Names}}
I0215 15:58:08.538957 141728 cli_runner.go:111] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0215 15:58:08.577788 141728 oci.go:102] Successfully created a docker volume minikube
I0215 15:58:08.577887 141728 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
I0215 15:58:09.104360 141728 oci.go:106] Successfully prepared a docker volume minikube
W0215 15:58:09.104424 141728 oci.go:159] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0215 15:58:09.104440 141728 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0215 15:58:09.104486 141728 preload.go:105] Found local preload: /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4
I0215 15:58:09.104498 141728 cli_runner.go:111] Run: docker info --format "'{{json .SecurityOptions}}'"
I0215 15:58:09.104502 141728 kic.go:163] Starting extracting preloaded images to volume ...
I0215 15:58:09.104557 141728 cli_runner.go:111] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir
I0215 15:58:09.239349 141728 cli_runner.go:111] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
W0215 15:58:09.445368 141728 cli_runner.go:149] docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e returned with exit code 125
I0215 15:58:09.445469 141728 client.go:171] LocalClient.Create took 1.488656381s
I0215 15:58:11.445803 141728 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0215 15:58:11.446028 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:11.485178 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:11.485330 141728 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:11.716760 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:11.758488 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:11.758632 141728 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:12.204014 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:12.250872 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:12.250984 141728 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:12.569409 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:12.614232 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:12.614619 141728 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:13.169367 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:13.210644 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:13.210781 141728 retry.go:31] will retry after 755.539547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:13.966650 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:14.007927 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0215 15:58:14.008052 141728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
W0215 15:58:14.008080 141728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:14.008099 141728 start.go:129] duration metric: createHost completed in 6.051852516s
I0215 15:58:14.008168 141728 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0215 15:58:14.008219 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:14.049292 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:14.049515 141728 retry.go:31] will retry after 200.227965ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:14.250049 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:14.289635 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:14.289753 141728 retry.go:31] will retry after 380.704736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:14.670754 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:14.713536 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:14.713670 141728 retry.go:31] will retry after 738.922478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:15.452888 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:15.493312 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I0215 15:58:15.493426 141728 retry.go:31] will retry after 602.660142ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:15.992247 141728 cli_runner.go:155] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -I lz4 -xf /preloaded.tar -C /extractDir: (6.887645404s)
I0215 15:58:15.992305 141728 kic.go:172] duration metric: took 6.887800 seconds to extract preloaded images to volume
I0215 15:58:16.096360 141728 cli_runner.go:111] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W0215 15:58:16.137732 141728 cli_runner.go:149] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
W0215 15:58:16.137923 141728 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
W0215 15:58:16.138119 141728 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:
stderr:
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
I0215 15:58:16.138276 141728 fix.go:56] fixHost completed within 3m13.810877785s
I0215 15:58:16.138328 141728 start.go:80] releasing machines lock for "minikube", held for 3m13.810968684s
W0215 15:58:16.138665 141728 out.go:181] 😿 Failed to start docker container. Running "minikube delete" may fix it: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
2dd0c42d7373079948a52a9c39a2d27ef8598678cb05e0e00874a5f38630ef58
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (f6b98b542c858c23f28acd359bddb75f33c76722b2f87ec5317cc02d407bbbe6): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49232 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-a9e39f74bff3: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
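The stderr above pinpoints the actual failure: iptables v1.8.7 (legacy) rejected `--dport` while Docker was programming the DNAT rule for the published port, which typically indicates a mismatch between the iptables flavor Docker invokes and the host's netfilter setup. A hedged first diagnostic step is to identify the active flavor; the parsing below mirrors the version line from the log, and the `update-alternatives` paths and nft switch in the comments are assumptions for Debian-style hosts, not verified against this machine:

```shell
# Identify which iptables flavor is in use; falls back to the string
# reported in the log above when iptables is not installed locally.
ver=$(iptables --version 2>/dev/null || echo 'iptables v1.8.7 (legacy)')
flavor=$(printf '%s\n' "$ver" | sed -n 's/.*(\(.*\)).*/\1/p')
flavor=${flavor:-unknown}
echo "iptables flavor: $flavor"
# If the legacy flavor is the culprit (an assumption worth testing),
# Debian-style hosts can switch to the nft variant and restart docker:
#   sudo update-alternatives --set iptables /usr/sbin/iptables-nft
#   sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
#   sudo systemctl restart docker
```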
W0215 15:58:16.139535 141728 out.go:181] ❗ Startup with docker driver failed, trying with alternate driver ssh: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
2dd0c42d7373079948a52a9c39a2d27ef8598678cb05e0e00874a5f38630ef58
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (f6b98b542c858c23f28acd359bddb75f33c76722b2f87ec5317cc02d407bbbe6): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49232 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-a9e39f74bff3: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
I0215 15:58:16.140577 141728 delete.go:280] Deleting minikube
I0215 15:58:16.140661 141728 delete.go:285] minikube configuration: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
W0215 15:58:16.140893 141728 register.go:127] "Deleting" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0215 15:58:16.141035 141728 out.go:119] 🔥 Deleting "minikube" in docker ...
๐ฅ Deleting "minikube" in docker ...
I0215 15:58:16.141151 141728 delete.go:243] deleting possible KIC leftovers for minikube (driver=docker) ...
I0215 15:58:16.141248 141728 cli_runner.go:111] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
W0215 15:58:16.179793 141728 register.go:127] "Deleting" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0215 15:58:16.179904 141728 out.go:119] 🔥 Deleting container "minikube" ...
๐ฅ Deleting container "minikube" ...
I0215 15:58:16.180294 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:16.218052 141728 cli_runner.go:111] Run: docker exec --privileged -t minikube /bin/bash -c "sudo init 0"
W0215 15:58:16.258200 141728 cli_runner.go:149] docker exec --privileged -t minikube /bin/bash -c "sudo init 0" returned with exit code 1
I0215 15:58:16.258273 141728 oci.go:600] error shutdown minikube: docker exec --privileged -t minikube /bin/bash -c "sudo init 0": exit status 1
stdout:
stderr:
Error response from daemon: Container 2dd0c42d7373079948a52a9c39a2d27ef8598678cb05e0e00874a5f38630ef58 is not running
I0215 15:58:17.258581 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
I0215 15:58:17.298625 141728 oci.go:614] temporary error: container minikube status is but expect it to be exited
I0215 15:58:17.298689 141728 oci.go:620] Successfully shutdown container minikube
I0215 15:58:17.298759 141728 cli_runner.go:111] Run: docker rm -f -v minikube
I0215 15:58:17.343172 141728 volumes.go:36] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0215 15:58:17.343289 141728 cli_runner.go:111] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
I0215 15:58:17.382042 141728 cli_runner.go:111] Run: docker volume rm --force minikube
I0215 15:58:17.833744 141728 cli_runner.go:111] Run: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}}
I0215 15:58:17.878080 141728 cli_runner.go:111] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},{{$first := true}} "ContainerIPs": [{{range $k,$v := .Containers }}{{if $first}}{{$first = false}}{{else}}, {{end}}"{{$v.IPv4Address}}"{{end}}]}"
I0215 15:58:17.920996 141728 cli_runner.go:111] Run: docker network rm minikube
I0215 15:58:18.276134 141728 volumes.go:58] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I0215 15:58:18.276218 141728 cli_runner.go:111] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
I0215 15:58:18.314995 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
W0215 15:58:18.354663 141728 cli_runner.go:149] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0215 15:58:18.354733 141728 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: minikube
W0215 15:58:18.354950 141728 register.go:127] "Deleting" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0215 15:58:18.355013 141728 out.go:119] 🔥 Removing /home/horseinthesky/.minikube/machines/minikube ...
W0215 15:58:18.355555 141728 register.go:127] "Deleting" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0215 15:58:18.355649 141728 out.go:119] 💀 Removed all traces of the "minikube" cluster.
I0215 15:58:18.355734 141728 start.go:279] selected driver: ssh
I0215 15:58:18.355769 141728 start.go:702] validating driver "ssh" against <nil>
I0215 15:58:18.355816 141728 start.go:713] status for ssh: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:58:18.355897 141728 start_flags.go:249] no existing cluster config was found, will generate one from the flags
I0215 15:58:18.355959 141728 start_flags.go:267] Using suggested 4000MB memory alloc based on sys=16007MB, container=0MB
I0215 15:58:18.356135 141728 start_flags.go:671] Wait components to verify : map[apiserver:true system_pods:true]
I0215 15:58:18.356161 141728 cni.go:74] Creating CNI manager for ""
I0215 15:58:18.356173 141728 cni.go:139] CNI unnecessary in this configuration, recommending no CNI
I0215 15:58:18.356192 141728 start_flags.go:390] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
I0215 15:58:18.356361 141728 out.go:119] 👍 Starting control plane node minikube in cluster minikube
I0215 15:58:18.356381 141728 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0215 15:58:18.356415 141728 preload.go:105] Found local preload: /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4
I0215 15:58:18.356431 141728 cache.go:54] Caching tarball of preloaded images
I0215 15:58:18.356449 141728 preload.go:131] Found /home/horseinthesky/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0215 15:58:18.356466 141728 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker
I0215 15:58:18.356549 141728 profile.go:148] Saving config to /home/horseinthesky/.minikube/profiles/minikube/config.json ...
I0215 15:58:18.356638 141728 lock.go:36] WriteFile acquiring /home/horseinthesky/.minikube/profiles/minikube/config.json: {Name:mk5cb7d5c4de4d8ca19444cc45be9caa15c4f0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0215 15:58:18.356775 141728 cache.go:185] Successfully downloaded all kic artifacts
I0215 15:58:18.356802 141728 start.go:313] acquiring machines lock for minikube: {Name:mk0fd380a2c4cd928c48cb0c23d44a4902deaf84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0215 15:58:18.356864 141728 start.go:317] acquired machines lock for "minikube" in 41.08ยตs
I0215 15:58:18.356886 141728 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0215 15:58:18.356946 141728 start.go:126] createHost starting for "" (driver="ssh")
I0215 15:58:18.356983 141728 start.go:129] duration metric: createHost completed in 24.128ยตs
I0215 15:58:18.356996 141728 start.go:80] releasing machines lock for "minikube", held for 115.364ยตs
W0215 15:58:18.357016 141728 start.go:377] error starting host: config: please provide an IP address
I0215 15:58:18.357074 141728 cli_runner.go:111] Run: docker container inspect minikube --format={{.State.Status}}
W0215 15:58:18.396332 141728 cli_runner.go:149] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I0215 15:58:18.396419 141728 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:
stderr:
Error: No such container: minikube
I0215 15:58:18.396500 141728 cli_runner.go:111] Run: sudo -n podman container inspect minikube --format={{.State.Status}}
W0215 15:58:18.533216 141728 cli_runner.go:149] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 125
I0215 15:58:18.533458 141728 delete.go:46] couldn't inspect container "minikube" before deleting: unknown state "minikube": sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 125
stdout:
stderr:
time="2021-02-15T15:58:18+03:00" level=error msg="The storage 'driver' option must be set in /etc/containers/storage.conf, guarantee proper operation."
Error: error inspecting object: no such container minikube
W0215 15:58:18.533830 141728 start.go:382] delete host: Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
W0215 15:58:18.534150 141728 out.go:181] 🤦 StartHost failed, but will try again: config: please provide an IP address
I0215 15:58:18.534462 141728 start.go:392] Will try again in 5 seconds ...
I0215 15:58:23.534834 141728 start.go:313] acquiring machines lock for minikube: {Name:mk0fd380a2c4cd928c48cb0c23d44a4902deaf84 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0215 15:58:23.535025 141728 start.go:317] acquired machines lock for "minikube" in 123.55ยตs
I0215 15:58:23.535056 141728 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0215 15:58:23.535166 141728 start.go:126] createHost starting for "" (driver="ssh")
I0215 15:58:23.535248 141728 start.go:129] duration metric: createHost completed in 55.674ยตs
I0215 15:58:23.535314 141728 start.go:80] releasing machines lock for "minikube", held for 269.332ยตs
W0215 15:58:23.535570 141728 out.go:181] 😿 Failed to start ssh bare metal machine. Running "minikube delete" may fix it: config: please provide an IP address
W0215 15:58:23.535743 141728 out.go:181] Startup with ssh driver failed, trying with alternate driver none: Failed to start host: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e: exit status 125
stdout:
2dd0c42d7373079948a52a9c39a2d27ef8598678cb05e0e00874a5f38630ef58
stderr:
docker: Error response from daemon: driver failed programming external connectivity on endpoint minikube (f6b98b542c858c23f28acd359bddb75f33c76722b2f87ec5317cc02d407bbbe6): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 49232 -j DNAT --to-destination 192.168.49.2:8443 ! -i br-a9e39f74bff3: iptables v1.8.7 (legacy): unknown option "--dport"
Try `iptables -h' or 'iptables --help' for more information.
(exit status 2)).
I0215 15:58:23.536001 141728 delete.go:280] Deleting minikube
I0215 15:58:23.536032 141728 delete.go:285] minikube configuration: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:ssh HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false}
W0215 15:58:23.536386 141728 register.go:127] "Deleting" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
I0215 15:58:23.536464 141728 out.go:119] Uninstalling Kubernetes v1.20.2 using kubeadm ...
I0215 15:58:23.536491 141728 host.go:66] Checking if "minikube" exists ...
W0215 15:58:23.536624 141728 out.go:181] Failed to delete cluster minikube, proceeding with retry anyway.
I0215 15:58:23.536649 141728 start.go:279] selected driver: none
I0215 15:58:23.536664 141728 start.go:702] validating driver "none" against <nil>
I0215 15:58:23.536690 141728 start.go:713] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0215 15:58:23.536728 141728 start.go:1217] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
I0215 15:58:23.537119 141728 out.go:119]
W0215 15:58:23.537276 141728 out.go:181] ❌ Exiting due to GUEST_MISSING_CONNTRACK: Sorry, Kubernetes 1.20.2 requires conntrack to be installed in root's path
I0215 15:58:23.537302 141728 out.go:119]
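The final error in the log is at least directly actionable: the fallback "none" driver runs Kubernetes straight on the host, so conntrack has to be visible in root's PATH. A possible fix, with package names assumed per distro:

```shell
# Install conntrack so the "none" driver's preflight check passes.
# Ubuntu/Debian:
sudo apt-get install -y conntrack
# Arch (the package name differs):
#   sudo pacman -S conntrack-tools
# Verify it is visible to root:
sudo which conntrack
```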
Docker service is up and running. Is this related to the issue?
@horseinthesky based on your output I'd check whether the minikube network is created among the docker networks (`docker network ls`). If it is, you might want to check what is in the way of the network communication; it might be a firewall.
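If the network does exist, a quick follow-up (a sketch; the `--format` template assumes Docker's standard IPAM fields) is to inspect the subnet it was given:

```shell
# Show the subnet/gateway of the minikube bridge network; for the docker
# driver this is 192.168.49.0/24 by default.
docker network inspect minikube \
  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
```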
@LordBertson While it is trying to create a cluster, I do see the minikube network in the list:
docker network ls
NETWORK ID NAME DRIVER SCOPE
1766f5c232b8 bridge bridge local
63963d64db7c host host local
b8e5a7301992 minikube bridge local
97909e90a0f4 none null local
iptables looks like this
sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Everything seems fine.
After minikube fails to create a cluster, the network and the container are deleted.
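One caveat about the listing above: `iptables -L` with no `-t` flag only shows the filter table, while the rule that failed in the log was being added to the nat table. A hedged follow-up check:

```shell
# Inspect the nat table, where Docker programs port publishing:
sudo iptables -t nat -L DOCKER -n --line-numbers

# Also note which backend the binary uses; a legacy iptables paired with
# an nf_tables-based ruleset is a known source of odd parse errors like
# the "unknown option" one above:
v="$(iptables --version)"          # e.g. "iptables v1.8.7 (legacy)"
case "$v" in
  *"(legacy)"*)    echo "legacy backend" ;;
  *"(nf_tables)"*) echo "nf_tables backend" ;;
  *)               echo "unknown backend" ;;
esac
```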
@horseinthesky in that case I'd try the default: `docker system prune`, `minikube delete`, and then `minikube start --driver=docker`. Then another question would be whether docker in general works. Say, can you start other docker containers, exec into them (`docker exec -it CONTAINER_ID /bin/bash`), and such?
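To extend that sanity check to the port-publishing path, which is exactly what failed in the log, something like this rough sketch (the container name and image are arbitrary) should either succeed or reproduce the iptables error in isolation:

```shell
# Run a throwaway container with a published port, the same mechanism
# minikube's docker run used above:
docker run -d --name publish-test -p 127.0.0.1::80 nginx:alpine
docker port publish-test 80        # shows the randomly assigned host port
docker exec publish-test sh -c 'echo ok'
docker rm -f publish-test
```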
I'd definitely consider uninstalling Minikube and reinstalling from scratch; here I'd go for the official binaries (described here).
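For reference, the upstream Linux binary install is just two commands (the URL is the standard minikube release path):

```shell
# Download the latest minikube release binary and install it on PATH:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
```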
If that does not help and your current setup allows it, I would consider reinstalling Docker from scratch, as described in their official docs. Since Arch is not covered there, you can use the docs as reference material and find the respective packages in your preferred package manager, I suppose.
@LordBertson I've tried all these steps, but they don't help. Docker works perfectly fine by itself and runs containers with no issues.
I've opened https://github.com/kubernetes/minikube/issues/10496, since it seems to be a different issue.
@esentai8 did you have a chance to try minikube 1.17.1 or 1.18.0? Can we close this issue?
Yes, the issue is resolved. Thank you!
Excellent, I'll go ahead and close this issue then.
I am using Ubuntu 20.04 LTS
Steps to reproduce the issue:
1. Installed kubectl according to the instructions from the official website
Full output of failed command:
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command:
I tried many possible solutions available on the internet (e.g. deleting minikube and restarting, destroying all docker containers running and restarting, etc.)