kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

[Windows 10, Hyper-V] Unable to get host IP: ip for interface: Error finding IPV4 address for vEthernet #8152

Closed: danielepo closed this issue 4 years ago

danielepo commented 4 years ago

Steps to reproduce the issue:

  1. Run minikube start --driver=hyperv; the command errors out with Unable to get host IP: ip for interface.

Running minikube status then returns:

E0514 22:54:24.032593   13800 status.go:232] kubeconfig endpoint: extract IP: "minikube" does not appear in C:\Users\danpoz/.kube/config
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured

Full output of failed command:

W0514 22:56:44.581572   28940 root.go:252] Error reading config file at C:\Users\danpoz\.minikube\config\config.json: open C:\Users\danpoz\.minikube\config\config.json: Impossibile trovare il file specificato. [Italian: The system cannot find the file specified.]
I0514 22:56:44.598567   28940 start.go:99] hostinfo: {"hostname":"POCLT147","uptime":10190,"bootTime":1589479614,"procs":325,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.17134 Build 17134","kernelVersion":"","virtualizationSystem":"","virtualizationRole":"","hostid":"2c86cb0f-b597-49b5-a387-42d121557664"}

W0514 22:56:44.599530   28940 start.go:107] gopshost.Virtualization returned error: not implemented yet
* minikube v1.10.1 on Microsoft Windows 10 Pro 10.0.17134 Build 17134
I0514 22:56:44.606554   28940 notify.go:125] Checking for updates...
I0514 22:56:44.614534   28940 driver.go:253] Setting default libvirt URI to qemu:///system
* Using the hyperv driver based on existing profile
I0514 22:56:45.608638   28940 start.go:215] selected driver: hyperv
I0514 22:56:45.608638   28940 start.go:594] validating driver "hyperv" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.10.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.10@sha256:f58e0c4662bac8a9b5dda7984b185bad8502ade5d9fa364bf2755d636ab51438 Memory:3900 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.2 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0514 22:56:45.608638   28940 start.go:600] status for hyperv: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0514 22:56:45.609593   28940 iso.go:118] acquiring lock: {Name:mk94ac12a9ad34fbddc9048d341dab4a4669000a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
* Starting control plane node minikube in cluster minikube
I0514 22:56:45.612592   28940 preload.go:81] Checking if preload exists for k8s version v1.18.2 and runtime docker
I0514 22:56:45.613594   28940 preload.go:96] Found local preload: C:\Users\danpoz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4
I0514 22:56:45.613594   28940 cache.go:48] Caching tarball of preloaded images
I0514 22:56:45.613594   28940 preload.go:122] Found C:\Users\danpoz\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v3-v1.18.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0514 22:56:45.613594   28940 cache.go:51] Finished verifying existence of preloaded tar for  v1.18.2 on docker
I0514 22:56:45.614593   28940 profile.go:156] Saving config to C:\Users\danpoz\.minikube\profiles\minikube\config.json ...
I0514 22:56:45.638598   28940 cache.go:132] Successfully downloaded all kic artifacts
I0514 22:56:45.638598   28940 start.go:223] acquiring machines lock for minikube: {Name:mk9c8e1546cf668ad97190466b723529094a4d54 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0514 22:56:45.640593   28940 start.go:227] acquired machines lock for "minikube" in 996µs
I0514 22:56:45.641595   28940 start.go:87] Skipping create...Using existing machine configuration
I0514 22:56:45.641595   28940 fix.go:53] fixHost starting:
I0514 22:56:45.643592   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0514 22:56:46.563421   28940 main.go:110] libmachine: [stdout =====>] : Running

I0514 22:56:46.563421   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:46.563421   28940 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0514 22:56:46.564388   28940 fix.go:131] unexpected machine state, will restart: <nil>
* Updating the running hyperv "minikube" VM ...
I0514 22:56:46.566392   28940 machine.go:86] provisioning docker machine ...
I0514 22:56:46.566392   28940 buildroot.go:163] provisioning hostname "minikube"
I0514 22:56:46.566392   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0514 22:56:47.480055   28940 main.go:110] libmachine: [stdout =====>] : Running

I0514 22:56:47.480055   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:47.480055   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
I0514 22:56:48.786269   28940 main.go:110] libmachine: [stdout =====>] :
I0514 22:56:48.786269   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:48.788283   28940 machine.go:89] provisioned docker machine in 2.2218909s
I0514 22:56:48.792276   28940 fix.go:55] fixHost completed within 3.1506812s
I0514 22:56:48.792276   28940 start.go:74] releasing machines lock for "minikube", held for 3.1516836s
! StartHost failed, but will try again: provision: IP not found
I0514 22:56:53.794482   28940 start.go:223] acquiring machines lock for minikube: {Name:mk9c8e1546cf668ad97190466b723529094a4d54 Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0514 22:56:53.796550   28940 start.go:227] acquired machines lock for "minikube" in 2.0676ms
I0514 22:56:53.797549   28940 start.go:87] Skipping create...Using existing machine configuration
I0514 22:56:53.801601   28940 fix.go:53] fixHost starting:
I0514 22:56:53.806541   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0514 22:56:54.734036   28940 main.go:110] libmachine: [stdout =====>] : Running

I0514 22:56:54.734036   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:54.738055   28940 fix.go:105] recreateIfNeeded on minikube: state=Running err=<nil>
W0514 22:56:54.741034   28940 fix.go:131] unexpected machine state, will restart: <nil>
* Updating the running hyperv "minikube" VM ...
I0514 22:56:54.744034   28940 machine.go:86] provisioning docker machine ...
I0514 22:56:54.749037   28940 buildroot.go:163] provisioning hostname "minikube"
I0514 22:56:54.749037   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0514 22:56:55.663884   28940 main.go:110] libmachine: [stdout =====>] : Running

I0514 22:56:55.663884   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:55.665884   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
I0514 22:56:56.821807   28940 main.go:110] libmachine: [stdout =====>] : 172.26.227.169

I0514 22:56:56.821807   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:56.839818   28940 main.go:110] libmachine: Using SSH client type: native
I0514 22:56:56.840810   28940 main.go:110] libmachine: &{{{<nil> 0 [] [] []} docker [0x7c0950] 0x7c0920 <nil>  [] 0s} 172.26.227.169 22 <nil> <nil>}
I0514 22:56:56.841970   28940 main.go:110] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0514 22:56:56.963742   28940 main.go:110] libmachine: SSH cmd err, output: <nil>: minikube

I0514 22:56:56.963742   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state
I0514 22:56:57.879523   28940 main.go:110] libmachine: [stdout =====>] : Running

I0514 22:56:57.879523   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:57.881526   28940 main.go:110] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0]
I0514 22:56:59.026523   28940 main.go:110] libmachine: [stdout =====>] :
I0514 22:56:59.026523   28940 main.go:110] libmachine: [stderr =====>] :
I0514 22:56:59.031518   28940 machine.go:89] provisioned docker machine in 4.2824804s
I0514 22:56:59.032521   28940 fix.go:55] fixHost completed within 5.2309201s
I0514 22:56:59.033516   28940 start.go:74] releasing machines lock for "minikube", held for 5.2359664s
* Failed to start hyperv VM. "minikube start" may fix it: provision: IP not found
I0514 22:56:59.034517   28940 exit.go:58] WithError(error provisioning host)=Failed to start host: provision: IP not found called from:
goroutine 1 [running]:
runtime/debug.Stack(0x40acf1, 0x18d3660, 0x18b8300)
        /usr/local/go/src/runtime/debug/stack.go:24 +0xa4
k8s.io/minikube/pkg/minikube/exit.WithError(0x1b3f8de, 0x17, 0x1dfc340, 0xc0002de120)
        /app/pkg/minikube/exit/exit.go:58 +0x3b
k8s.io/minikube/cmd/minikube/cmd.runStart(0x2b53760, 0xc00047ee80, 0x0, 0x2)
        /app/cmd/minikube/cmd/start.go:170 +0xac9
github.com/spf13/cobra.(*Command).execute(0x2b53760, 0xc00047ee60, 0x2, 0x2, 0x2b53760, 0xc00047ee60)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x2b1
github.com/spf13/cobra.(*Command).ExecuteC(0x2b527a0, 0x0, 0x0, 0xc000118801)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x350
github.com/spf13/cobra.(*Command).Execute(...)
        /go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
k8s.io/minikube/cmd/minikube/cmd.Execute()
        /app/cmd/minikube/cmd/root.go:112 +0x6f5
main.main()
        /app/cmd/minikube/main.go:66 +0xf1
W0514 22:56:59.035516   28940 out.go:201] error provisioning host: Failed to start host: provision: IP not found
*
X error provisioning host: Failed to start host: provision: IP not found
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

Optional: Full output of minikube logs command:

``` * ==> Docker <== * -- Logs begin at Thu 2020-05-14 12:28:00 UTC, end at Thu 2020-05-14 20:59:16 UTC. -- * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.669688108Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.669715008Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.669760809Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.669933812Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.669981413Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670066715Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/ daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670073815Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.19.107\n": exit status 1" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670079315Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/da emon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670136616Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670148016Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670168317Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670178817Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670186317Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670195517Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670205117Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670213818Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670221918Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." 
type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670229918Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670278319Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670311719Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670730327Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670790529Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670834829Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670844130Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670852330Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670859530Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670866630Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670874630Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670882030Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670889430Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670896831Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670918731Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670927631Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670935231Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.670942831Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.671056034Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.671109935Z" level=info msg=serving... 
address="/var/run/docker/containerd/containerd.sock" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.671118835Z" level=info msg="containerd successfully booted in 0.004165s" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.675940526Z" level=info msg="parsed scheme: \"unix\"" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.675972927Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.675986027Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.675996627Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.676996246Z" level=info msg="parsed scheme: \"unix\"" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.677010247Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.677019447Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.677025847Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706110599Z" level=warning msg="Your kernel does not support cgroup blkio weight" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706148700Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706157400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706162600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706168500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706173300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.706399705Z" level=info msg="Loading containers: start." * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.791157114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.828433021Z" level=info msg="Loading containers: done." * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.847681687Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.847741488Z" level=info msg="Daemon has completed initialization" * May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.865168219Z" level=info msg="API listen on /var/run/docker.sock" * May 14 18:24:16 minikube systemd[1]: Started Docker Application Container Engine. 
* May 14 18:24:16 minikube dockerd[4820]: time="2020-05-14T18:24:16.865809631Z" level=info msg="API listen on [::]:2376" * * ==> container status <== * time="2020-05-14T20:59:18Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded" * CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES * * ==> describe nodes <== E0514 22:59:19.026898 12440 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1 stdout: stderr: error: stat /var/lib/minikube/kubeconfig: no such file or directory output: "\n** stderr ** \nerror: stat /var/lib/minikube/kubeconfig: no such file or directory\n\n** /stderr **" * * ==> dmesg <== * [May14 12:28] smpboot: 128 Processors exceeds NR_CPUS limit of 64 * [ +0.148134] You have booted with nomodeset. This means your GPU drivers are DISABLED * [ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly * [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it * [ +0.154576] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. * [ +0.019760] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug, * * this clock source is slow. Consider trying other clock sources * [ +2.933179] Unstable clock detected, switching default tracing clock to "global" * If you want to keep using the local clock, then add: * "trace_clock=local" * on the kernel command line * [ +0.000037] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 * [ +0.550766] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons * [ +1.267107] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument * [ +0.004209] systemd-fstab-generator[1287]: Ignoring "noauto" for root device * [ +0.006357] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based fir ewalling. * [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) * [May14 12:29] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. * [ +0.043661] vboxguest: loading out-of-tree module taints kernel. * [ +0.004297] vboxguest: PCI device not found, probably running on physical hardware. 
* [ +25.396570] systemd-fstab-generator[2423]: Ignoring "noauto" for root device * [ +0.105573] systemd-fstab-generator[2433]: Ignoring "noauto" for root device * [ +13.162102] systemd-fstab-generator[2728]: Ignoring "noauto" for root device * [May14 12:31] NFSD: Unable to end grace period: -110 * [May14 18:24] systemd-fstab-generator[4799]: Ignoring "noauto" for root device * [ +0.059596] systemd-fstab-generator[4809]: Ignoring "noauto" for root device * [ +1.163397] kauditd_printk_skb: 107 callbacks suppressed * [ +10.541090] systemd-fstab-generator[5074]: Ignoring "noauto" for root device * [May14 20:40] systemd-fstab-generator[6145]: Ignoring "noauto" for root device * [May14 20:46] systemd-fstab-generator[6703]: Ignoring "noauto" for root device * [May14 20:48] systemd-fstab-generator[6988]: Ignoring "noauto" for root device * [May14 20:53] systemd-fstab-generator[7443]: Ignoring "noauto" for root device * * ==> kernel <== * 20:59:19 up 8:30, 0 users, load average: 0.00, 0.00, 0.00 * Linux minikube 4.19.107 #1 SMP Mon May 11 14:51:04 PDT 2020 x86_64 GNU/Linux * PRETTY_NAME="Buildroot 2019.02.10" * * ==> kubelet <== * -- Logs begin at Thu 2020-05-14 12:28:00 UTC, end at Thu 2020-05-14 20:59:19 UTC. -- * -- No entries -- ! unable to fetch logs for: describe nodes ```
medyagh commented 4 years ago

@danielepo what version of minikube do you use?

danielepo commented 4 years ago

According to the logs, minikube v1.10.1. Running minikube version gives this information:

minikube version: v1.10.1
commit: 63ab801ac27e5742ae442ce36dff7877dcccb278
medyagh commented 4 years ago

@danielepo do you happen to have done anything to your Hyper-V default switch, like deleting it? I think the problem is that you don't have a default switch.

Meanwhile, if that is true, minikube should do a better job of detecting it and giving a better solution message. Do you mind checking?

Meanwhile, have you tried out the newest driver, the Docker driver, with the latest version of minikube? You could try minikube delete followed by minikube start --driver=docker.

For more information on the Docker driver, check out: https://minikube.sigs.k8s.io/docs/drivers/docker/

robrich commented 4 years ago

I get this error on 1.10.1 and 1.10.0 but not on 1.9.2. I'm running on Windows 10 Pro 1909.

The command I use is

minikube.exe start --vm-driver="hyperv" --hyperv-virtual-switch="minikube" --kubernetes-version=1.18.2

("minikube" is the name of the hyper-v switch that has external access.)

Output of the command from 1.10.0 and 1.10.1:

* minikube v1.10.1 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
  - MINIKUBE_ACTIVE_DOCKERD=minikube
* Using the hyperv driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Updating the running hyperv "minikube" VM ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.5 ...
E0518 17:20:09.157958    2988 start.go:95] Unable to get host IP: ip for interface (minikube): Error finding IPV4 address for vEthernet (minikube)
*
X failed to start node: startup failed: Failed to setup kubeconfig: ip for interface (minikube): Error finding IPV4 address for vEthernet (minikube)
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

Output of the command from 1.9.2:

* minikube v1.9.2 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
  - MINIKUBE_ACTIVE_DOCKERD=minikube
* Using the hyperv driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Restarting existing hyperv VM for "minikube" ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.5 ...
* Enabling addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube"
medyagh commented 4 years ago

@robrich I'm not sure what the cause of this issue is, but in the newest versions of minikube you no longer need to specify the switch; it is auto-detected. Do you mind trying without it?

@danielepo @robrich Meanwhile, have you tried out the newest driver, the Docker driver, with the latest version of minikube? You could try minikube delete followed by minikube start --driver=docker.

For more information on the Docker driver, check out: https://minikube.sigs.k8s.io/docs/drivers/docker/

I also recommend using the latest minikube (not relevant to your issue, but it includes some other improvements).

robrich commented 4 years ago

I did minikube delete, upgraded to 1.11.0, ran minikube start ..., and it started right up. It still wasn't able to detect my switch correctly, but restarting and passing the arg worked like a charm.

--driver=docker seems interesting but I'm not sure I understand the benefit. If I wanted a k8s cluster on Docker, I'd use Docker Desktop and turn on Kubernetes mode. What does minikube --driver=docker yield that Docker Desktop doesn't? (Aside from choosing my own k8s version.) What don't I understand yet?

andrewrotherham commented 4 years ago

Hi, Medyagh's advice does work, but I'd quite like to know why doing it with Hyper-V doesn't (when it did yesterday on the exact same machine). Also, with Hyper-V I had a handy way of seeing the status of my minikube VM; I can't see a GUI way to look at it in Docker (I am new to this). Big thanks to Medyagh though, I've been tearing my hair out all day.

sammym1982 commented 4 years ago

@medyagh I ran into the same issue. In my case I am trying to start minikube from inside the virtual machine, so my network interface names were as follows:

PS C:\WINDOWS\system32> netsh interface ipv4 show interface

Idx     Met         MTU          State                Name
---  ----------  ----------  ------------  ---------------------------
  1          75  4294967295  connected     Loopback Pseudo-Interface 1
 14          15        1500  connected     Ethernet 7
 16        5000        1500  connected     vEthernet (Default Switch) 2

As you can see, the name is vEthernet (Default Switch) 2 and not vEthernet (Default Switch), which is the name the implementation at https://github.com/kubernetes/minikube/blob/20179ef8ee3253043637132862970095132557df/pkg/minikube/cluster/ip.go#L56 looks for.

I am not sure if there is a better way to find the interface name, but I think a better approach than the current implementation would be to do a prefix match while looping over all interface names, as done in the Stack Overflow link mentioned at https://github.com/kubernetes/minikube/blob/20179ef8ee3253043637132862970095132557df/pkg/minikube/cluster/ip.go#L134

The interface cannot be renamed, as the host uses the same name, so it's hard to match this hard-coded pattern in code.
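
For anyone who wants to confirm what Go actually sees on an affected host, a tiny throwaway program like this (just a diagnostic sketch, not minikube code) prints the interface names so they can be compared against the hard-coded "vEthernet (Default Switch)":

```
package main

import (
	"fmt"
	"net"
)

// Print every network interface name Go can see on this host.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		fmt.Println(iface.Name)
	}
}
```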

sharifelgamal commented 4 years ago

I agree that we should do a better job of finding the interface name. It should be as easy as calling net.Interfaces, looping through them and prefix matching on the names.
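
A minimal sketch of that idea (a hypothetical helper, not the actual minikube code; the function name and error wording are made up here):

```
package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceByPrefix returns the first interface whose name starts with
// prefix, so "vEthernet (Default Switch)" would also match
// "vEthernet (Default Switch) 2".
func findInterfaceByPrefix(prefix string) (*net.Interface, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for i := range ifaces {
		if strings.HasPrefix(ifaces[i].Name, prefix) {
			return &ifaces[i], nil
		}
	}
	return nil, fmt.Errorf("no interface with name prefix %q", prefix)
}

func main() {
	iface, err := findInterfaceByPrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	addrs, _ := iface.Addrs()
	fmt.Println(iface.Name, addrs)
}
```

A real fix in ip.go would presumably still need to pick the IPv4 address off the matched interface, the way the current code does for the exactly-named one.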

Help wanted!

sammym1982 commented 4 years ago

This is occurring for other folks on my team as well. I would have helped out with this, but I have zero experience with Go, so I can't contribute quickly without ramping up. Looking forward to help on this.

Banzy666 commented 4 years ago

@sammym1982 I have a Hyper-V switch named "Wan" and a local interface vEthernet (Default Switch). After renaming vEthernet (Default Switch) to vEthernet (Wan), everything works +_+