kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

minikube host IP incorrect if not 192.168.64.1 - minikube mount fails #11510

Closed: iamnoah closed this issue 1 year ago

iamnoah commented 3 years ago

Steps to reproduce the issue:

  1. Have something (???) about the host machine that causes hyperkit to assign bridge100 an address other than 192.168.64.1
  2. minikube start --vm=true
  3. Observe that the bridge100 adapter has the address 192.168.168.1 in this case.
  4. Try minikube mount --alsologtostderr /tmp:/tmp (mount times out, logs below; a possible workaround sketch follows these steps)
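
For anyone hitting the same mismatch, a possible workaround (not the fix tracked by this issue) is to tell `minikube mount` which host address to use instead of letting it assume 192.168.64.1. The sketch below assumes hyperkit's bridge interface is `bridge100` and that your minikube version supports the `--ip` override for `minikube mount`; `HOST_IP` is just an illustrative variable name.

```
# Workaround sketch, assuming the hyperkit bridge is bridge100 and that
# `minikube mount --ip` is available in your minikube version.

# 1. Find the address hyperkit actually assigned to bridge100 on this host
#    (192.168.168.1 in this report, rather than the expected 192.168.64.1).
HOST_IP=$(ifconfig bridge100 | awk '/inet / {print $2}')
echo "bridge100 address: ${HOST_IP}"

# 2. Bind the userspace 9p file server to that address instead of 192.168.64.1.
minikube mount --ip "${HOST_IP}" --alsologtostderr /tmp:/tmp
```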

Full output of failed command:

```
I0526 05:47:14.533346  78273 out.go:291] Setting OutFile to fd 1 ...
I0526 05:47:14.533733  78273 out.go:343] isatty.IsTerminal(1) = true
I0526 05:47:14.533743  78273 out.go:304] Setting ErrFile to fd 2...
I0526 05:47:14.533749  78273 out.go:343] isatty.IsTerminal(2) = true
I0526 05:47:14.533903  78273 root.go:316] Updating PATH: /Users/ilker/.minikube/bin
I0526 05:47:14.534480  78273 mustload.go:65] Loading cluster: minikube
I0526 05:47:14.535857  78273 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit
I0526 05:47:14.535928  78273 main.go:128] libmachine: Launching plugin server for driver hyperkit
I0526 05:47:14.551822  78273 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:63641
I0526 05:47:14.552836  78273 main.go:128] libmachine: () Calling .GetVersion
I0526 05:47:14.553729  78273 main.go:128] libmachine: Using API Version 1
I0526 05:47:14.553756  78273 main.go:128] libmachine: () Calling .SetConfigRaw
I0526 05:47:14.554271  78273 main.go:128] libmachine: () Calling .GetMachineName
I0526 05:47:14.554472  78273 main.go:128] libmachine: (minikube) Calling .GetState
I0526 05:47:14.554686  78273 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0526 05:47:14.554928  78273 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264
I0526 05:47:14.557662  78273 host.go:66] Checking if "minikube" exists ...
I0526 05:47:14.558425  78273 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit
I0526 05:47:14.558497  78273 main.go:128] libmachine: Launching plugin server for driver hyperkit
I0526 05:47:14.579238  78273 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:63647
I0526 05:47:14.579916  78273 main.go:128] libmachine: () Calling .GetVersion
I0526 05:47:14.580454  78273 main.go:128] libmachine: Using API Version 1
I0526 05:47:14.580516  78273 main.go:128] libmachine: () Calling .SetConfigRaw
I0526 05:47:14.581238  78273 main.go:128] libmachine: () Calling .GetMachineName
I0526 05:47:14.581477  78273 main.go:128] libmachine: (minikube) Calling .DriverName
I0526 05:47:14.581705  78273 main.go:128] libmachine: (minikube) Calling .DriverName
I0526 05:47:14.583481  78273 main.go:128] libmachine: (minikube) Calling .DriverName
I0526 05:47:14.598059  78273 out.go:170] :file_folder: Mounting host path /tmp into VM as /tmp ...
:file_folder: Mounting host path /tmp into VM as /tmp ...
I0526 05:47:14.622725  78273 out.go:170]   ▪ Mount type:
  ▪ Mount type:
I0526 05:47:14.653287  78273 out.go:170]   ▪ User ID:   docker
  ▪ User ID:   docker
I0526 05:47:14.675983  78273 out.go:170]   ▪ Group ID:   docker
  ▪ Group ID:   docker
I0526 05:47:14.697206  78273 out.go:170]   ▪ Version:   9p2000.L
  ▪ Version:   9p2000.L
I0526 05:47:14.718222  78273 out.go:170]   ▪ Message Size: 262144
  ▪ Message Size: 262144
I0526 05:47:14.731887  78273 out.go:170]   ▪ Permissions: 755 (-rwxr-xr-x)
  ▪ Permissions: 755 (-rwxr-xr-x)
I0526 05:47:14.757830  78273 out.go:170]   ▪ Options:   map[]
  ▪ Options:   map[]
I0526 05:47:14.806443  78273 out.go:170]   ▪ Bind Address: 192.168.64.1:63649
  ▪ Bind Address: 192.168.64.1:63649
W0526 05:47:14.806711  78273 out.go:424] no arguments passed for ":rocket: Userspace file server: " - returning raw string
I0526 05:47:14.807089  78273 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /tmp | grep /tmp)" != "x" ] && sudo umount -f /tmp || echo "
I0526 05:47:14.832049  78273 out.go:170] :rocket: Userspace file server:
:rocket: Userspace file server:
I0526 05:47:14.832218  78273 main.go:128] libmachine: (minikube) Calling .GetSSHHostname
ufs starting
I0526 05:47:14.832512  78273 main.go:114] stdlog: ufs.go:27 listen tcp 192.168.64.1:63649: bind: can't assign requested address
W0526 05:47:14.832533  78273 out.go:424] no arguments passed for ":octagonal_sign: Userspace file server is shutdown\n" - returning raw string
W0526 05:47:14.832577  78273 out.go:424] no arguments passed for ":octagonal_sign: Userspace file server is shutdown\n" - returning raw string
I0526 05:47:14.832707  78273 main.go:128] libmachine: (minikube) Calling .GetSSHPort
I0526 05:47:14.852098  78273 out.go:170] :octagonal_sign: Userspace file server is shutdown
:octagonal_sign: Userspace file server is shutdown
I0526 05:47:14.852452  78273 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath
I0526 05:47:14.852709  78273 main.go:128] libmachine: (minikube) Calling .GetSSHUsername
I0526 05:47:14.853002  78273 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker}
I0526 05:47:14.956385  78273 mount.go:147] unmount for /tmp ran successfully
I0526 05:47:14.956422  78273 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -m 755 -p /tmp"
I0526 05:47:14.975935  78273 ssh_runner.go:149] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=63649,trans=tcp,version=9p2000.L 192.168.64.1 /tmp"
```
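
The relevant failure is in the middle of the log above: the userspace 9p file server tries to bind to 192.168.64.1:63649 (`listen tcp 192.168.64.1:63649: bind: can't assign requested address`), but on this host bridge100 has 192.168.168.1, so the bind fails and the server shuts down. The in-VM `mount -t 9p ... 192.168.64.1 /tmp` at the end of the log then has nothing to connect to and times out. A quick, illustrative way to see the mismatch (commands and paths taken from the logs above, not from the minikube docs):

```
# Host side: what address does bridge100 actually have?
ifconfig bridge100 | grep 'inet '   # shows 192.168.168.1 here, not 192.168.64.1

# What address did minikube record for the VM? (profile path per the logs above)
grep '"IP"' ~/.minikube/profiles/minikube/config.json   # 192.168.168.20

# The mount server nevertheless binds to 192.168.64.1, which this host does not
# own, hence "bind: can't assign requested address".
```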

Full output of minikube logs command:

``` * * ==> Audit <== * |---------|-----------------------------------------------------------------------------------------------|----------|-------|---------|-------------------------------|-------------------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |---------|-----------------------------------------------------------------------------------------------|----------|-------|---------|-------------------------------|-------------------------------| | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Sun, 23 May 2021 14:53:11 +03 | Sun, 23 May 2021 14:53:11 +03 | | | /etc/hosts | cut -f1 | | | | | | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Sun, 23 May 2021 15:14:45 +03 | Sun, 23 May 2021 15:14:45 +03 | | | /etc/hosts | cut -f1 | | | | | | | start | | minikube | ilker | v1.20.0 | Mon, 24 May 2021 04:23:34 +03 | Mon, 24 May 2021 04:24:52 +03 | | delete | | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:10:30 +03 | Mon, 24 May 2021 11:10:54 +03 | | config | set disk-size 50GB | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:11:32 +03 | Mon, 24 May 2021 11:11:32 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:11:42 +03 | Mon, 24 May 2021 11:12:45 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:14:17 +03 | Mon, 24 May 2021 11:14:22 +03 | | addons | enable ingress-dns | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:14:24 +03 | Mon, 24 May 2021 11:14:24 +03 | | addons | enable ingress | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:14:29 +03 | Mon, 24 May 2021 11:16:23 +03 | | ip | | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:58:08 +03 | Mon, 24 May 2021 11:58:09 +03 | | addons | configure registry-creds | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:58:17 +03 | Mon, 24 May 2021 11:59:14 +03 | | addons | enable registry-creds | minikube | ilker | v1.20.0 | Mon, 24 May 2021 11:59:24 +03 | Mon, 24 May 2021 11:59:24 +03 | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Mon, 24 May 2021 23:46:21 +03 | Mon, 24 May 2021 23:46:21 +03 | | | /etc/hosts | cut -f1 | | | | | | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Mon, 24 May 2021 23:48:00 +03 | Mon, 24 May 2021 23:48:00 +03 | | | /etc/hosts | cut -f1 | | | | | | | delete | | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:14:58 +03 | Tue, 25 May 2021 01:15:21 +03 | | config | set disk-size 50GB | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:31:29 +03 | Tue, 25 May 2021 01:31:29 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:32:16 +03 | Tue, 25 May 2021 01:33:18 +03 | | addons | enable ingress-dns | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:33:45 +03 | Tue, 25 May 2021 01:33:45 +03 | | addons | enable ingress | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:33:55 +03 | Tue, 25 May 2021 01:35:55 +03 | | ip | | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:36:30 +03 | Tue, 25 May 2021 01:36:30 +03 | | addons | configure registry-creds | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:36:44 +03 | Tue, 25 May 2021 01:37:17 +03 | | addons | enable registry-creds | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:37:27 +03 | Tue, 25 May 2021 01:37:27 +03 | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Tue, 25 May 2021 01:40:04 +03 | Tue, 25 May 2021 01:40:04 +03 | | | /etc/hosts | cut -f1 | | | | | | | ssh | | minikube | ilker | v1.20.0 | Tue, 25 May 2021 02:02:10 +03 | Tue, 25 May 2021 
02:02:24 +03 | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Tue, 25 May 2021 02:14:40 +03 | Tue, 25 May 2021 02:14:40 +03 | | | /etc/hosts | cut -f1 | | | | | | | logs | | minikube | ilker | v1.20.0 | Tue, 25 May 2021 02:21:50 +03 | Tue, 25 May 2021 02:21:52 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Tue, 25 May 2021 02:24:03 +03 | Tue, 25 May 2021 02:24:11 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Tue, 25 May 2021 02:26:08 +03 | Tue, 25 May 2021 02:26:15 +03 | | delete | | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:25:38 +03 | Tue, 25 May 2021 03:25:54 +03 | | config | set disk-size 50GB | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:25:59 +03 | Tue, 25 May 2021 03:25:59 +03 | | start | --vm=true | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:26:05 +03 | Tue, 25 May 2021 03:27:03 +03 | | addons | enable ingress-dns | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:27:06 +03 | Tue, 25 May 2021 03:27:07 +03 | | addons | enable ingress | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:27:13 +03 | Tue, 25 May 2021 03:28:47 +03 | | addons | configure registry-creds | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:30:04 +03 | Tue, 25 May 2021 03:30:38 +03 | | addons | enable registry-creds | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:30:45 +03 | Tue, 25 May 2021 03:30:46 +03 | | ssh | grep host.minikube.internal | minikube | ilker | v1.20.0 | Tue, 25 May 2021 03:33:48 +03 | Tue, 25 May 2021 03:33:48 +03 | | | /etc/hosts | cut -f1 | | | | | | | delete | | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:42:29 +03 | Tue, 25 May 2021 03:42:49 +03 | | config | set disk-size 50GB | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:43:01 +03 | Tue, 25 May 2021 03:43:01 +03 | | start | --vm=true | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:43:11 +03 | Tue, 25 May 2021 03:45:46 +03 | | addons | enable ingress-dns | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:46:53 +03 | Tue, 25 May 2021 03:46:54 +03 | | addons | enable ingress | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:47:04 +03 | Tue, 25 May 2021 03:48:27 +03 | | addons | enable ingress | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:48:41 +03 | Tue, 25 May 2021 03:48:41 +03 | | ip | | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:48:54 +03 | Tue, 25 May 2021 03:48:54 +03 | | addons | configure registry-creds | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:49:07 +03 | Tue, 25 May 2021 03:50:05 +03 | | addons | enable registry-creds | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:51:46 +03 | Tue, 25 May 2021 03:51:46 +03 | | ssh | grep host.minikube.internal | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:53:51 +03 | Tue, 25 May 2021 03:53:52 +03 | | | /etc/hosts | cut -f1 | | | | | | | addons | configure registry-creds | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:54:47 +03 | Tue, 25 May 2021 03:55:41 +03 | | addons | enable registry-creds | minikube | ilker | v1.19.0 | Tue, 25 May 2021 03:55:45 +03 | Tue, 25 May 2021 03:55:45 +03 | | ssh | | minikube | ilker | v1.19.0 | Tue, 25 May 2021 04:11:05 +03 | Tue, 25 May 2021 04:13:15 +03 | | image | load --help | minikube | ilker | v1.19.0 | Tue, 25 May 2021 23:26:37 +03 | Tue, 25 May 2021 23:26:37 +03 | | image | load --pull | minikube | ilker | v1.20.0 | Tue, 25 May 2021 23:31:06 +03 | Tue, 25 May 2021 23:31:08 +03 | | | | | | start | --vm=true | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:00:25 +03 | Wed, 26 May 2021 05:00:37 +03 | | image | load --pull | minikube | ilker | v1.20.0 | Wed, 
26 May 2021 05:06:03 +03 | Wed, 26 May 2021 05:06:04 +03 | | | | | | ssh | | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:10:00 +03 | Wed, 26 May 2021 05:11:48 +03 | | cache | add | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:18:13 +03 | Wed, 26 May 2021 05:18:21 +03 | | | | | | cache | add | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:21:31 +03 | Wed, 26 May 2021 05:21:32 +03 | | | | | | image | load --pull | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:21:36 +03 | Wed, 26 May 2021 05:21:37 +03 | | | | | | ssh | | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:22:52 +03 | Wed, 26 May 2021 05:24:44 +03 | | ssh | | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:27:24 +03 | Wed, 26 May 2021 05:29:29 +03 | | logs | | minikube | ilker | v1.20.0 | Wed, 26 May 2021 05:37:33 +03 | Wed, 26 May 2021 05:37:37 +03 | |---------|-----------------------------------------------------------------------------------------------|----------|-------|---------|-------------------------------|-------------------------------| * * ==> Last Start <== * Log file created at: 2021/05/26 05:00:25 Running on machine: ilker-mac Binary: Built with gc go1.16.3 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0526 05:00:25.358030 72217 out.go:291] Setting OutFile to fd 1 ... I0526 05:00:25.359420 72217 out.go:343] isatty.IsTerminal(1) = true I0526 05:00:25.359438 72217 out.go:304] Setting ErrFile to fd 2... I0526 05:00:25.359450 72217 out.go:343] isatty.IsTerminal(2) = true I0526 05:00:25.359675 72217 root.go:316] Updating PATH: /Users/ilker/.minikube/bin I0526 05:00:25.363439 72217 out.go:298] Setting JSON to false I0526 05:00:25.448186 72217 start.go:108] hostinfo: {"hostname":"ilker-mac.local","uptime":92574,"bootTime":1621901851,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"11.5","kernelVersion":"20.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"3893cd5d-e480-3097-a4f4-3f1a6d146ae4"} W0526 05:00:25.448421 72217 start.go:116] gopshost.Virtualization returned error: not implemented yet I0526 05:00:25.742086 72217 out.go:170] 😄 minikube v1.20.0 on Darwin 11.5 I0526 05:00:25.744256 72217 notify.go:169] Checking for updates... 
I0526 05:00:25.752196 72217 driver.go:322] Setting default libvirt URI to qemu:///system I0526 05:00:25.753743 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:25.761444 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:25.926670 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62754 I0526 05:00:25.930137 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:25.932555 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:25.932579 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:25.933769 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:25.934252 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:26.000013 72217 out.go:170] ✨ Using the hyperkit driver based on existing profile I0526 05:00:26.000088 72217 start.go:276] selected driver: hyperkit I0526 05:00:26.000129 72217 start.go:718] validating driver "hyperkit" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.19.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:4000 CPUs:2 DiskSize:51200 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.168.20 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true ingress:true ingress-dns:true registry-creds:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0526 05:00:26.000334 72217 start.go:729] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error: Reason: Fix: Doc:} I0526 05:00:26.001787 72217 install.go:51] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0526 05:00:26.013674 72217 install.go:116] Validating docker-machine-driver-hyperkit, PATH=/Users/ilker/.minikube/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Applications/VMware Fusion.app/Contents/Public:/opt/X11/bin:/Library/Apple/usr/bin:/Library/Frameworks/Mono.framework/Versions/Current/Commands:/Applications/Wireshark.app/Contents/MacOS:/Applications/Postgres.app/Contents/Versions/latest/bin:/Users/ilker/.rvm/bin I0526 05:00:26.027999 72217 install.go:136] 
/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit version is 1.20.0 I0526 05:00:26.082074 72217 install.go:78] stdout: /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:26.082114 72217 install.go:80] /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit looks good I0526 05:00:26.091928 72217 cni.go:93] Creating CNI manager for "" I0526 05:00:26.091959 72217 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0526 05:00:26.091991 72217 start_flags.go:273] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.19.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:4000 CPUs:2 DiskSize:51200 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.168.20 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true ingress:true ingress-dns:true registry-creds:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0526 05:00:26.110019 72217 iso.go:123] acquiring lock: {Name:mk65dc57f47d8125c733701c77936c79aa5620db Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0526 05:00:26.225176 72217 out.go:170] 👍 Starting control plane node minikube in cluster minikube I0526 05:00:26.225243 72217 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 05:00:26.225847 72217 preload.go:106] Found local preload: /Users/ilker/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 05:00:26.225871 72217 cache.go:54] Caching tarball of preloaded images I0526 05:00:26.225911 72217 preload.go:132] Found /Users/ilker/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0526 05:00:26.225916 72217 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0526 05:00:26.228132 72217 profile.go:148] Saving config to /Users/ilker/.minikube/profiles/minikube/config.json ... 
I0526 05:00:26.230103 72217 cache.go:194] Successfully downloaded all kic artifacts I0526 05:00:26.232291 72217 start.go:313] acquiring machines lock for minikube: {Name:mkb0c97e32acff68a4f5040674edfac8793d79dc Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0526 05:00:26.234443 72217 start.go:317] acquired machines lock for "minikube" in 1.953634ms I0526 05:00:26.234507 72217 start.go:93] Skipping create...Using existing machine configuration I0526 05:00:26.234530 72217 fix.go:55] fixHost starting: I0526 05:00:26.235354 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:26.235436 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:26.258107 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62761 I0526 05:00:26.259089 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:26.260146 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:26.260167 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:26.260877 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:26.261170 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:26.261882 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:26.262220 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:26.263568 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:26.267178 72217 fix.go:108] recreateIfNeeded on minikube: state=Running err= W0526 05:00:26.267247 72217 fix.go:134] unexpected machine state, will restart: I0526 05:00:26.291130 72217 out.go:170] 🏃 Updating the running hyperkit "minikube" VM ... I0526 05:00:26.291192 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:26.291449 72217 machine.go:88] provisioning docker machine ... 
I0526 05:00:26.291467 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:26.291678 72217 main.go:128] libmachine: (minikube) Calling .GetMachineName I0526 05:00:26.291836 72217 buildroot.go:166] provisioning hostname "minikube" I0526 05:00:26.291847 72217 main.go:128] libmachine: (minikube) Calling .GetMachineName I0526 05:00:26.292044 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:26.292219 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:26.292362 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:26.292520 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:26.292683 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:26.294175 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:26.305366 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:26.305407 72217 main.go:128] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0526 05:00:26.496017 72217 main.go:128] libmachine: SSH cmd err, output: : minikube I0526 05:00:26.496052 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:26.496436 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:26.496647 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:26.496876 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:26.497150 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:26.497457 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:26.497712 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:26.497734 72217 main.go:128] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0526 05:00:26.648547 72217 main.go:128] libmachine: SSH cmd err, output: : I0526 05:00:26.648585 72217 buildroot.go:172] set auth options {CertDir:/Users/ilker/.minikube CaCertPath:/Users/ilker/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/ilker/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/ilker/.minikube/machines/server.pem ServerKeyPath:/Users/ilker/.minikube/machines/server-key.pem ClientKeyPath:/Users/ilker/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/ilker/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/ilker/.minikube} I0526 05:00:26.648619 72217 buildroot.go:174] setting up certificates I0526 05:00:26.648640 72217 provision.go:83] configureAuth start I0526 05:00:26.648654 72217 main.go:128] libmachine: (minikube) Calling .GetMachineName I0526 05:00:26.648901 72217 main.go:128] libmachine: (minikube) Calling .GetIP I0526 05:00:26.649087 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:26.649242 72217 provision.go:137] copyHostCerts I0526 05:00:26.650004 72217 exec_runner.go:145] found /Users/ilker/.minikube/ca.pem, removing ... 
I0526 05:00:26.650029 72217 exec_runner.go:190] rm: /Users/ilker/.minikube/ca.pem I0526 05:00:26.650538 72217 exec_runner.go:152] cp: /Users/ilker/.minikube/certs/ca.pem --> /Users/ilker/.minikube/ca.pem (1074 bytes) I0526 05:00:26.651199 72217 exec_runner.go:145] found /Users/ilker/.minikube/cert.pem, removing ... I0526 05:00:26.651215 72217 exec_runner.go:190] rm: /Users/ilker/.minikube/cert.pem I0526 05:00:26.651428 72217 exec_runner.go:152] cp: /Users/ilker/.minikube/certs/cert.pem --> /Users/ilker/.minikube/cert.pem (1119 bytes) I0526 05:00:26.652243 72217 exec_runner.go:145] found /Users/ilker/.minikube/key.pem, removing ... I0526 05:00:26.652251 72217 exec_runner.go:190] rm: /Users/ilker/.minikube/key.pem I0526 05:00:26.652410 72217 exec_runner.go:152] cp: /Users/ilker/.minikube/certs/key.pem --> /Users/ilker/.minikube/key.pem (1679 bytes) I0526 05:00:26.652861 72217 provision.go:111] generating server cert: /Users/ilker/.minikube/machines/server.pem ca-key=/Users/ilker/.minikube/certs/ca.pem private-key=/Users/ilker/.minikube/certs/ca-key.pem org=ilker.minikube san=[192.168.168.20 192.168.168.20 localhost 127.0.0.1 minikube minikube] I0526 05:00:26.989703 72217 provision.go:165] copyRemoteCerts I0526 05:00:26.991260 72217 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0526 05:00:26.991316 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:26.991555 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:26.991693 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:26.991845 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:26.991969 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:27.045610 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes) I0526 05:00:27.072267 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes) I0526 05:00:27.096882 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0526 05:00:27.122884 72217 provision.go:86] duration metric: configureAuth took 474.221342ms I0526 05:00:27.122906 72217 buildroot.go:189] setting minikube options for container-runtime I0526 05:00:27.123671 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.124821 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.125094 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.125301 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.125439 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.125573 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.125806 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:27.125984 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:27.125995 72217 main.go:128] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0526 05:00:27.270765 72217 main.go:128] libmachine: SSH cmd err, output: : tmpfs I0526 05:00:27.270778 72217 buildroot.go:70] root file system type: tmpfs I0526 05:00:27.271124 72217 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ... 
I0526 05:00:27.271167 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.271426 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.271635 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.271811 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.272032 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.272376 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:27.272581 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:27.272667 72217 main.go:128] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0526 05:00:27.420968 72217 main.go:128] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. 
Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0526 05:00:27.421036 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.421318 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.421508 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.421664 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.421920 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.422223 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:27.422426 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:27.422449 72217 main.go:128] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0526 05:00:27.570877 72217 main.go:128] libmachine: SSH cmd err, output: : I0526 05:00:27.570911 72217 machine.go:91] provisioned docker machine in 1.279442867s I0526 05:00:27.570929 72217 start.go:267] post-start starting for "minikube" (driver="hyperkit") I0526 05:00:27.570935 72217 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0526 05:00:27.570954 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.571386 72217 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0526 05:00:27.571406 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.571665 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 
05:00:27.571860 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.572012 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.572171 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:27.664246 72217 ssh_runner.go:149] Run: cat /etc/os-release I0526 05:00:27.673807 72217 info.go:137] Remote host: Buildroot 2020.02.10 I0526 05:00:27.673830 72217 filesync.go:118] Scanning /Users/ilker/.minikube/addons for local assets ... I0526 05:00:27.674061 72217 filesync.go:118] Scanning /Users/ilker/.minikube/files for local assets ... I0526 05:00:27.674149 72217 start.go:270] post-start completed in 103.211501ms I0526 05:00:27.674173 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.674424 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.674569 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.674729 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.674864 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.675016 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.675226 72217 main.go:128] libmachine: Using SSH client type: native I0526 05:00:27.675469 72217 main.go:128] libmachine: &{{{ 0 [] [] []} docker [0x4401e00] 0x4401dc0 [] 0s} 192.168.168.20 22 } I0526 05:00:27.675483 72217 main.go:128] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING) I0526 05:00:27.781756 72217 main.go:128] libmachine: SSH cmd err, output: : 1621994428.818910847 I0526 05:00:27.781767 72217 fix.go:212] guest clock: 1621994428.818910847 I0526 05:00:27.781773 72217 fix.go:225] Guest: 2021-05-26 05:00:28.818910847 +0300 +03 Remote: 2021-05-26 05:00:27.674408 +0300 +03 m=+2.622241633 (delta=1.144502847s) I0526 05:00:27.781816 72217 fix.go:196] guest clock delta is within tolerance: 1.144502847s I0526 05:00:27.781820 72217 fix.go:57] fixHost completed within 1.547297899s I0526 05:00:27.781824 72217 start.go:80] releasing machines lock for "minikube", held for 1.547358771s I0526 05:00:27.781853 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.782085 72217 main.go:128] libmachine: (minikube) Calling .GetIP I0526 05:00:27.782235 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.782368 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.783124 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:27.783685 72217 ssh_runner.go:149] Run: systemctl --version I0526 05:00:27.783706 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.784016 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.784242 72217 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/ I0526 05:00:27.784329 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:27.784502 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.784703 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:27.784715 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:27.784991 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:27.785207 72217 main.go:128] libmachine: (minikube) Calling 
.GetSSHKeyPath I0526 05:00:27.785370 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:27.785543 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:27.857695 72217 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 05:00:27.857766 72217 preload.go:106] Found local preload: /Users/ilker/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 05:00:27.857912 72217 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0526 05:00:28.788523 72217 docker.go:528] Got preloaded images: -- stdout -- quay.io/jetstack/cert-manager-cainjector:v1.3.1 quay.io/jetstack/cert-manager-controller:v1.3.1 quay.io/jetstack/cert-manager-webhook:v1.3.1 curlimages/curl:latest gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 jettech/kube-webhook-certgen: cryptexlabs/minikube-ingress-dns: k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 upmcenterprises/registry-creds: -- /stdout -- I0526 05:00:28.788539 72217 ssh_runner.go:189] Completed: curl -sS -m 2 https://k8s.gcr.io/: (1.004272204s) I0526 05:00:28.788552 72217 docker.go:465] Images already preloaded, skipping extraction I0526 05:00:28.788706 72217 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0526 05:00:28.807090 72217 ssh_runner.go:149] Run: sudo systemctl cat docker.service I0526 05:00:28.824779 72217 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd I0526 05:00:28.850918 72217 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio I0526 05:00:28.867358 72217 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock image-endpoint: unix:///var/run/dockershim.sock " | sudo tee /etc/crictl.yaml" I0526 05:00:28.887188 72217 ssh_runner.go:149] Run: sudo systemctl unmask docker.service I0526 05:00:29.732215 72217 ssh_runner.go:149] Run: sudo systemctl enable docker.socket I0526 05:00:30.294579 72217 ssh_runner.go:149] Run: sudo systemctl daemon-reload I0526 05:00:30.960127 72217 ssh_runner.go:149] Run: sudo systemctl start docker I0526 05:00:31.004691 72217 ssh_runner.go:149] Run: docker version --format {{.Server.Version}} I0526 05:00:31.115912 72217 out.go:197] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.4 ... 
I0526 05:00:31.117156 72217 ssh_runner.go:149] Run: grep 192.168.64.1 host.minikube.internal$ /etc/hosts I0526 05:00:31.123286 72217 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime docker I0526 05:00:31.123324 72217 preload.go:106] Found local preload: /Users/ilker/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 I0526 05:00:31.123420 72217 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0526 05:00:31.196744 72217 docker.go:528] Got preloaded images: -- stdout -- quay.io/jetstack/cert-manager-cainjector:v1.3.1 quay.io/jetstack/cert-manager-webhook:v1.3.1 quay.io/jetstack/cert-manager-controller:v1.3.1 curlimages/curl:latest gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 jettech/kube-webhook-certgen: cryptexlabs/minikube-ingress-dns: k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 upmcenterprises/registry-creds: -- /stdout -- I0526 05:00:31.196760 72217 docker.go:465] Images already preloaded, skipping extraction I0526 05:00:31.201787 72217 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}} I0526 05:00:31.264190 72217 docker.go:528] Got preloaded images: -- stdout -- quay.io/jetstack/cert-manager-cainjector:v1.3.1 quay.io/jetstack/cert-manager-webhook:v1.3.1 quay.io/jetstack/cert-manager-controller:v1.3.1 curlimages/curl:latest gcr.io/k8s-minikube/storage-provisioner:v5 k8s.gcr.io/ingress-nginx/controller: k8s.gcr.io/kube-proxy:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2 kubernetesui/dashboard:v2.1.0 jettech/kube-webhook-certgen: cryptexlabs/minikube-ingress-dns: k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 upmcenterprises/registry-creds: -- /stdout -- I0526 05:00:31.264220 72217 cache_images.go:74] Images are preloaded, skipping loading I0526 05:00:31.264407 72217 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}} I0526 05:00:31.334201 72217 cni.go:93] Creating CNI manager for "" I0526 05:00:31.334213 72217 cni.go:167] CNI unnecessary in this configuration, recommending no CNI I0526 05:00:31.334228 72217 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0526 05:00:31.334242 72217 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.168.20 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.168.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.168.20 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt 
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0526 05:00:31.334363 72217 kubeadm.go:157] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.168.20 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.168.20 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.168.20"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: allocate-node-cidrs: "true" leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.20.2 networking: dnsDomain: cluster.local podSubnet: "10.244.0.0/16" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: systemd clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%!"(MISSING) nodefs.inodesFree: "0%!"(MISSING) imagefs.available: "0%!"(MISSING) failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 I0526 05:00:31.334455 72217 kubeadm.go:901] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.168.20 [Install] config: {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0526 05:00:31.334561 72217 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2 I0526 05:00:31.346967 72217 binaries.go:44] Found k8s binaries, skipping transfer I0526 05:00:31.347093 72217 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0526 05:00:31.358312 72217 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (336 bytes) I0526 05:00:31.378652 72217 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0526 05:00:31.401722 72217 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1845 bytes) I0526 05:00:31.432838 72217 ssh_runner.go:149] Run: grep 192.168.168.20 control-plane.minikube.internal$ /etc/hosts I0526 
05:00:31.440868 72217 certs.go:52] Setting up /Users/ilker/.minikube/profiles/minikube for IP: 192.168.168.20 I0526 05:00:31.445231 72217 certs.go:171] skipping minikubeCA CA generation: /Users/ilker/.minikube/ca.key I0526 05:00:31.448562 72217 certs.go:171] skipping proxyClientCA CA generation: /Users/ilker/.minikube/proxy-client-ca.key I0526 05:00:31.448694 72217 certs.go:282] skipping minikube-user signed cert generation: /Users/ilker/.minikube/profiles/minikube/client.key I0526 05:00:31.449106 72217 certs.go:282] skipping minikube signed cert generation: /Users/ilker/.minikube/profiles/minikube/apiserver.key.eddaf37c I0526 05:00:31.449216 72217 certs.go:282] skipping aggregator signed cert generation: /Users/ilker/.minikube/profiles/minikube/proxy-client.key I0526 05:00:31.449928 72217 certs.go:361] found cert: /Users/ilker/.minikube/certs/Users/ilker/.minikube/certs/ca-key.pem (1675 bytes) I0526 05:00:31.450014 72217 certs.go:361] found cert: /Users/ilker/.minikube/certs/Users/ilker/.minikube/certs/ca.pem (1074 bytes) I0526 05:00:31.450071 72217 certs.go:361] found cert: /Users/ilker/.minikube/certs/Users/ilker/.minikube/certs/cert.pem (1119 bytes) I0526 05:00:31.450125 72217 certs.go:361] found cert: /Users/ilker/.minikube/certs/Users/ilker/.minikube/certs/key.pem (1679 bytes) I0526 05:00:31.454096 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0526 05:00:31.492872 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0526 05:00:31.528307 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0526 05:00:31.570666 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0526 05:00:31.600282 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0526 05:00:31.639063 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I0526 05:00:31.675919 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0526 05:00:31.710514 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0526 05:00:31.744100 72217 ssh_runner.go:316] scp /Users/ilker/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0526 05:00:31.785273 72217 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0526 05:00:31.825770 72217 ssh_runner.go:149] Run: openssl version I0526 05:00:31.847882 72217 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0526 05:00:31.874366 72217 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0526 05:00:31.893331 72217 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May 11 20:03 /usr/share/ca-certificates/minikubeCA.pem I0526 05:00:31.893451 72217 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0526 05:00:31.916078 72217 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem 
/etc/ssl/certs/b5213941.0" I0526 05:00:31.953912 72217 kubeadm.go:381] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:4000 CPUs:2 DiskSize:51200 VMDriver: Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.168.20 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true ingress:true ingress-dns:true registry-creds:true storage-provisioner:true] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} I0526 05:00:31.954195 72217 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0526 05:00:32.161406 72217 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0526 05:00:32.181891 72217 kubeadm.go:392] found existing configuration files, will attempt cluster restart I0526 05:00:32.182338 72217 kubeadm.go:591] restartCluster start I0526 05:00:32.182585 72217 ssh_runner.go:149] Run: sudo test -d /data/minikube I0526 05:00:32.197646 72217 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0526 05:00:32.198871 72217 kubeconfig.go:93] found "minikube" server: "https://192.168.168.20:8443" I0526 05:00:32.209682 72217 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0526 05:00:32.220512 72217 api_server.go:148] Checking apiserver status ... 
I0526 05:00:32.220641 72217 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0526 05:00:32.237737 72217 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/3303/cgroup I0526 05:00:32.245248 72217 api_server.go:164] apiserver freezer: "4:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb46a791ad0eb709bce45bd184436d84.slice/docker-38ac5e95b550230471b71517911a2c68be6d07296ff689b864c382d946c1d4cd.scope" I0526 05:00:32.245465 72217 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb46a791ad0eb709bce45bd184436d84.slice/docker-38ac5e95b550230471b71517911a2c68be6d07296ff689b864c382d946c1d4cd.scope/freezer.state I0526 05:00:32.254644 72217 api_server.go:186] freezer state: "THAWED" I0526 05:00:32.254666 72217 api_server.go:223] Checking apiserver healthz at https://192.168.168.20:8443/healthz ... I0526 05:00:32.266689 72217 api_server.go:249] https://192.168.168.20:8443/healthz returned 200: ok I0526 05:00:32.300766 72217 system_pods.go:86] 9 kube-system pods found I0526 05:00:32.300792 72217 system_pods.go:89] "coredns-74ff55c5b-58595" [85330fb3-6f1e-4173-b959-582825e2867c] Running I0526 05:00:32.300797 72217 system_pods.go:89] "etcd-minikube" [b90eb0f9-b3fb-4a3b-ae68-345381f70fe7] Running I0526 05:00:32.300801 72217 system_pods.go:89] "kube-apiserver-minikube" [283d8e30-d582-4b5b-9b1b-8dbf8c77e28b] Running I0526 05:00:32.300804 72217 system_pods.go:89] "kube-controller-manager-minikube" [09c021dd-3fe4-4e09-b2b1-7708da59811d] Running I0526 05:00:32.300807 72217 system_pods.go:89] "kube-ingress-dns-minikube" [cb8ada5c-5da0-435c-9df6-77b779640382] Running I0526 05:00:32.300810 72217 system_pods.go:89] "kube-proxy-cgnzx" [af7732fb-4085-4d7f-bc17-af242ab53b3d] Running I0526 05:00:32.300813 72217 system_pods.go:89] "kube-scheduler-minikube" [e0c8c19b-d079-45dc-a29a-9f87a175c070] Running I0526 05:00:32.300816 72217 system_pods.go:89] "registry-creds-85b974c7d7-x25t5" [b411efc7-7721-4888-a6e5-c8ced3268fd0] Running I0526 05:00:32.300819 72217 system_pods.go:89] "storage-provisioner" [b2c20081-b396-4f71-8bb6-3aa583fda7cf] Running I0526 05:00:32.304099 72217 api_server.go:139] control plane version: v1.20.2 I0526 05:00:32.304110 72217 kubeadm.go:585] The running cluster does not require reconfiguration: 192.168.168.20 I0526 05:00:32.304120 72217 kubeadm.go:638] Taking a shortcut, as the cluster seems to be properly configured I0526 05:00:32.304123 72217 kubeadm.go:595] restartCluster took 121.775109ms I0526 05:00:32.304126 72217 kubeadm.go:383] StartCluster complete in 350.230523ms I0526 05:00:32.304135 72217 settings.go:142] acquiring lock: {Name:mk2e63e9b2013908969dfe7551ffde7b1dcc1923 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 05:00:32.304545 72217 settings.go:150] Updating kubeconfig: /Users/ilker/.kube/config I0526 05:00:32.305082 72217 lock.go:36] WriteFile acquiring /Users/ilker/.kube/config: {Name:mk72bdabe89758b9dcd2e4370c4a2fc0b88c84ed Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0526 05:00:32.317101 72217 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1 I0526 05:00:32.317138 72217 start.go:201] Will wait 6m0s for node &{Name: IP:192.168.168.20 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} W0526 05:00:32.317166 72217 out.go:424] no arguments passed for "🔎 Verifying Kubernetes components...\n" - returning raw string W0526 05:00:32.317177 72217 out.go:424] no arguments passed for "🔎 Verifying Kubernetes 
components...\n" - returning raw string I0526 05:00:32.326750 72217 out.go:170] 🔎 Verifying Kubernetes components... I0526 05:00:32.317192 72217 addons.go:328] enableAddons start: toEnable=map[dashboard:true default-storageclass:true ingress:true ingress-dns:true registry-creds:true storage-provisioner:true], additional=[] I0526 05:00:32.326860 72217 addons.go:55] Setting default-storageclass=true in profile "minikube" I0526 05:00:32.326860 72217 addons.go:55] Setting storage-provisioner=true in profile "minikube" I0526 05:00:32.326871 72217 addons.go:55] Setting ingress-dns=true in profile "minikube" I0526 05:00:32.326879 72217 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0526 05:00:32.326952 72217 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet I0526 05:00:32.326980 72217 addons.go:55] Setting ingress=true in profile "minikube" I0526 05:00:32.326996 72217 addons.go:55] Setting dashboard=true in profile "minikube" I0526 05:00:32.327037 72217 addons.go:55] Setting registry-creds=true in profile "minikube" I0526 05:00:32.327408 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.327438 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.330251 72217 addons.go:131] Setting addon storage-provisioner=true in "minikube" I0526 05:00:32.330415 72217 addons.go:131] Setting addon ingress=true in "minikube" W0526 05:00:32.330421 72217 addons.go:140] addon storage-provisioner should already be in state true W0526 05:00:32.330564 72217 addons.go:140] addon ingress should already be in state true I0526 05:00:32.330583 72217 addons.go:131] Setting addon dashboard=true in "minikube" I0526 05:00:32.330658 72217 addons.go:131] Setting addon ingress-dns=true in "minikube" I0526 05:00:32.330702 72217 host.go:66] Checking if "minikube" exists ... W0526 05:00:32.330712 72217 addons.go:140] addon ingress-dns should already be in state true I0526 05:00:32.330740 72217 host.go:66] Checking if "minikube" exists ... I0526 05:00:32.330789 72217 addons.go:131] Setting addon registry-creds=true in "minikube" I0526 05:00:32.330841 72217 host.go:66] Checking if "minikube" exists ... W0526 05:00:32.330855 72217 addons.go:140] addon registry-creds should already be in state true I0526 05:00:32.330954 72217 host.go:66] Checking if "minikube" exists ... W0526 05:00:32.331072 72217 addons.go:140] addon dashboard should already be in state true I0526 05:00:32.331222 72217 host.go:66] Checking if "minikube" exists ... 
I0526 05:00:32.337177 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.337241 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.337271 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.337338 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.337468 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.337470 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.337571 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.337671 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.337721 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.337736 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.358702 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62788 I0526 05:00:32.358785 72217 api_server.go:50] waiting for apiserver process to appear ... I0526 05:00:32.358944 72217 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0526 05:00:32.363183 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62792 I0526 05:00:32.363826 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.367483 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.367574 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.369745 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.370615 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.370966 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.371059 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.371222 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.372214 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.372270 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.373462 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.375091 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.375204 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.378793 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62796 I0526 05:00:32.378869 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62801 I0526 05:00:32.379337 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62802 I0526 05:00:32.382783 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.384019 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.384040 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.384147 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.384234 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.385047 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.385358 72217 main.go:128] libmachine: Using API Version 1 I0526 
05:00:32.385371 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.385710 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.385723 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.386241 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.386285 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.386312 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.389734 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.390267 72217 api_server.go:70] duration metric: took 73.100574ms to wait for apiserver process to appear ... I0526 05:00:32.390303 72217 api_server.go:86] waiting for apiserver healthz status ... I0526 05:00:32.390334 72217 api_server.go:223] Checking apiserver healthz at https://192.168.168.20:8443/healthz ... I0526 05:00:32.390587 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.390750 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.395630 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62809 I0526 05:00:32.396878 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.396948 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.401270 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.402989 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.403055 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.405083 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.405722 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62813 I0526 05:00:32.406599 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.406719 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.407065 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.409947 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.410078 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.411159 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.414611 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62817 I0526 05:00:32.414909 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.415889 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.416063 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.416459 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.417476 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.417497 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.418050 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.418251 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.418488 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.418791 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.419286 72217 main.go:128] libmachine: 
Plugin server listening at address 127.0.0.1:62821 I0526 05:00:32.419440 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.429731 72217 out.go:170] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0526 05:00:32.420066 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.421832 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.422785 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62825 I0526 05:00:32.428524 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62828 I0526 05:00:32.430017 72217 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml I0526 05:00:32.430026 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0526 05:00:32.430047 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.430541 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.451298 72217 out.go:170] ▪ Using image cryptexlabs/minikube-ingress-dns:0.3.0 I0526 05:00:32.430830 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.430830 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.431016 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.435936 72217 api_server.go:249] https://192.168.168.20:8443/healthz returned 200: ok I0526 05:00:32.451436 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.451479 72217 addons.go:261] installing /etc/kubernetes/addons/ingress-dns-pod.yaml I0526 05:00:32.451491 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2631 bytes) I0526 05:00:32.451510 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.451590 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.451799 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.451900 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.452114 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.452127 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.452148 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.452197 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.452225 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.452305 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.452315 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.452368 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.452429 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.452566 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.452609 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.452639 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.452769 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.452807 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.452905 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.452938 72217 main.go:128] libmachine: 
(minikube) Calling .GetState I0526 05:00:32.452995 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.453141 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.453183 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.453313 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.456141 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.456141 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.485939 72217 out.go:170] ▪ Using image upmcenterprises/registry-creds:1.10 I0526 05:00:32.456846 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.458916 72217 api_server.go:139] control plane version: v1.20.2 I0526 05:00:32.486036 72217 api_server.go:129] duration metric: took 95.716488ms to wait for apiserver health ... I0526 05:00:32.472128 72217 out.go:170] ▪ Using image kubernetesui/metrics-scraper:v1.0.4 I0526 05:00:32.486062 72217 system_pods.go:43] waiting for kube-system pods to appear ... I0526 05:00:32.495846 72217 out.go:170] ▪ Using image kubernetesui/dashboard:v2.1.0 I0526 05:00:32.486224 72217 addons.go:261] installing /etc/kubernetes/addons/registry-creds-rc.yaml I0526 05:00:32.495903 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3296 bytes) I0526 05:00:32.495955 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.509426 72217 out.go:170] ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0 I0526 05:00:32.495975 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-ns.yaml I0526 05:00:32.509477 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes) I0526 05:00:32.509501 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.502000 72217 system_pods.go:59] 9 kube-system pods found I0526 05:00:32.502508 72217 addons.go:131] Setting addon default-storageclass=true in "minikube" I0526 05:00:32.509546 72217 system_pods.go:61] "coredns-74ff55c5b-58595" [85330fb3-6f1e-4173-b959-582825e2867c] Running W0526 05:00:32.509551 72217 addons.go:140] addon default-storageclass should already be in state true I0526 05:00:32.509556 72217 system_pods.go:61] "etcd-minikube" [b90eb0f9-b3fb-4a3b-ae68-345381f70fe7] Running I0526 05:00:32.509567 72217 host.go:66] Checking if "minikube" exists ... 
I0526 05:00:32.531540 72217 out.go:170] ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1 I0526 05:00:32.509571 72217 system_pods.go:61] "kube-apiserver-minikube" [283d8e30-d582-4b5b-9b1b-8dbf8c77e28b] Running I0526 05:00:32.531574 72217 system_pods.go:61] "kube-controller-manager-minikube" [09c021dd-3fe4-4e09-b2b1-7708da59811d] Running I0526 05:00:32.509699 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.531594 72217 system_pods.go:61] "kube-ingress-dns-minikube" [cb8ada5c-5da0-435c-9df6-77b779640382] Running I0526 05:00:32.531600 72217 system_pods.go:61] "kube-proxy-cgnzx" [af7732fb-4085-4d7f-bc17-af242ab53b3d] Running I0526 05:00:32.531605 72217 system_pods.go:61] "kube-scheduler-minikube" [e0c8c19b-d079-45dc-a29a-9f87a175c070] Running I0526 05:00:32.531610 72217 system_pods.go:61] "registry-creds-85b974c7d7-x25t5" [b411efc7-7721-4888-a6e5-c8ced3268fd0] Running I0526 05:00:32.531616 72217 system_pods.go:61] "storage-provisioner" [b2c20081-b396-4f71-8bb6-3aa583fda7cf] Running I0526 05:00:32.509713 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.531622 72217 system_pods.go:74] duration metric: took 45.552957ms to wait for pod list to return data ... I0526 05:00:32.531634 72217 kubeadm.go:538] duration metric: took 214.474909ms to wait for : map[apiserver:true system_pods:true] ... I0526 05:00:32.531649 72217 node_conditions.go:102] verifying NodePressure condition ... I0526 05:00:32.510062 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.547893 72217 out.go:170] ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1 I0526 05:00:32.531760 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.531884 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.531930 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.542998 72217 node_conditions.go:122] node storage ephemeral capacity is 45604772Ki I0526 05:00:32.548009 72217 addons.go:261] installing /etc/kubernetes/addons/ingress-configmap.yaml I0526 05:00:32.548018 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-configmap.yaml (1865 bytes) I0526 05:00:32.548031 72217 node_conditions.go:123] node cpu capacity is 2 I0526 05:00:32.548069 72217 node_conditions.go:105] duration metric: took 16.414926ms to run NodePressure ... I0526 05:00:32.548054 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.548082 72217 start.go:206] waiting for startup goroutines ... 
I0526 05:00:32.548239 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.550568 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.550631 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.550638 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.551009 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.551134 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.551182 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.552674 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.564244 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62838 I0526 05:00:32.565363 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.566165 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.566184 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.566603 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.567561 72217 main.go:128] libmachine: Found binary path at /Users/ilker/.minikube/bin/docker-machine-driver-hyperkit I0526 05:00:32.567615 72217 main.go:128] libmachine: Launching plugin server for driver hyperkit I0526 05:00:32.571075 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0526 05:00:32.585306 72217 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:62842 I0526 05:00:32.586086 72217 main.go:128] libmachine: () Calling .GetVersion I0526 05:00:32.587079 72217 main.go:128] libmachine: Using API Version 1 I0526 05:00:32.587107 72217 main.go:128] libmachine: () Calling .SetConfigRaw I0526 05:00:32.587600 72217 main.go:128] libmachine: () Calling .GetMachineName I0526 05:00:32.587842 72217 main.go:128] libmachine: (minikube) Calling .GetState I0526 05:00:32.588051 72217 main.go:128] libmachine: (minikube) DBG | exe=/Users/ilker/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0526 05:00:32.588267 72217 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 5264 I0526 05:00:32.590249 72217 main.go:128] libmachine: (minikube) Calling .DriverName I0526 05:00:32.590568 72217 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml I0526 05:00:32.590576 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0526 05:00:32.590593 72217 main.go:128] libmachine: (minikube) Calling .GetSSHHostname I0526 05:00:32.590734 72217 main.go:128] libmachine: (minikube) Calling .GetSSHPort I0526 05:00:32.590867 72217 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath I0526 05:00:32.591016 72217 main.go:128] libmachine: (minikube) Calling .GetSSHUsername I0526 05:00:32.591163 72217 sshutil.go:53] new ssh client: &{IP:192.168.168.20 Port:22 SSHKeyPath:/Users/ilker/.minikube/machines/minikube/id_rsa Username:docker} I0526 05:00:32.599392 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml I0526 05:00:32.704078 72217 addons.go:261] installing 
/etc/kubernetes/addons/dashboard-clusterrole.yaml I0526 05:00:32.704092 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes) I0526 05:00:32.731167 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml I0526 05:00:32.770172 72217 addons.go:261] installing /etc/kubernetes/addons/ingress-rbac.yaml I0526 05:00:32.770194 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-rbac.yaml (6005 bytes) I0526 05:00:32.778142 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0526 05:00:32.787678 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml I0526 05:00:32.787688 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes) I0526 05:00:32.841244 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-configmap.yaml I0526 05:00:32.841254 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes) I0526 05:00:32.841960 72217 addons.go:261] installing /etc/kubernetes/addons/ingress-dp.yaml I0526 05:00:32.841966 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/ingress-dp.yaml (9394 bytes) I0526 05:00:32.882741 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml I0526 05:00:32.966499 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-dp.yaml I0526 05:00:32.966511 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4278 bytes) I0526 05:00:33.088809 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-role.yaml I0526 05:00:33.088820 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes) I0526 05:00:33.314968 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml I0526 05:00:33.314978 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes) I0526 05:00:33.448317 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-sa.yaml I0526 05:00:33.448327 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes) I0526 05:00:33.537720 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-secret.yaml I0526 05:00:33.537731 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1401 bytes) I0526 05:00:33.619550 72217 addons.go:261] installing /etc/kubernetes/addons/dashboard-svc.yaml I0526 05:00:33.619566 72217 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes) I0526 05:00:33.714496 72217 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f 
/etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml I0526 05:00:36.080880 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.509740516s) I0526 05:00:36.080927 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.080938 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.081272 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.081285 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.081302 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.081310 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.081720 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.081731 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.539347 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.939876181s) I0526 05:00:36.539390 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.539424 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.539770 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.539782 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.539791 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.539802 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.540112 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.540130 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.940855 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.162670679s) I0526 05:00:36.940890 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.940898 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.941189 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.941194 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:36.941215 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.941232 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.941238 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.941456 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.941469 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.210257033s) I0526 05:00:36.941466 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.941482 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:36.941486 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.941489 72217 main.go:128] libmachine: Making 
call to close driver server I0526 05:00:36.941493 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.941506 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.941865 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:36.941987 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.941998 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.942016 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:36.942026 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:36.942027 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.942036 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:36.942483 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:36.942567 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:36.942582 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:37.181277 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: (4.298466487s) I0526 05:00:37.181284 72217 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.46674702s) I0526 05:00:37.181317 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:37.181329 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:37.181341 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:37.181350 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:37.181633 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:37.181668 72217 main.go:128] libmachine: (minikube) DBG | Closing plugin on server side I0526 05:00:37.181681 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:37.181703 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:37.181719 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:37.181720 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:37.181730 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:37.181731 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:37.181739 72217 main.go:128] libmachine: Making call to close driver server I0526 05:00:37.181747 72217 main.go:128] libmachine: (minikube) Calling .Close I0526 05:00:37.181979 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:37.181985 72217 main.go:128] libmachine: (minikube) DBG | 
Closing plugin on server side I0526 05:00:37.182009 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:37.182013 72217 main.go:128] libmachine: Successfully made call to close driver server I0526 05:00:37.182020 72217 main.go:128] libmachine: Making call to close connection to plugin binary I0526 05:00:37.182041 72217 addons.go:299] Verifying addon ingress=true in "minikube" I0526 05:00:37.192588 72217 out.go:170] 🔎 Verifying ingress addon... I0526 05:00:37.196607 72217 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ... I0526 05:00:37.202957 72217 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx I0526 05:00:37.202966 72217 kapi.go:108] duration metric: took 6.364592ms to wait for app.kubernetes.io/name=ingress-nginx ... I0526 05:00:37.230464 72217 out.go:170] 🌟 Enabled addons: storage-provisioner, ingress-dns, default-storageclass, registry-creds, dashboard, ingress I0526 05:00:37.230512 72217 addons.go:330] enableAddons completed in 4.913312168s I0526 05:00:37.591004 72217 start.go:460] kubectl: 1.19.7, cluster: 1.20.2 (minor skew: 1) I0526 05:00:37.601529 72217 out.go:170] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default * * ==> Docker <== * -- Logs begin at Tue 2021-05-25 00:45:00 UTC, end at Wed 2021-05-26 02:38:08 UTC. -- May 26 02:25:03 minikube dockerd[2206]: time="2021-05-26T02:25:03.875845297Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/232dde40485a9fc9ecc5471d7f3207dfa532d323ec3b19fcd57a70a85e4847cd pid=330191 May 26 02:25:05 minikube dockerd[2206]: time="2021-05-26T02:25:05.239991624Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/e26daad9043e6265f5126159c9454b48175de4c253c75d044fceb4929db9c992 pid=330261 May 26 02:25:05 minikube dockerd[2206]: time="2021-05-26T02:25:05.634302395Z" level=info msg="shim disconnected" id=e26daad9043e6265f5126159c9454b48175de4c253c75d044fceb4929db9c992 May 26 02:25:05 minikube dockerd[2199]: time="2021-05-26T02:25:05.635818997Z" level=info msg="ignoring event" container=e26daad9043e6265f5126159c9454b48175de4c253c75d044fceb4929db9c992 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:25:06 minikube dockerd[2199]: time="2021-05-26T02:25:06.142737689Z" level=info msg="ignoring event" container=232dde40485a9fc9ecc5471d7f3207dfa532d323ec3b19fcd57a70a85e4847cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:25:06 minikube dockerd[2206]: time="2021-05-26T02:25:06.146810348Z" level=info msg="shim disconnected" id=232dde40485a9fc9ecc5471d7f3207dfa532d323ec3b19fcd57a70a85e4847cd May 26 02:26:04 minikube dockerd[2206]: time="2021-05-26T02:26:04.559759211Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c31184075577ac785b418a57cf2c7aafb7243f7c1552ae2af23de685a6c0c163 pid=330595 May 26 02:26:05 minikube dockerd[2206]: time="2021-05-26T02:26:05.392136599Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4ff801d9abeeec05d7178d783e0402e1b8de6b37caac440315fb61ba8ca9a1cc pid=330672 May 26 02:26:05 minikube dockerd[2199]: time="2021-05-26T02:26:05.725547704Z" level=info msg="ignoring event" 
container=4ff801d9abeeec05d7178d783e0402e1b8de6b37caac440315fb61ba8ca9a1cc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:26:05 minikube dockerd[2206]: time="2021-05-26T02:26:05.726539804Z" level=info msg="shim disconnected" id=4ff801d9abeeec05d7178d783e0402e1b8de6b37caac440315fb61ba8ca9a1cc May 26 02:26:06 minikube dockerd[2199]: time="2021-05-26T02:26:06.832087226Z" level=info msg="ignoring event" container=c31184075577ac785b418a57cf2c7aafb7243f7c1552ae2af23de685a6c0c163 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:26:06 minikube dockerd[2206]: time="2021-05-26T02:26:06.834537691Z" level=info msg="shim disconnected" id=c31184075577ac785b418a57cf2c7aafb7243f7c1552ae2af23de685a6c0c163 May 26 02:28:05 minikube dockerd[2206]: time="2021-05-26T02:28:05.640889384Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ab77f67fc10cc149235b3c50decb79a1a4cb31cbfe40f455a85e2d369b367678 pid=331210 May 26 02:28:06 minikube dockerd[2206]: time="2021-05-26T02:28:06.575188173Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/081dd30eef209f2240aec670a83665c5c9740738b491931614e02f0c2093da96 pid=331287 May 26 02:28:06 minikube dockerd[2199]: time="2021-05-26T02:28:06.897171099Z" level=info msg="ignoring event" container=081dd30eef209f2240aec670a83665c5c9740738b491931614e02f0c2093da96 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:28:06 minikube dockerd[2206]: time="2021-05-26T02:28:06.897880773Z" level=info msg="shim disconnected" id=081dd30eef209f2240aec670a83665c5c9740738b491931614e02f0c2093da96 May 26 02:28:07 minikube dockerd[2199]: time="2021-05-26T02:28:07.685826668Z" level=info msg="ignoring event" container=ab77f67fc10cc149235b3c50decb79a1a4cb31cbfe40f455a85e2d369b367678 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:28:07 minikube dockerd[2206]: time="2021-05-26T02:28:07.693040393Z" level=info msg="shim disconnected" id=ab77f67fc10cc149235b3c50decb79a1a4cb31cbfe40f455a85e2d369b367678 May 26 02:30:06 minikube dockerd[2206]: time="2021-05-26T02:30:06.905282782Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ae1ff8be11204f75279b46951844f8f6389114791606ae6315c6ef3e6e8b3777 pid=331858 May 26 02:30:06 minikube dockerd[2206]: time="2021-05-26T02:30:06.962331543Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f09b8676a91accac9ae7ab4099c23a3d4127ba3c53561c960f712e6c76cff269 pid=331883 May 26 02:30:08 minikube dockerd[2206]: time="2021-05-26T02:30:08.130074777Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/44bb6d23eb9185409db2c9bc064152b0433088aa58aad891c83f67d06955a55b pid=332001 May 26 02:30:08 minikube dockerd[2206]: time="2021-05-26T02:30:08.327512209Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1bbcba9dff3127ba1033c4666207b0813cb0357880a6c5bd9d93cbcf7704c31e pid=332039 May 26 02:30:08 minikube dockerd[2199]: time="2021-05-26T02:30:08.636018179Z" level=info msg="ignoring event" container=44bb6d23eb9185409db2c9bc064152b0433088aa58aad891c83f67d06955a55b module=libcontainerd 
namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:30:08 minikube dockerd[2206]: time="2021-05-26T02:30:08.649138315Z" level=info msg="shim disconnected" id=44bb6d23eb9185409db2c9bc064152b0433088aa58aad891c83f67d06955a55b May 26 02:30:08 minikube dockerd[2199]: time="2021-05-26T02:30:08.816638023Z" level=info msg="ignoring event" container=1bbcba9dff3127ba1033c4666207b0813cb0357880a6c5bd9d93cbcf7704c31e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:30:08 minikube dockerd[2206]: time="2021-05-26T02:30:08.817956364Z" level=info msg="shim disconnected" id=1bbcba9dff3127ba1033c4666207b0813cb0357880a6c5bd9d93cbcf7704c31e May 26 02:30:09 minikube dockerd[2199]: time="2021-05-26T02:30:09.832815979Z" level=info msg="ignoring event" container=ae1ff8be11204f75279b46951844f8f6389114791606ae6315c6ef3e6e8b3777 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:30:09 minikube dockerd[2206]: time="2021-05-26T02:30:09.834153810Z" level=info msg="shim disconnected" id=ae1ff8be11204f75279b46951844f8f6389114791606ae6315c6ef3e6e8b3777 May 26 02:30:09 minikube dockerd[2199]: time="2021-05-26T02:30:09.945279519Z" level=info msg="ignoring event" container=f09b8676a91accac9ae7ab4099c23a3d4127ba3c53561c960f712e6c76cff269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:30:09 minikube dockerd[2206]: time="2021-05-26T02:30:09.949664025Z" level=info msg="shim disconnected" id=f09b8676a91accac9ae7ab4099c23a3d4127ba3c53561c960f712e6c76cff269 May 26 02:32:08 minikube dockerd[2206]: time="2021-05-26T02:32:08.217624455Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9c279e1428831d57171f72cca9d42b19db79642e8a47f222e7e6b2308779dd65 pid=332640 May 26 02:32:09 minikube dockerd[2206]: time="2021-05-26T02:32:09.345883344Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/064137b6a7e2347224b2270c057d26f442b364823bf3c614b694c02303e49054 pid=332718 May 26 02:32:09 minikube dockerd[2199]: time="2021-05-26T02:32:09.615813111Z" level=info msg="ignoring event" container=064137b6a7e2347224b2270c057d26f442b364823bf3c614b694c02303e49054 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:32:09 minikube dockerd[2206]: time="2021-05-26T02:32:09.618656989Z" level=info msg="shim disconnected" id=064137b6a7e2347224b2270c057d26f442b364823bf3c614b694c02303e49054 May 26 02:32:10 minikube dockerd[2199]: time="2021-05-26T02:32:10.454172022Z" level=info msg="ignoring event" container=9c279e1428831d57171f72cca9d42b19db79642e8a47f222e7e6b2308779dd65 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:32:10 minikube dockerd[2206]: time="2021-05-26T02:32:10.457221797Z" level=info msg="shim disconnected" id=9c279e1428831d57171f72cca9d42b19db79642e8a47f222e7e6b2308779dd65 May 26 02:34:09 minikube dockerd[2206]: time="2021-05-26T02:34:09.938402021Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2a140d7710304216a0c6f6c6d6c85c6ca73dee4d54d11903dd40f9c5099e8ea2 pid=333277 May 26 02:34:11 minikube dockerd[2206]: time="2021-05-26T02:34:11.055291724Z" level=info msg="starting signal loop" namespace=moby 
path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1f34599aba6c36602efdb2b8a4cf7103b07a1062aec3766c6aef6e31dc368965 pid=333354 May 26 02:34:11 minikube dockerd[2199]: time="2021-05-26T02:34:11.517587794Z" level=info msg="ignoring event" container=1f34599aba6c36602efdb2b8a4cf7103b07a1062aec3766c6aef6e31dc368965 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:34:11 minikube dockerd[2206]: time="2021-05-26T02:34:11.518171729Z" level=info msg="shim disconnected" id=1f34599aba6c36602efdb2b8a4cf7103b07a1062aec3766c6aef6e31dc368965 May 26 02:34:12 minikube dockerd[2206]: time="2021-05-26T02:34:12.131710766Z" level=info msg="shim disconnected" id=2a140d7710304216a0c6f6c6d6c85c6ca73dee4d54d11903dd40f9c5099e8ea2 May 26 02:34:12 minikube dockerd[2199]: time="2021-05-26T02:34:12.137584240Z" level=info msg="ignoring event" container=2a140d7710304216a0c6f6c6d6c85c6ca73dee4d54d11903dd40f9c5099e8ea2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:35:10 minikube dockerd[2206]: time="2021-05-26T02:35:10.575836572Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5d7f57a0f4a42c217d4403ffb2cb90a108bce59920207f20d48069643379b93d pid=333677 May 26 02:35:11 minikube dockerd[2206]: time="2021-05-26T02:35:11.973358206Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/957622b4b76610d36c6acb4c9664c717f92016ef4908b405bf3d07611e7bf0aa pid=333758 May 26 02:35:12 minikube dockerd[2199]: time="2021-05-26T02:35:12.339877137Z" level=info msg="ignoring event" container=957622b4b76610d36c6acb4c9664c717f92016ef4908b405bf3d07611e7bf0aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:35:12 minikube dockerd[2206]: time="2021-05-26T02:35:12.344028503Z" level=info msg="shim disconnected" id=957622b4b76610d36c6acb4c9664c717f92016ef4908b405bf3d07611e7bf0aa May 26 02:35:12 minikube dockerd[2199]: time="2021-05-26T02:35:12.984171286Z" level=info msg="ignoring event" container=5d7f57a0f4a42c217d4403ffb2cb90a108bce59920207f20d48069643379b93d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:35:12 minikube dockerd[2206]: time="2021-05-26T02:35:12.984804415Z" level=info msg="shim disconnected" id=5d7f57a0f4a42c217d4403ffb2cb90a108bce59920207f20d48069643379b93d May 26 02:36:01 minikube dockerd[2206]: time="2021-05-26T02:36:01.171623108Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/113b214b0a2da3251cfca2194dbdcde79e7cb1b1cd7682e571bb8800822a6e1a pid=334040 May 26 02:36:02 minikube dockerd[2206]: time="2021-05-26T02:36:02.216168227Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9b788f8715521900c0b3225e63d680bcff9e397ba1f678dc105cd917a61d9461 pid=334113 May 26 02:36:02 minikube dockerd[2199]: time="2021-05-26T02:36:02.580113461Z" level=info msg="ignoring event" container=9b788f8715521900c0b3225e63d680bcff9e397ba1f678dc105cd917a61d9461 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:36:02 minikube dockerd[2206]: time="2021-05-26T02:36:02.582685929Z" level=info msg="shim disconnected" id=9b788f8715521900c0b3225e63d680bcff9e397ba1f678dc105cd917a61d9461 May 26 02:36:03 minikube dockerd[2199]: 
time="2021-05-26T02:36:03.372654854Z" level=info msg="ignoring event" container=113b214b0a2da3251cfca2194dbdcde79e7cb1b1cd7682e571bb8800822a6e1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:36:03 minikube dockerd[2206]: time="2021-05-26T02:36:03.373901110Z" level=info msg="shim disconnected" id=113b214b0a2da3251cfca2194dbdcde79e7cb1b1cd7682e571bb8800822a6e1a May 26 02:38:02 minikube dockerd[2206]: time="2021-05-26T02:38:02.476040094Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0ee74c773654874751731e7741712c5167bd3dbb114ab438b30b1da48a6c4a72 pid=334817 May 26 02:38:03 minikube dockerd[2206]: time="2021-05-26T02:38:03.359740666Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/43da4fe4af91172b6856ec9dfe5956294c7048adea431cf9a55bf7231fb3ce6a pid=334901 May 26 02:38:03 minikube dockerd[2199]: time="2021-05-26T02:38:03.827867239Z" level=info msg="ignoring event" container=43da4fe4af91172b6856ec9dfe5956294c7048adea431cf9a55bf7231fb3ce6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" May 26 02:38:03 minikube dockerd[2206]: time="2021-05-26T02:38:03.834801646Z" level=info msg="shim disconnected" id=43da4fe4af91172b6856ec9dfe5956294c7048adea431cf9a55bf7231fb3ce6a May 26 02:38:04 minikube dockerd[2206]: time="2021-05-26T02:38:04.451846195Z" level=info msg="shim disconnected" id=0ee74c773654874751731e7741712c5167bd3dbb114ab438b30b1da48a6c4a72 May 26 02:38:04 minikube dockerd[2199]: time="2021-05-26T02:38:04.455588692Z" level=info msg="ignoring event" container=0ee74c773654874751731e7741712c5167bd3dbb114ab438b30b1da48a6c4a72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * * ==> container status <== * CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 43da4fe4af911 bbadf782a297d 5 seconds ago Exited warm 0 0ee74c7736548 9b788f8715521 bbadf782a297d 2 minutes ago Exited warm 0 113b214b0a2da 957622b4b7661 bbadf782a297d 2 minutes ago Exited clean 0 5d7f57a0f4a42 1f34599aba6c3 bbadf782a297d 3 minutes ago Exited warm 0 2a140d7710304 064137b6a7e23 bbadf782a297d 5 minutes ago Exited warm 0 9c279e1428831 44bb6d23eb918 bbadf782a297d 8 minutes ago Exited clean 0 ae1ff8be11204 e26daad9043e6 bbadf782a297d 13 minutes ago Exited clean 0 232dde40485a9 d3cc064fde453 3d5a59b7625d9 19 minutes ago Running manager 0 3eebc97b5f259 b88d0b15f14e2 a2fd0654e5bae 40 minutes ago Running registry-creds 0 af3cbfef86caa 563cec3e4ab0c 9a07b5b4bfac0 26 hours ago Running kubernetes-dashboard 0 0e005c079001a 2cfdf70fb6856 86262685d9abb 26 hours ago Running dashboard-metrics-scraper 0 02f1a320c63c0 1d3cea51eb4df quay.io/jetstack/cert-manager-cainjector@sha256:51c0df411b66aa175e9fc6840f3135d55b52c3781d0b3d4aa10862066d460193 26 hours ago Running cert-manager 0 3acf081d0d905 af3e32499df26 quay.io/jetstack/cert-manager-webhook@sha256:41eacd93a30b566b780a6ae525b2547d2a87f1ec5f067fc02840a220aeb0c3f7 26 hours ago Running cert-manager 0 176d903cf1e64 7be76a38d9722 quay.io/jetstack/cert-manager-controller@sha256:22543d32793abc0069680f80ee5be348dcbb3c74c85ba55835b4cf6c76fe18da 26 hours ago Running cert-manager 0 c495cad4b0788 dfbb99fdeca3b k8s.gcr.io/ingress-nginx/controller@sha256:3dd0fac48073beaca2d67a78c746c7593f9c575168a17139a9955a82c63c4b9a 26 hours ago Running controller 0 fddd7b4a0f438 6defb8db0fb2f 
jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 26 hours ago Exited patch 0 d84feae346a36 baf3e1b7b106f jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 26 hours ago Exited create 0 7bce69b9bef6d 598e224187ff8 cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab 26 hours ago Running minikube-ingress-dns 0 4143940a4e559 c5d0e9cfe1834 6e38f40d628db 26 hours ago Running storage-provisioner 1 baf8a75afab70 88efd69e38724 bfe3a36ebd252 26 hours ago Running coredns 0 f18df517d9d0b 67956e74c157d 6e38f40d628db 26 hours ago Exited storage-provisioner 0 baf8a75afab70 ff291bfad93d9 43154ddb57a83 26 hours ago Running kube-proxy 0 d08795236881f 5e7fbde903473 ed2c44fbdd78b 26 hours ago Running kube-scheduler 0 56df3a62999ff e5139748fba09 a27166429d98e 26 hours ago Running kube-controller-manager 0 9211c895a0e1c 97894bad83f38 0369cf4303ffd 26 hours ago Running etcd 0 069c3cff6158f 38ac5e95b5502 a8c2fdb8bf76e 26 hours ago Running kube-apiserver 0 8de734ce85893 * * ==> coredns [88efd69e3872] <== * .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d * * ==> describe nodes <== * Name: minikube Roles: control-plane,master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=15cede53bdc5fe242228853e737333b09d4336b5 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_05_25T03_45_44_0700 minikube.k8s.io/version=v1.19.0 node-role.kubernetes.io/control-plane= node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 25 May 2021 00:45:41 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Wed, 26 May 2021 02:38:05 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Wed, 26 May 2021 02:33:55 +0000 Tue, 25 May 2021 00:45:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 26 May 2021 02:33:55 +0000 Tue, 25 May 2021 00:45:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 26 May 2021 02:33:55 +0000 Tue, 25 May 2021 00:45:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 26 May 2021 02:33:55 +0000 Tue, 25 May 2021 00:45:46 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.168.20 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 45604772Ki hugepages-2Mi: 0 memory: 3935192Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 45604772Ki hugepages-2Mi: 0 memory: 3935192Ki pods: 110 System Info: Machine ID: 76c3083e7ada4056a2817e0cb314f396 System UUID: 684111eb-0000-0000-ba24-acde48001122 Boot ID: 93a4a0e6-4cf5-483a-b058-36a63ff337a9 Kernel Version: 4.19.171 OS Image: Buildroot 2020.02.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://20.10.4 Kubelet Version: v1.20.2 Kube-Proxy Version: v1.20.2 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (20 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- 
------------ ---------- --------------- ------------- --- cert-manager cert-manager-7dd5854bb4-x8nq9 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h cert-manager cert-manager-cainjector-64c949654c-bv6kt 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h cert-manager cert-manager-webhook-6bdffc7c9d-q8cz4 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h 0%!)(MISSING) 25h ingress-nginx ingress-nginx-controller-5d88495688-xswv5 100m (5%!)(MISSING) 0 (0%!)(MISSING) 90Mi (2%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system coredns-74ff55c5b-58595 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (4%!)(MISSING) 25h kube-system etcd-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (2%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system kube-apiserver-minikube 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system kube-controller-manager-minikube 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system kube-ingress-dns-minikube 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system kube-proxy-cgnzx 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system kube-scheduler-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kube-system registry-creds-85b974c7d7-x25t5 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 40m kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kubernetes-dashboard dashboard-metrics-scraper-f6647bd8c-wxxvw 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h kubernetes-dashboard kubernetes-dashboard-968bcb79-946gm 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 25h (6%!)(MISSING) 22m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 950m (47%!)(MISSING) 100m (5%!)(MISSING) memory 280Mi (7%!)(MISSING) 426Mi (11%!)(MISSING) ephemeral-storage 100Mi (0%!)(MISSING) 0 (0%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: * * ==> dmesg <== * [May26 00:12] kauditd_printk_skb: 2 callbacks suppressed [May26 00:22] kauditd_printk_skb: 2 callbacks suppressed [May26 00:32] kauditd_printk_skb: 2 callbacks suppressed [May26 00:42] kauditd_printk_skb: 2 callbacks suppressed [May26 00:52] kauditd_printk_skb: 2 callbacks suppressed [May26 01:12] kauditd_printk_skb: 2 callbacks suppressed [May26 01:22] kauditd_printk_skb: 2 callbacks suppressed [May26 01:32] kauditd_printk_skb: 2 callbacks suppressed [May26 01:42] kauditd_printk_skb: 2 callbacks suppressed [May26 01:52] kauditd_printk_skb: 2 callbacks suppressed [May26 01:54] kauditd_printk_skb: 8 callbacks suppressed [May26 02:00] systemd-fstab-generator[320651]: Ignoring "noauto" for root device [ +0.891613] systemd-fstab-generator[320671]: Ignoring "noauto" for root device [ +0.538734] systemd-fstab-generator[320686]: Ignoring "noauto" for root device [May26 02:02] kauditd_printk_skb: 2 callbacks suppressed [May26 02:07] kauditd_printk_skb: 2 callbacks suppressed [ +38.122955] kauditd_printk_skb: 2 callbacks suppressed [May26 02:09] kauditd_printk_skb: 2 callbacks suppressed [May26 02:12] kauditd_printk_skb: 2 callbacks suppressed [May26 02:15] kauditd_printk_skb: 2 callbacks suppressed [May26 02:19] 9pnet: p9_fd_create_tcp (327977): problem connecting socket to 192.168.64.1 [May26 02:20] kauditd_printk_skb: 2 callbacks suppressed [May26 02:22] 9pnet: p9_fd_create_tcp (329303): problem connecting socket to 192.168.64.1 [May26 02:30] 9pnet: p9_fd_create_tcp (331740): problem connecting socket to 192.168.64.1 [May26 02:32] kauditd_printk_skb: 2 callbacks suppressed [ +39.748472] 9pnet: p9_fd_create_tcp (332835): problem connecting socket to 192.168.64.1 * * ==> etcd [97894bad83f3] <== * 2021-05-26 02:30:27.911814 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:30:37.913926 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:30:39.285869 W | etcdserver: read-only range request "key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true " with result "range_response_count:0 size:6" took too long (125.69772ms) to execute 2021-05-26 02:30:39.286905 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (101.712918ms) to execute 2021-05-26 02:30:47.910499 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:30:57.915092 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:07.913198 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:17.911829 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:27.910986 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:37.912463 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:47.911209 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:31:57.911331 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:07.916844 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:17.912843 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:27.915737 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:37.911737 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:47.910931 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:32:54.419310 I | mvcc: store.index: compact 86964 2021-05-26 02:32:54.538345 I | mvcc: finished scheduled compaction at 86964 (took 115.523709ms) 2021-05-26 02:32:57.730227 W | etcdserver: read-only range request "key:\"/registry/configmaps/ingress-nginx/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:614" took too long (112.332807ms) to execute 2021-05-26 02:32:57.911046 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:07.909904 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:17.910756 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:18.187843 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cert-manager-controller\" " with result "range_response_count:1 size:601" took too long (138.625958ms) to execute 2021-05-26 02:33:27.911373 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:37.912437 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:47.912087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:33:57.912153 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:07.914760 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:17.912708 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:22.012602 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1110" took too long (309.661248ms) to execute 2021-05-26 02:34:22.014287 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true " with result "range_response_count:0 size:8" took too long (112.75586ms) to execute 2021-05-26 02:34:26.677254 W | etcdserver: read-only range request "key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true " with result "range_response_count:0 size:8" took too long (135.755645ms) to execute 2021-05-26 02:34:27.918782 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:37.912795 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:47.911566 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:34:57.910272 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:07.921060 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:17.913753 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:27.912214 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:37.912359 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:47.910082 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:35:57.911024 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:07.910287 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:17.910806 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:27.913926 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:37.913507 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:47.910945 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:36:57.911347 I | etcdserver/api/etcdhttp: 
/health OK (status code 200) 2021-05-26 02:37:07.914027 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:37:17.912294 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:37:27.919301 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:37:37.918269 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:37:47.914630 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:37:54.427243 I | mvcc: store.index: compact 88284 2021-05-26 02:37:54.457074 I | mvcc: finished scheduled compaction at 88284 (took 28.579781ms) 2021-05-26 02:37:57.910870 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-05-26 02:38:07.922004 I | etcdserver/api/etcdhttp: /health OK (status code 200) * * ==> kernel <== * 02:38:09 up 14:16, 0 users, load average: 1.87, 1.84, 1.86 Linux minikube 4.19.171 #1 SMP Fri Apr 9 22:56:47 UTC 2021 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2020.02.10" * * ==> kube-apiserver [38ac5e95b550] <== * I0526 02:26:23.013845 1 client.go:360] parsed scheme: "passthrough" I0526 02:26:23.014301 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:26:23.014794 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:26:57.939543 1 client.go:360] parsed scheme: "passthrough" I0526 02:26:57.939668 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:26:57.939721 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:27:33.980998 1 client.go:360] parsed scheme: "passthrough" I0526 02:27:33.981091 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:27:33.981127 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:28:09.625326 1 client.go:360] parsed scheme: "passthrough" I0526 02:28:09.625583 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:28:09.625723 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:28:53.804343 1 client.go:360] parsed scheme: "passthrough" I0526 02:28:53.804402 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:28:53.804544 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:29:24.986295 1 client.go:360] parsed scheme: "passthrough" I0526 02:29:24.990061 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:29:24.990135 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:30:09.578736 1 client.go:360] parsed scheme: "passthrough" I0526 02:30:09.578821 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:30:09.578854 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:30:54.124545 1 client.go:360] parsed scheme: "passthrough" I0526 02:30:54.124640 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:30:54.124678 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:31:37.197983 1 client.go:360] parsed scheme: "passthrough" I0526 02:31:37.198058 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:31:37.198086 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:32:11.815653 1 client.go:360] parsed scheme: "passthrough" 
I0526 02:32:11.815706 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:32:11.815740 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:32:46.151663 1 client.go:360] parsed scheme: "passthrough" I0526 02:32:46.152136 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:32:46.152382 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:33:16.475305 1 client.go:360] parsed scheme: "passthrough" I0526 02:33:16.475464 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:33:16.475494 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:33:53.257186 1 client.go:360] parsed scheme: "passthrough" I0526 02:33:53.257303 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:33:53.257337 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:34:24.154369 1 client.go:360] parsed scheme: "passthrough" I0526 02:34:24.154568 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:34:24.154603 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:35:08.453575 1 client.go:360] parsed scheme: "passthrough" I0526 02:35:08.453673 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:35:08.453705 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:35:39.273596 1 client.go:360] parsed scheme: "passthrough" I0526 02:35:39.273665 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:35:39.273694 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:36:14.086161 1 client.go:360] parsed scheme: "passthrough" I0526 02:36:14.086866 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:36:14.087135 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:36:56.732578 1 client.go:360] parsed scheme: "passthrough" I0526 02:36:56.732945 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:36:56.733115 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:37:28.142353 1 client.go:360] parsed scheme: "passthrough" I0526 02:37:28.142520 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:37:28.142543 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0526 02:38:04.802541 1 client.go:360] parsed scheme: "passthrough" I0526 02:38:04.802637 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I0526 02:38:04.802678 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [e5139748fba0] <== * * ==> kube-proxy [ff291bfad93d] <== * I0525 00:46:01.343616 1 node.go:172] Successfully retrieved node IP: 192.168.168.20 I0525 00:46:01.343874 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.168.20), assume IPv4 operation W0525 00:46:01.361646 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy I0525 00:46:01.361791 1 server_others.go:185] Using iptables Proxier. 
I0525 00:46:01.362258 1 server.go:650] Version: v1.20.2 I0525 00:46:01.362575 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I0525 00:46:01.362692 1 conntrack.go:52] Setting nf_conntrack_max to 131072 I0525 00:46:01.363369 1 conntrack.go:83] Setting conntrack hashsize to 32768 I0525 00:46:01.368424 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I0525 00:46:01.368668 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I0525 00:46:01.369118 1 config.go:315] Starting service config controller I0525 00:46:01.369242 1 shared_informer.go:240] Waiting for caches to sync for service config I0525 00:46:01.369386 1 config.go:224] Starting endpoint slice config controller I0525 00:46:01.369429 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I0525 00:46:01.469475 1 shared_informer.go:247] Caches are synced for service config I0525 00:46:01.469676 1 shared_informer.go:247] Caches are synced for endpoint slice config * * ==> kube-scheduler [5e7fbde90347] <== * I0525 00:45:35.563992 1 serving.go:331] Generated self-signed cert in-memory W0525 00:45:41.469793 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0525 00:45:41.469814 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0525 00:45:41.469821 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0525 00:45:41.469826 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0525 00:45:41.534995 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0525 00:45:41.549063 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0525 00:45:41.549345 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0525 00:45:41.560038 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E0525 00:45:41.549720 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0525 00:45:41.549800 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0525 00:45:41.549860 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0525 00:45:41.549925 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0525 00:45:41.549983 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0525 00:45:41.559781 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0525 00:45:41.559840 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0525 00:45:41.559913 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0525 00:45:41.559971 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0525 00:45:41.574312 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot 
list resource "configmaps" in API group "" in the namespace "kube-system" E0525 00:45:41.575483 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0525 00:45:41.575661 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0525 00:45:42.428433 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0525 00:45:42.520023 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0525 00:45:42.764127 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I0525 00:45:45.567840 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Tue 2021-05-25 00:45:00 UTC, end at Wed 2021-05-26 02:38:10 UTC. 
-- May 26 02:30:09 minikube kubelet[3806]: I0526 02:30:09.853072 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/680eb114-d389-412d-a8d6-7ffb208ec061-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:30:09 minikube kubelet[3806]: I0526 02:30:09.853120 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/7d49aaac-758a-4267-b91d-a198496cd245-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:30:10 minikube kubelet[3806]: W0526 02:30:10.600279 3806 pod_container_deletor.go:79] Container "f09b8676a91accac9ae7ab4099c23a3d4127ba3c53561c960f712e6c76cff269" not found in pod's containers May 26 02:30:10 minikube kubelet[3806]: W0526 02:30:10.631992 3806 pod_container_deletor.go:79] Container "ae1ff8be11204f75279b46951844f8f6389114791606ae6315c6ef3e6e8b3777" not found in pod's containers May 26 02:30:31 minikube kubelet[3806]: I0526 02:30:31.254113 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 18adb887d9dccc64b0f0b5a404124ca6bf31bb2ead930ea052e56b4971caca99 May 26 02:30:31 minikube kubelet[3806]: I0526 02:30:31.350898 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: b9ed42e5aa1deb0effeda2e0893e5bc271644257a353e6a7b0daf393af8ce454 May 26 02:32:07 minikube kubelet[3806]: I0526 02:32:07.692226 3806 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 02:32:09 minikube kubelet[3806]: W0526 02:32:09.133860 3806 pod_container_deletor.go:79] Container "9c279e1428831d57171f72cca9d42b19db79642e8a47f222e7e6b2308779dd65" not found in pod's containers May 26 02:32:10 minikube kubelet[3806]: I0526 02:32:10.226803 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 064137b6a7e2347224b2270c057d26f442b364823bf3c614b694c02303e49054 May 26 02:32:10 minikube kubelet[3806]: I0526 02:32:10.404716 3806 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/b9018559-b711-453b-99ad-d0420baee189-default-token-skxsg") pod "b9018559-b711-453b-99ad-d0420baee189" (UID: "b9018559-b711-453b-99ad-d0420baee189") May 26 02:32:10 minikube kubelet[3806]: I0526 02:32:10.417158 3806 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9018559-b711-453b-99ad-d0420baee189-default-token-skxsg" (OuterVolumeSpecName: "default-token-skxsg") pod "b9018559-b711-453b-99ad-d0420baee189" (UID: "b9018559-b711-453b-99ad-d0420baee189"). InnerVolumeSpecName "default-token-skxsg". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 26 02:32:10 minikube kubelet[3806]: I0526 02:32:10.523060 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/b9018559-b711-453b-99ad-d0420baee189-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:32:11 minikube kubelet[3806]: W0526 02:32:11.268794 3806 pod_container_deletor.go:79] Container "9c279e1428831d57171f72cca9d42b19db79642e8a47f222e7e6b2308779dd65" not found in pod's containers May 26 02:32:31 minikube kubelet[3806]: I0526 02:32:31.756307 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4ff801d9abeeec05d7178d783e0402e1b8de6b37caac440315fb61ba8ca9a1cc May 26 02:34:09 minikube kubelet[3806]: I0526 02:34:09.349826 3806 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 02:34:10 minikube kubelet[3806]: W0526 02:34:10.854577 3806 pod_container_deletor.go:79] Container "2a140d7710304216a0c6f6c6d6c85c6ca73dee4d54d11903dd40f9c5099e8ea2" not found in pod's containers May 26 02:34:11 minikube kubelet[3806]: I0526 02:34:11.965196 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1f34599aba6c36602efdb2b8a4cf7103b07a1062aec3766c6aef6e31dc368965 May 26 02:34:12 minikube kubelet[3806]: I0526 02:34:12.180865 3806 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/b3cd055f-17b7-4edc-b370-8d3f43426016-default-token-skxsg") pod "b3cd055f-17b7-4edc-b370-8d3f43426016" (UID: "b3cd055f-17b7-4edc-b370-8d3f43426016") May 26 02:34:12 minikube kubelet[3806]: I0526 02:34:12.216135 3806 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3cd055f-17b7-4edc-b370-8d3f43426016-default-token-skxsg" (OuterVolumeSpecName: "default-token-skxsg") pod "b3cd055f-17b7-4edc-b370-8d3f43426016" (UID: "b3cd055f-17b7-4edc-b370-8d3f43426016"). InnerVolumeSpecName "default-token-skxsg". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 26 02:34:12 minikube kubelet[3806]: I0526 02:34:12.284755 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/b3cd055f-17b7-4edc-b370-8d3f43426016-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:34:13 minikube kubelet[3806]: W0526 02:34:13.049721 3806 pod_container_deletor.go:79] Container "2a140d7710304216a0c6f6c6d6c85c6ca73dee4d54d11903dd40f9c5099e8ea2" not found in pod's containers May 26 02:34:32 minikube kubelet[3806]: I0526 02:34:32.045290 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 081dd30eef209f2240aec670a83665c5c9740738b491931614e02f0c2093da96 May 26 02:35:09 minikube kubelet[3806]: I0526 02:35:09.963774 3806 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 02:35:11 minikube kubelet[3806]: W0526 02:35:11.801707 3806 pod_container_deletor.go:79] Container "5d7f57a0f4a42c217d4403ffb2cb90a108bce59920207f20d48069643379b93d" not found in pod's containers May 26 02:35:12 minikube kubelet[3806]: I0526 02:35:12.865733 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 957622b4b76610d36c6acb4c9664c717f92016ef4908b405bf3d07611e7bf0aa May 26 02:35:12 minikube kubelet[3806]: I0526 02:35:12.997670 3806 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/85467a1d-15c5-42d0-b447-436e885feb2e-default-token-skxsg") pod "85467a1d-15c5-42d0-b447-436e885feb2e" (UID: "85467a1d-15c5-42d0-b447-436e885feb2e") May 26 02:35:13 minikube kubelet[3806]: I0526 02:35:13.021977 3806 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85467a1d-15c5-42d0-b447-436e885feb2e-default-token-skxsg" (OuterVolumeSpecName: "default-token-skxsg") pod "85467a1d-15c5-42d0-b447-436e885feb2e" (UID: "85467a1d-15c5-42d0-b447-436e885feb2e"). InnerVolumeSpecName "default-token-skxsg". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 26 02:35:13 minikube kubelet[3806]: I0526 02:35:13.097916 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/85467a1d-15c5-42d0-b447-436e885feb2e-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:35:13 minikube kubelet[3806]: W0526 02:35:13.930767 3806 pod_container_deletor.go:79] Container "5d7f57a0f4a42c217d4403ffb2cb90a108bce59920207f20d48069643379b93d" not found in pod's containers May 26 02:35:32 minikube kubelet[3806]: I0526 02:35:32.141868 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 73fe995e0f41f93d771281ce0e079bccd6a4fe51ff00dadcf308fa61a5c039c5 May 26 02:36:00 minikube kubelet[3806]: I0526 02:36:00.623179 3806 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 02:36:02 minikube kubelet[3806]: W0526 02:36:02.046092 3806 pod_container_deletor.go:79] Container "113b214b0a2da3251cfca2194dbdcde79e7cb1b1cd7682e571bb8800822a6e1a" not found in pod's containers May 26 02:36:03 minikube kubelet[3806]: I0526 02:36:03.094228 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9b788f8715521900c0b3225e63d680bcff9e397ba1f678dc105cd917a61d9461 May 26 02:36:03 minikube kubelet[3806]: I0526 02:36:03.341603 3806 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/9bf95fcd-e30e-446d-821d-8e18e2e60f11-default-token-skxsg") pod "9bf95fcd-e30e-446d-821d-8e18e2e60f11" (UID: "9bf95fcd-e30e-446d-821d-8e18e2e60f11") May 26 02:36:03 minikube kubelet[3806]: I0526 02:36:03.382336 3806 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bf95fcd-e30e-446d-821d-8e18e2e60f11-default-token-skxsg" (OuterVolumeSpecName: "default-token-skxsg") pod "9bf95fcd-e30e-446d-821d-8e18e2e60f11" (UID: "9bf95fcd-e30e-446d-821d-8e18e2e60f11"). InnerVolumeSpecName "default-token-skxsg". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 26 02:36:03 minikube kubelet[3806]: I0526 02:36:03.447926 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/9bf95fcd-e30e-446d-821d-8e18e2e60f11-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:36:04 minikube kubelet[3806]: W0526 02:36:04.144981 3806 pod_container_deletor.go:79] Container "113b214b0a2da3251cfca2194dbdcde79e7cb1b1cd7682e571bb8800822a6e1a" not found in pod's containers May 26 02:36:32 minikube kubelet[3806]: I0526 02:36:32.239462 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 1bbcba9dff3127ba1033c4666207b0813cb0357880a6c5bd9d93cbcf7704c31e May 26 02:38:01 minikube kubelet[3806]: I0526 02:38:01.978090 3806 topology_manager.go:187] [topologymanager] Topology Admit Handler May 26 02:38:03 minikube kubelet[3806]: W0526 02:38:03.179814 3806 pod_container_deletor.go:79] Container "0ee74c773654874751731e7741712c5167bd3dbb114ab438b30b1da48a6c4a72" not found in pod's containers May 26 02:38:04 minikube kubelet[3806]: I0526 02:38:04.285606 3806 scope.go:95] [topologymanager] RemoveContainer - Container ID: 43da4fe4af91172b6856ec9dfe5956294c7048adea431cf9a55bf7231fb3ce6a May 26 02:38:04 minikube kubelet[3806]: I0526 02:38:04.421238 3806 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/984e79fe-5563-43a8-926f-681d504994ec-default-token-skxsg") pod "984e79fe-5563-43a8-926f-681d504994ec" (UID: "984e79fe-5563-43a8-926f-681d504994ec") May 26 02:38:04 minikube kubelet[3806]: I0526 02:38:04.432987 3806 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/984e79fe-5563-43a8-926f-681d504994ec-default-token-skxsg" (OuterVolumeSpecName: "default-token-skxsg") pod "984e79fe-5563-43a8-926f-681d504994ec" (UID: "984e79fe-5563-43a8-926f-681d504994ec"). InnerVolumeSpecName "default-token-skxsg". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 26 02:38:04 minikube kubelet[3806]: I0526 02:38:04.522875 3806 reconciler.go:319] Volume detached for volume "default-token-skxsg" (UniqueName: "kubernetes.io/secret/984e79fe-5563-43a8-926f-681d504994ec-default-token-skxsg") on node "minikube" DevicePath "" May 26 02:38:05 minikube kubelet[3806]: W0526 02:38:05.347332 3806 pod_container_deletor.go:79] Container "0ee74c773654874751731e7741712c5167bd3dbb114ab438b30b1da48a6c4a72" not found in pod's containers * * ==> kubernetes-dashboard [563cec3e4ab0] <== * 2021/05/26 02:03:27 Getting list config maps in the namespace default 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/persistentvolume?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list persistent volumes 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/secret/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of secrets in &{[default]} namespace 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/storageclass?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of storage classes in the cluster 2021/05/26 02:03:28 received 0 resources from sidecar instead of 10 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 received 0 resources from sidecar instead of 4 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 received 0 resources from sidecar instead of 10 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 2021/05/26 02:03:28 Skipping metric because of error: Metric label not set. 
2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/clusterrole?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of RBAC roles 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/clusterrolebinding?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/networkpolicy/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of all clusterRoleBindings in the cluster 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/rolebinding/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of all roleBindings in the cluster 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/namespace?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 Getting list of namespaces 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/node?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/serviceaccount/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/persistentvolume?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Incoming HTTP/1.1 GET /api/v1/role/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 Getting list persistent volumes 2021/05/26 02:03:28 Getting list of all roles in the cluster 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code 2021/05/26 02:03:28 [2021-05-26T02:03:28Z] Outcoming response to 127.0.0.1 with 200 status code * * ==> storage-provisioner [67956e74c157] <== * I0525 00:46:01.295079 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0525 00:46:31.299706 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout * * ==> storage-provisioner [c5d0e9cfe183] <== * I0525 00:46:31.840742 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0525 00:46:31.850045 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0525 00:46:31.850104 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I0525 00:46:31.871270 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0525 00:46:31.871683 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_0fc001e0-7d43-448d-b3ff-8386da6da9a4! I0525 00:46:31.872475 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6da7e36f-84a1-4261-8c48-49ee4a76eb90", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_0fc001e0-7d43-448d-b3ff-8386da6da9a4 became leader I0525 00:46:31.972863 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_0fc001e0-7d43-448d-b3ff-8386da6da9a4! I0525 08:59:28.881823 1 request.go:655] Throttling request took 1.817429886s, request: PUT:https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath ```

ifconfig (note bridge100):

```shell lo0: flags=8049 mtu 16384 options=1203 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 nd6 options=201 gif0: flags=8010 mtu 1280 stf0: flags=0<> mtu 1280 XHC0: flags=0<> mtu 0 XHC1: flags=0<> mtu 0 XHC20: flags=0<> mtu 0 en0: flags=8863 mtu 1500 options=400 ether inet6 fe80::c59:12d7:d9bd:b9dc%en0 prefixlen 64 secured scopeid 0x8 inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255 nd6 options=201 media: autoselect status: active en3: flags=8963 mtu 1500 options=460 ether media: autoselect status: inactive en1: flags=8963 mtu 1500 options=460 ether media: autoselect status: inactive en4: flags=8963 mtu 1500 options=460 ether media: autoselect status: inactive en2: flags=8963 mtu 1500 options=460 ether media: autoselect status: inactive bridge0: flags=8822 mtu 1500 options=63 ether Configuration: id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0 maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200 root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0 ipfilter disabled flags 0x0 member: en1 flags=3 ifmaxaddr 0 port 10 priority 0 path cost 0 member: en2 flags=3 ifmaxaddr 0 port 12 priority 0 path cost 0 member: en3 flags=3 ifmaxaddr 0 port 9 priority 0 path cost 0 member: en4 flags=3 ifmaxaddr 0 port 11 priority 0 path cost 0 media: status: inactive p2p0: flags=8843 mtu 2304 options=400 ether media: autoselect status: inactive awdl0: flags=8943 mtu 1484 options=400 ether 8a:5c:69:03:a6:8c inet6 fe80::885c:69ff:fe03:a68c%awdl0 prefixlen 64 scopeid 0xf nd6 options=201 media: autoselect status: active llw0: flags=8863 mtu 1500 options=400 ether 8a:5c:69:03:a6:8c inet6 fe80::885c:69ff:fe03:a68c%llw0 prefixlen 64 scopeid 0x10 nd6 options=201 media: autoselect status: active utun0: flags=8051 mtu 1380 inet6 fe80::6574:15f9:6502:bbfb%utun0 prefixlen 64 scopeid 0x11 nd6 options=201 utun1: flags=8051 mtu 2000 inet6 fe80::6819:a39e:388f:bde4%utun1 prefixlen 64 scopeid 0x12 nd6 options=201 en7: flags=8963 mtu 1500 ether ea:e2:ef:cb:c4:68 media: autoselect status: active bridge100: flags=8a63 mtu 1500 options=3 ether 7a:4f:43:25:96:64 inet 192.168.168.1 netmask 0xffffff00 broadcast 192.168.168.255 inet6 fe80::1c42:7ed0:eabe:c606%bridge100 prefixlen 64 secured scopeid 0x14 Configuration: id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0 maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200 root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0 ipfilter disabled flags 0x0 member: en7 flags=3 ifmaxaddr 0 port 19 priority 0 path cost 0 nd6 options=201 media: autoselect status: active utun2: flags=8051 mtu 1380 inet6 fe80::8aad:c5a0:40cd:d5a0%utun2 prefixlen 64 scopeid 0x15 nd6 options=201 utun3: flags=8051 mtu 1380 inet6 fe80::6b17:c30:395d:5615%utun3 prefixlen 64 scopeid 0x16 nd6 options=201 en6: flags=8863 mtu 1500 ether inet6 fe80::aede:48ff:fe00:1122%en6 prefixlen 64 scopeid 0x7 nd6 options=201 media: autoselect status: active ```

minikube ssh 'cat /etc/hosts'

127.0.0.1   localhost
127.0.1.1   minikube
192.168.64.1    host.minikube.internal
192.168.168.20  control-plane.minikube.internal

It seems like the root cause is that minikube simply has the host IP wrong: the guest's /etc/hosts maps host.minikube.internal to the hard-coded 192.168.64.1, while the actual bridge100 address on the host is 192.168.168.1.
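
For anyone hitting this, a quick way to confirm the mismatch is to compare the bridge address the host is actually using with the address minikube wrote into the guest. A minimal shell sketch; it assumes hyperkit's bridge shows up as bridge100, as in the ifconfig output above:

```shell
# On the macOS host: the gateway address hyperkit's bridge is actually using
ifconfig bridge100 | awk '/inet /{print $2}'
# -> 192.168.168.1 on this machine

# Inside the VM: the address minikube recorded for the host
minikube ssh 'grep host.minikube.internal /etc/hosts'
# -> 192.168.64.1    host.minikube.internal (the hard-coded default)
```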

All software is at the latest version:

> minikube version
minikube version: v1.20.0
commit: c61663e942ec43b20e8e70839dcca52e44cd85ae

> hyperkit -v
hyperkit: v0.20210107-2-g2f061e
sharifelgamal commented 3 years ago

This strikes me as a legitimate bug in our mounting code.

iamnoah commented 3 years ago

@sharifelgamal thanks for taking a look. Note that whatever is populating /etc/hosts also gets the IP wrong.
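
If the stale entry is the immediate annoyance, it can be patched by hand. A rough sketch only, assuming the hyperkit bridge is bridge100 and that busybox sed is available in the guest; note this merely repairs name resolution for host.minikube.internal inside the VM, it does not make minikube mount work (the file server on the host still tries to bind 192.168.64.1), and minikube will likely rewrite the file on the next start:

```shell
# Real bridge address on the host (assumes the hyperkit bridge is bridge100)
HOST_IP=$(ifconfig bridge100 | awk '/inet /{print $2}')

# Rewrite the stale host.minikube.internal entry inside the guest
minikube ssh "sudo sed -i \"s/.*host.minikube.internal/$HOST_IP host.minikube.internal/\" /etc/hosts"

# Verify
minikube ssh 'grep host.minikube.internal /etc/hosts'
```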

mcg1969 commented 3 years ago

I'm now seeing this as well. My bridge100 address is 192.168.60.1. So far I have tried modifying the Minikube /etc/hosts manually and passing --ip 192.168.60.1 to the mount command; neither has resolved the issue. Here's the resulting log. I am guessing the timeout is a red herring and that the earlier bind: can't assign requested address error is the more direct symptom (a sketch for feeding --ip straight from ifconfig follows the log).

Log file created at: 2021/08/03 09:13:35
Running on machine: michaels
Binary: Built with gc go1.16.1 for darwin/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0803 09:13:35.437102    1384 out.go:291] Setting OutFile to fd 1 ...
I0803 09:13:35.437372    1384 out.go:343] isatty.IsTerminal(1) = true
I0803 09:13:35.437380    1384 out.go:304] Setting ErrFile to fd 2...
I0803 09:13:35.437385    1384 out.go:343] isatty.IsTerminal(2) = true
I0803 09:13:35.437465    1384 root.go:316] Updating PATH: /Users/mgrant/.minikube/bin
I0803 09:13:35.437721    1384 mustload.go:65] Loading cluster: minikube
I0803 09:13:35.438346    1384 main.go:128] libmachine: Found binary path at /Users/mgrant/.minikube/bin/docker-machine-driver-hyperkit
I0803 09:13:35.438393    1384 main.go:128] libmachine: Launching plugin server for driver hyperkit
I0803 09:13:35.449024    1384 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:50058
I0803 09:13:35.449541    1384 main.go:128] libmachine: () Calling .GetVersion
I0803 09:13:35.450017    1384 main.go:128] libmachine: Using API Version  1
I0803 09:13:35.450032    1384 main.go:128] libmachine: () Calling .SetConfigRaw
I0803 09:13:35.450351    1384 main.go:128] libmachine: () Calling .GetMachineName
I0803 09:13:35.450473    1384 main.go:128] libmachine: (minikube) Calling .GetState
I0803 09:13:35.450600    1384 main.go:128] libmachine: (minikube) DBG | exe=/Users/mgrant/.minikube/bin/docker-machine-driver-hyperkit uid=0
I0803 09:13:35.450723    1384 main.go:128] libmachine: (minikube) DBG | hyperkit pid from json: 1004
I0803 09:13:35.451800    1384 host.go:66] Checking if "minikube" exists ...
I0803 09:13:35.452164    1384 main.go:128] libmachine: Found binary path at /Users/mgrant/.minikube/bin/docker-machine-driver-hyperkit
I0803 09:13:35.452192    1384 main.go:128] libmachine: Launching plugin server for driver hyperkit
I0803 09:13:35.462532    1384 main.go:128] libmachine: Plugin server listening at address 127.0.0.1:50064
I0803 09:13:35.462964    1384 main.go:128] libmachine: () Calling .GetVersion
I0803 09:13:35.463369    1384 main.go:128] libmachine: Using API Version  1
I0803 09:13:35.463385    1384 main.go:128] libmachine: () Calling .SetConfigRaw
I0803 09:13:35.463656    1384 main.go:128] libmachine: () Calling .GetMachineName
I0803 09:13:35.463795    1384 main.go:128] libmachine: (minikube) Calling .DriverName
I0803 09:13:35.463912    1384 main.go:128] libmachine: (minikube) Calling .DriverName
I0803 09:13:35.464371    1384 main.go:128] libmachine: (minikube) Calling .DriverName
I0803 09:13:35.485707    1384 out.go:170] 📁  Mounting host path /Users/mgrant/Repos/anaconda-platform into VM as /Users/mgrant/Repos/anaconda-platform ...
I0803 09:13:35.506391    1384 out.go:170]     ▪ Mount type:   
I0803 09:13:35.527420    1384 out.go:170]     ▪ User ID:      docker
I0803 09:13:35.548325    1384 out.go:170]     ▪ Group ID:     docker
I0803 09:13:35.569339    1384 out.go:170]     ▪ Version:      9p2000.L
I0803 09:13:35.590349    1384 out.go:170]     ▪ Message Size: 262144
I0803 09:13:35.612431    1384 out.go:170]     ▪ Permissions:  755 (-rwxr-xr-x)
I0803 09:13:35.633340    1384 out.go:170]     ▪ Options:      map[]
I0803 09:13:35.654317    1384 out.go:170]     ▪ Bind Address: 192.160.60.1:50066
W0803 09:13:35.654482    1384 out.go:424] no arguments passed for "🚀  Userspace file server: " - returning raw string
I0803 09:13:35.654747    1384 ssh_runner.go:149] Run: /bin/bash -c "[ "x$(findmnt -T /Users/mgrant/Repos/anaconda-platform | grep /Users/mgrant/Repos/anaconda-platform)" != "x" ] && sudo umount -f /Users/mgrant/Repos/anaconda-platform || echo "
I0803 09:13:35.675306    1384 out.go:170] 🚀  Userspace file server: 
I0803 09:13:35.675430    1384 main.go:128] libmachine: (minikube) Calling .GetSSHHostname
I0803 09:13:35.675622    1384 main.go:114] stdlog: ufs.go:27 listen tcp 192.160.60.1:50066: bind: can't assign requested address
W0803 09:13:35.675646    1384 out.go:424] no arguments passed for "🛑  Userspace file server is shutdown\n" - returning raw string
W0803 09:13:35.675666    1384 out.go:424] no arguments passed for "🛑  Userspace file server is shutdown\n" - returning raw string
I0803 09:13:35.675765    1384 main.go:128] libmachine: (minikube) Calling .GetSSHPort
I0803 09:13:35.701386    1384 out.go:170] 🛑  Userspace file server is shutdown
I0803 09:13:35.701678    1384 main.go:128] libmachine: (minikube) Calling .GetSSHKeyPath
I0803 09:13:35.701857    1384 main.go:128] libmachine: (minikube) Calling .GetSSHUsername
I0803 09:13:35.702040    1384 sshutil.go:53] new ssh client: &{IP:192.168.60.13 Port:22 SSHKeyPath:/Users/mgrant/.minikube/machines/minikube/id_rsa Username:docker}
I0803 09:13:35.760554    1384 mount.go:147] unmount for /Users/mgrant/Repos/anaconda-platform ran successfully
I0803 09:13:35.760581    1384 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -m 755 -p /Users/mgrant/Repos/anaconda-platform"
I0803 09:13:35.766858    1384 ssh_runner.go:149] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=50066,trans=tcp,version=9p2000.L 192.160.60.1 /Users/mgrant/Repos/anaconda-platform"
I0803 09:14:07.690018    1384 ssh_runner.go:189] Completed: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=50066,trans=tcp,version=9p2000.L 192.160.60.1 /Users/mgrant/Repos/anaconda-platform": (31.922163087s)
I0803 09:14:07.711683    1384 out.go:170] 
W0803 09:14:07.711990    1384 out.go:235] ❌  Exiting due to GUEST_MOUNT: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=50066,trans=tcp,version=9p2000.L 192.160.60.1 /Users/mgrant/Repos/anaconda-platform" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=50066,trans=tcp,version=9p2000.L 192.160.60.1 /Users/mgrant/Repos/anaconda-platform": Process exited with status 32
stdout:

stderr:
mount: /Users/mgrant/Repos/anaconda-platform: mount(2) system call failed: Connection timed out.

W0803 09:14:07.712103    1384 out.go:424] no arguments passed for "\n" - returning raw string
W0803 09:14:07.712149    1384 out.go:235] 
W0803 09:14:07.714775    1384 out.go:424] no arguments passed for "😿  If the above advice does not help, please let us know:\n" - returning raw string
W0803 09:14:07.714800    1384 out.go:424] no arguments passed for "👉  https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
W0803 09:14:07.714806    1384 out.go:424] no arguments passed for "Please attach the following file to the GitHub issue:\n" - returning raw string
W0803 09:14:07.714929    1384 out.go:424] no arguments passed for "😿  If the above advice does not help, please let us know:\n👉  https://github.com/kubernetes/minikube/issues/new/choose\n\nPlease attach the following file to the GitHub issue:\n- /var/folders/zr/v087ks_x1dz1vr46g7jqb1880000gp/T/minikube_mount_ae8b53f7a951cd07b355402e8ec52c8ff7125fc3_0.log\n\n" - returning raw string
W0803 09:14:07.716276    1384 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
W0803 09:14:07.716293    1384 out.go:235] │                                                                                                                        │
W0803 09:14:07.716332    1384 out.go:235] │    😿  If the above advice does not help, please let us know:                                                          │
W0803 09:14:07.716345    1384 out.go:235] │    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                        │
W0803 09:14:07.716351    1384 out.go:235] │                                                                                                                        │
W0803 09:14:07.716356    1384 out.go:235] │    Please attach the following file to the GitHub issue:                                                               │
W0803 09:14:07.716444    1384 out.go:235] │    - /var/folders/zr/v087ks_x1dz1vr46g7jqb1880000gp/T/minikube_mount_ae8b53f7a951cd07b355402e8ec52c8ff7125fc3_0.log    │
W0803 09:14:07.716451    1384 out.go:235] │                                                                                                                        │
W0803 09:14:07.716457    1384 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
W0803 09:14:07.716649    1384 out.go:235] 
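
One detail worth noting from the log above: the file server tried to bind 192.160.60.1 rather than 192.168.60.1, which may simply be a typo in the flag value. A small sketch that feeds --ip straight from ifconfig to rule that out; this still only helps if minikube honors --ip for the 9p server on hyperkit, which is exactly what this issue is about:

```shell
# Feed the mount's bind address straight from ifconfig to avoid typos
BRIDGE_IP=$(ifconfig bridge100 | awk '/inet /{print $2}')
minikube mount --ip "$BRIDGE_IP" --alsologtostderr /Users/mgrant/Repos/anaconda-platform:/Users/mgrant/Repos/anaconda-platform
```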
k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

blacksd commented 2 years ago

This bug is still present; whenever hyperkit does not select the default 192.168.64.0/24 network, the start and mount commands behave incorrectly.
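
For what it's worth, the subnet macOS vmnet is handing to hyperkit can also be inspected directly. A sketch below, with the caveat that the plist path and key name (com.apple.vmnet.plist, Shared_Net_Address) are what other hyperkit threads reference rather than anything confirmed in this issue, so treat them as assumptions; the ifconfig output below shows the same non-default subnet from the bridge side.

```shell
# Subnet macOS vmnet is currently handing out to hyperkit (path and key assumed, see above)
sudo /usr/libexec/PlistBuddy -c 'Print :Shared_Net_Address' \
  /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist
# -> 192.168.205.1 on this machine
```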

❯ ifconfig bridge100
bridge100: flags=8a63<UP,BROADCAST,SMART,RUNNING,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
    options=3<RXCSUM,TXCSUM>
    ether 16:7d:da:3c:ad:51
    inet 192.168.205.1 netmask 0xffffff00 broadcast 192.168.205.255
    inet6 fe80::18d6:2ca3:ade3:7b8f%bridge100 prefixlen 64 secured scopeid 0x11
    Configuration:
        id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
        maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
        root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
        ipfilter disabled flags 0x0
    member: en9 flags=3<LEARNING,DISCOVER>
            ifmaxaddr 0 port 16 priority 0 path cost 0
    Address cache:
        91:b:a7:0:13:16 Vlan1 en9 1052 flags=0<>
    nd6 options=201<PERFORMNUD,DAD>
    media: autoselect
    status: active

Mount fails with:

Mount fail log ```shell Log file created at: 2022/02/28 18:01:51 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0228 18:01:51.115318 52760 out.go:297] Setting OutFile to fd 1 ... I0228 18:01:51.116371 52760 out.go:349] isatty.IsTerminal(1) = false I0228 18:01:51.116379 52760 out.go:310] Setting ErrFile to fd 2... I0228 18:01:51.116386 52760 out.go:349] isatty.IsTerminal(2) = false I0228 18:01:51.116598 52760 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0228 18:01:51.118295 52760 mustload.go:65] Loading cluster: docker-only I0228 18:01:51.120809 52760 config.go:176] Loaded profile config "docker-only": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0 I0228 18:01:51.125240 52760 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0228 18:01:51.125536 52760 main.go:130] libmachine: Launching plugin server for driver hyperkit I0228 18:01:51.159749 52760 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:62090 I0228 18:01:51.160719 52760 main.go:130] libmachine: () Calling .GetVersion I0228 18:01:51.161417 52760 main.go:130] libmachine: Using API Version 1 I0228 18:01:51.161445 52760 main.go:130] libmachine: () Calling .SetConfigRaw I0228 18:01:51.161890 52760 main.go:130] libmachine: () Calling .GetMachineName I0228 18:01:51.162054 52760 main.go:130] libmachine: (docker-only) Calling .GetState I0228 18:01:51.162227 52760 main.go:130] libmachine: (docker-only) DBG | exe=/Users/marco/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0228 18:01:51.162733 52760 main.go:130] libmachine: (docker-only) DBG | hyperkit pid from json: 52461 I0228 18:01:51.165023 52760 host.go:66] Checking if "docker-only" exists ... I0228 18:01:51.166302 52760 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0228 18:01:51.166349 52760 main.go:130] libmachine: Launching plugin server for driver hyperkit I0228 18:01:51.213643 52760 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:62094 I0228 18:01:51.215823 52760 main.go:130] libmachine: () Calling .GetVersion I0228 18:01:51.217537 52760 main.go:130] libmachine: Using API Version 1 I0228 18:01:51.217602 52760 main.go:130] libmachine: () Calling .SetConfigRaw I0228 18:01:51.219142 52760 main.go:130] libmachine: () Calling .GetMachineName I0228 18:01:51.219620 52760 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:01:51.219828 52760 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:01:51.228000 52760 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:01:51.249856 52760 out.go:176] * Mounting host path /Users into VM as /Users ... 
I0228 18:01:51.268707 52760 out.go:176] - Mount type: I0228 18:01:51.289201 52760 out.go:176] - User ID: docker I0228 18:01:51.309000 52760 out.go:176] - Group ID: docker I0228 18:01:51.327794 52760 out.go:176] - Version: 9p2000.L I0228 18:01:51.346831 52760 out.go:176] - Message Size: 262144 I0228 18:01:51.366608 52760 out.go:176] - Options: map[] I0228 18:01:51.384733 52760 out.go:176] - Bind Address: 192.168.64.1:62096 I0228 18:01:51.386193 52760 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /Users | grep /Users)" != "x" ] && sudo umount -f /Users || echo " I0228 18:01:51.402723 52760 main.go:130] libmachine: (docker-only) Calling .GetSSHHostname I0228 18:01:51.402755 52760 out.go:176] * Userspace file server: I0228 18:01:51.403012 52760 main.go:130] libmachine: (docker-only) Calling .GetSSHPort I0228 18:01:51.403169 52760 main.go:130] libmachine: (docker-only) Calling .GetSSHKeyPath I0228 18:01:51.403361 52760 main.go:130] libmachine: (docker-only) Calling .GetSSHUsername I0228 18:01:51.403550 52760 sshutil.go:53] new ssh client: &{IP:192.168.64.7 Port:22 SSHKeyPath:/Users/marco/.minikube/machines/docker-only/id_rsa Username:docker} I0228 18:01:51.459838 52760 mount.go:168] unmount for /Users ran successfully I0228 18:01:51.459873 52760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /Users" I0228 18:01:51.474273 52760 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=62096,trans=tcp,version=9p2000.L 192.168.64.1 /Users" I0228 18:01:51.512363 52760 mount.go:93] mount successful: "" I0228 18:01:51.531793 52760 out.go:176] * Successfully mounted /Users to /Users I0228 18:01:51.550021 52760 out.go:176] I0228 18:01:51.568691 52760 out.go:176] * NOTE: This process must stay alive for the mount to be accessible ... Log file created at: 2022/02/28 18:17:13 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0228 18:17:13.269174 58511 out.go:297] Setting OutFile to fd 1 ... I0228 18:17:13.271343 58511 out.go:349] isatty.IsTerminal(1) = false I0228 18:17:13.271352 58511 out.go:310] Setting ErrFile to fd 2... I0228 18:17:13.271359 58511 out.go:349] isatty.IsTerminal(2) = false I0228 18:17:13.271532 58511 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0228 18:17:13.271576 58511 oci.go:561] shell is pointing to dockerd inside minikube. 
will unset to use host I0228 18:17:13.272291 58511 mustload.go:65] Loading cluster: docker-only I0228 18:17:13.272771 58511 config.go:176] Loaded profile config "docker-only": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0 I0228 18:17:13.273623 58511 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0228 18:17:13.273687 58511 main.go:130] libmachine: Launching plugin server for driver hyperkit I0228 18:17:13.296031 58511 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:62650 I0228 18:17:13.296920 58511 main.go:130] libmachine: () Calling .GetVersion I0228 18:17:13.297589 58511 main.go:130] libmachine: Using API Version 1 I0228 18:17:13.297611 58511 main.go:130] libmachine: () Calling .SetConfigRaw I0228 18:17:13.298079 58511 main.go:130] libmachine: () Calling .GetMachineName I0228 18:17:13.298536 58511 main.go:130] libmachine: (docker-only) Calling .GetState I0228 18:17:13.299013 58511 main.go:130] libmachine: (docker-only) DBG | exe=/Users/marco/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0228 18:17:13.299758 58511 main.go:130] libmachine: (docker-only) DBG | hyperkit pid from json: 58379 I0228 18:17:13.302960 58511 host.go:66] Checking if "docker-only" exists ... I0228 18:17:13.303443 58511 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0228 18:17:13.303483 58511 main.go:130] libmachine: Launching plugin server for driver hyperkit I0228 18:17:13.321174 58511 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:62654 I0228 18:17:13.321934 58511 main.go:130] libmachine: () Calling .GetVersion I0228 18:17:13.322435 58511 main.go:130] libmachine: Using API Version 1 I0228 18:17:13.322445 58511 main.go:130] libmachine: () Calling .SetConfigRaw I0228 18:17:13.322740 58511 main.go:130] libmachine: () Calling .GetMachineName I0228 18:17:13.322846 58511 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:17:13.322969 58511 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:17:13.324492 58511 main.go:130] libmachine: (docker-only) Calling .DriverName I0228 18:17:13.344076 58511 out.go:176] * Mounting host path /Users into VM as /Users ... 
I0228 18:17:13.362423 58511 out.go:176] - Mount type: I0228 18:17:13.383901 58511 out.go:176] - User ID: docker I0228 18:17:13.405011 58511 out.go:176] - Group ID: docker I0228 18:17:13.424267 58511 out.go:176] - Version: 9p2000.L I0228 18:17:13.444113 58511 out.go:176] - Message Size: 262144 I0228 18:17:13.464141 58511 out.go:176] - Options: map[] I0228 18:17:13.483592 58511 out.go:176] - Bind Address: 192.168.64.1:62656 I0228 18:17:13.483829 58511 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /Users | grep /Users)" != "x" ] && sudo umount -f /Users || echo " I0228 18:17:13.503310 58511 out.go:176] * Userspace file server: I0228 18:17:13.503394 58511 main.go:130] libmachine: (docker-only) Calling .GetSSHHostname I0228 18:17:13.503717 58511 main.go:130] libmachine: (docker-only) Calling .GetSSHPort I0228 18:17:13.503879 58511 main.go:130] libmachine: (docker-only) Calling .GetSSHKeyPath I0228 18:17:13.504017 58511 main.go:130] libmachine: (docker-only) Calling .GetSSHUsername I0228 18:17:13.504134 58511 sshutil.go:53] new ssh client: &{IP:192.168.64.7 Port:22 SSHKeyPath:/Users/marco/.minikube/machines/docker-only/id_rsa Username:docker} I0228 18:17:13.555882 58511 mount.go:168] unmount for /Users ran successfully I0228 18:17:13.555902 58511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /Users" I0228 18:17:13.565878 58511 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=62656,trans=tcp,version=9p2000.L 192.168.64.1 /Users" I0228 18:17:13.592771 58511 mount.go:93] mount successful: "" I0228 18:17:13.613179 58511 out.go:176] * Successfully mounted /Users to /Users I0228 18:17:13.632026 58511 out.go:176] I0228 18:17:13.653335 58511 out.go:176] * NOTE: This process must stay alive for the mount to be accessible ... Log file created at: 2022/03/02 14:41:49 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0302 14:41:49.619034 33730 out.go:297] Setting OutFile to fd 1 ... I0302 14:41:49.620512 33730 out.go:349] isatty.IsTerminal(1) = false I0302 14:41:49.620524 33730 out.go:310] Setting ErrFile to fd 2... 
I0302 14:41:49.620541 33730 out.go:349] isatty.IsTerminal(2) = false I0302 14:41:49.621075 33730 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0302 14:41:49.626050 33730 mustload.go:65] Loading cluster: docker-only I0302 14:41:49.626863 33730 config.go:176] Loaded profile config "docker-only": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0 I0302 14:41:49.629865 33730 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 14:41:49.630036 33730 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 14:41:49.657387 33730 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:53456 I0302 14:41:49.659045 33730 main.go:130] libmachine: () Calling .GetVersion I0302 14:41:49.660466 33730 main.go:130] libmachine: Using API Version 1 I0302 14:41:49.660504 33730 main.go:130] libmachine: () Calling .SetConfigRaw I0302 14:41:49.661926 33730 main.go:130] libmachine: () Calling .GetMachineName I0302 14:41:49.662731 33730 main.go:130] libmachine: (docker-only) Calling .GetState I0302 14:41:49.663005 33730 main.go:130] libmachine: (docker-only) DBG | exe=/Users/marco/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0302 14:41:49.664301 33730 main.go:130] libmachine: (docker-only) DBG | hyperkit pid from json: 33709 I0302 14:41:49.668386 33730 host.go:66] Checking if "docker-only" exists ... I0302 14:41:49.669753 33730 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 14:41:49.669853 33730 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 14:41:49.700872 33730 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:53461 I0302 14:41:49.702669 33730 main.go:130] libmachine: () Calling .GetVersion I0302 14:41:49.703484 33730 main.go:130] libmachine: Using API Version 1 I0302 14:41:49.703498 33730 main.go:130] libmachine: () Calling .SetConfigRaw I0302 14:41:49.704090 33730 main.go:130] libmachine: () Calling .GetMachineName I0302 14:41:49.704338 33730 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 14:41:49.704505 33730 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 14:41:49.709920 33730 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 14:41:49.732416 33730 out.go:176] * Mounting host path /Users into VM as /Users ... 
I0302 14:41:49.752803 33730 out.go:176] - Mount type: I0302 14:41:49.773519 33730 out.go:176] - User ID: docker I0302 14:41:49.792453 33730 out.go:176] - Group ID: docker I0302 14:41:49.811807 33730 out.go:176] - Version: 9p2000.L I0302 14:41:49.830911 33730 out.go:176] - Message Size: 262144 I0302 14:41:49.848933 33730 out.go:176] - Options: map[] I0302 14:41:49.868968 33730 out.go:176] - Bind Address: 192.168.64.1:53463 I0302 14:41:49.870954 33730 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /Users | grep /Users)" != "x" ] && sudo umount -f /Users || echo " I0302 14:41:49.888933 33730 main.go:130] libmachine: (docker-only) Calling .GetSSHHostname I0302 14:41:49.889195 33730 out.go:176] * Userspace file server: I0302 14:41:49.889205 33730 main.go:130] libmachine: (docker-only) Calling .GetSSHPort I0302 14:41:49.889314 33730 main.go:130] libmachine: (docker-only) Calling .GetSSHKeyPath I0302 14:41:49.889416 33730 main.go:130] libmachine: (docker-only) Calling .GetSSHUsername I0302 14:41:49.889613 33730 sshutil.go:53] new ssh client: &{IP:192.168.205.7 Port:22 SSHKeyPath:/Users/marco/.minikube/machines/docker-only/id_rsa Username:docker} I0302 14:41:49.891992 33730 main.go:116] stdlog: ufs.go:27 listen tcp 192.168.64.1:53463: bind: can't assign requested address I0302 14:41:49.910251 33730 out.go:176] * Userspace file server is shutdown I0302 14:41:49.949305 33730 mount.go:168] unmount for /Users ran successfully I0302 14:41:49.949328 33730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /Users" I0302 14:41:49.964391 33730 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=53463,trans=tcp,version=9p2000.L 192.168.64.1 /Users" I0302 14:42:21.912736 33730 ssh_runner.go:235] Completed: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=53463,trans=tcp,version=9p2000.L 192.168.64.1 /Users": (31.949117183s) I0302 14:42:21.937514 33730 out.go:176] W0302 14:42:21.937798 33730 out.go:241] X Exiting due to GUEST_MOUNT_COULD_NOT_CONNECT: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=53463,trans=tcp,version=9p2000.L 192.168.64.1 /Users": Process exited with status 32 stdout: stderr: mount: /Users: mount(2) system call failed: Connection timed out. W0302 14:42:21.938829 33730 out.go:241] * Suggestion: If the host has a firewall: 1. Allow a port through the firewall 2. Specify "--port=" for "minikube mount" W0302 14:42:21.938840 33730 out.go:241] * W0302 14:42:21.943884 33730 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. 
│ │ * Please also attach the following file to the GitHub issue: │ │ * - /var/folders/hh/6v7vpsvd71g4hy4gmr61gz6h0000gn/T/minikube_mount_4de640a84fb0a5c6645bc7b68839c9f1a14e1ee4_0.log │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ Log file created at: 2022/03/02 15:55:31 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0302 15:55:31.788799 65656 out.go:297] Setting OutFile to fd 1 ... I0302 15:55:31.790577 65656 out.go:349] isatty.IsTerminal(1) = true I0302 15:55:31.790582 65656 out.go:310] Setting ErrFile to fd 2... I0302 15:55:31.790587 65656 out.go:349] isatty.IsTerminal(2) = true I0302 15:55:31.790814 65656 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0302 15:55:31.793528 65656 mustload.go:65] Loading cluster: docker-only I0302 15:55:31.814082 65656 out.go:176] 🤷 Profile "docker-only" not found. Run "minikube profile list" to view all profiles. Log file created at: 2022/03/02 16:54:10 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0302 16:54:10.151336 81211 out.go:297] Setting OutFile to fd 1 ... I0302 16:54:10.151996 81211 out.go:349] isatty.IsTerminal(1) = false I0302 16:54:10.152002 81211 out.go:310] Setting ErrFile to fd 2... I0302 16:54:10.152006 81211 out.go:349] isatty.IsTerminal(2) = false I0302 16:54:10.152135 81211 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0302 16:54:10.153200 81211 mustload.go:65] Loading cluster: docker-only I0302 16:54:10.153605 81211 config.go:176] Loaded profile config "docker-only": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0 I0302 16:54:10.154226 81211 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 16:54:10.154346 81211 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 16:54:10.168041 81211 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:55836 I0302 16:54:10.168907 81211 main.go:130] libmachine: () Calling .GetVersion I0302 16:54:10.169634 81211 main.go:130] libmachine: Using API Version 1 I0302 16:54:10.169660 81211 main.go:130] libmachine: () Calling .SetConfigRaw I0302 16:54:10.170007 81211 main.go:130] libmachine: () Calling .GetMachineName I0302 16:54:10.170143 81211 main.go:130] libmachine: (docker-only) Calling .GetState I0302 16:54:10.170291 81211 main.go:130] libmachine: (docker-only) DBG | exe=/Users/marco/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0302 16:54:10.170425 81211 main.go:130] libmachine: (docker-only) DBG | hyperkit pid from json: 74261 I0302 16:54:10.172907 81211 host.go:66] Checking if "docker-only" exists ... 
I0302 16:54:10.173375 81211 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 16:54:10.173491 81211 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 16:54:10.198111 81211 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:55838 I0302 16:54:10.199399 81211 main.go:130] libmachine: () Calling .GetVersion I0302 16:54:10.200282 81211 main.go:130] libmachine: Using API Version 1 I0302 16:54:10.200296 81211 main.go:130] libmachine: () Calling .SetConfigRaw I0302 16:54:10.201816 81211 main.go:130] libmachine: () Calling .GetMachineName I0302 16:54:10.203061 81211 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:10.204322 81211 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:10.208813 81211 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:10.230213 81211 out.go:176] * Mounting host path /Users into VM as /Users ... I0302 16:54:10.250059 81211 out.go:176] - Mount type: I0302 16:54:10.269946 81211 out.go:176] - User ID: docker I0302 16:54:10.289123 81211 out.go:176] - Group ID: docker I0302 16:54:10.307893 81211 out.go:176] - Version: 9p2000.L I0302 16:54:10.310742 81211 out.go:176] - Message Size: 262144 I0302 16:54:10.328940 81211 out.go:176] - Options: map[] I0302 16:54:10.346956 81211 out.go:176] - Bind Address: 192.168.64.1:55840 I0302 16:54:10.347977 81211 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /Users | grep /Users)" != "x" ] && sudo umount -f /Users || echo " I0302 16:54:10.366154 81211 out.go:176] * Userspace file server: I0302 16:54:10.366230 81211 main.go:130] libmachine: (docker-only) Calling .GetSSHHostname I0302 16:54:10.366462 81211 main.go:130] libmachine: (docker-only) Calling .GetSSHPort I0302 16:54:10.366648 81211 main.go:130] libmachine: (docker-only) Calling .GetSSHKeyPath I0302 16:54:10.366845 81211 main.go:130] libmachine: (docker-only) Calling .GetSSHUsername I0302 16:54:10.367046 81211 sshutil.go:53] new ssh client: &{IP:192.168.205.8 Port:22 SSHKeyPath:/Users/marco/.minikube/machines/docker-only/id_rsa Username:docker} I0302 16:54:10.367068 81211 main.go:116] stdlog: ufs.go:27 listen tcp 192.168.64.1:55840: bind: can't assign requested address I0302 16:54:10.385076 81211 out.go:176] * Userspace file server is shutdown I0302 16:54:10.413831 81211 mount.go:168] unmount for /Users ran successfully I0302 16:54:10.413853 81211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /Users" I0302 16:54:10.430829 81211 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55840,trans=tcp,version=9p2000.L 192.168.64.1 /Users" Log file created at: 2022/03/02 16:54:29 Running on machine: Donnager Binary: Built with gc go1.17.6 for darwin/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0302 16:54:29.524718 81246 out.go:297] Setting OutFile to fd 1 ... I0302 16:54:29.525076 81246 out.go:349] isatty.IsTerminal(1) = true I0302 16:54:29.525079 81246 out.go:310] Setting ErrFile to fd 2... 
I0302 16:54:29.525083 81246 out.go:349] isatty.IsTerminal(2) = true I0302 16:54:29.525173 81246 root.go:315] Updating PATH: /Users/marco/.minikube/bin I0302 16:54:29.525464 81246 mustload.go:65] Loading cluster: docker-only I0302 16:54:29.525788 81246 config.go:176] Loaded profile config "docker-only": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0 I0302 16:54:29.526170 81246 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 16:54:29.526224 81246 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 16:54:29.538154 81246 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:55854 I0302 16:54:29.538941 81246 main.go:130] libmachine: () Calling .GetVersion I0302 16:54:29.539578 81246 main.go:130] libmachine: Using API Version 1 I0302 16:54:29.539589 81246 main.go:130] libmachine: () Calling .SetConfigRaw I0302 16:54:29.539968 81246 main.go:130] libmachine: () Calling .GetMachineName I0302 16:54:29.540099 81246 main.go:130] libmachine: (docker-only) Calling .GetState I0302 16:54:29.540220 81246 main.go:130] libmachine: (docker-only) DBG | exe=/Users/marco/.minikube/bin/docker-machine-driver-hyperkit uid=0 I0302 16:54:29.540334 81246 main.go:130] libmachine: (docker-only) DBG | hyperkit pid from json: 74261 I0302 16:54:29.542480 81246 host.go:66] Checking if "docker-only" exists ... I0302 16:54:29.542898 81246 main.go:130] libmachine: Found binary path at /Users/marco/.minikube/bin/docker-machine-driver-hyperkit I0302 16:54:29.542926 81246 main.go:130] libmachine: Launching plugin server for driver hyperkit I0302 16:54:29.554588 81246 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:55856 I0302 16:54:29.555250 81246 main.go:130] libmachine: () Calling .GetVersion I0302 16:54:29.555693 81246 main.go:130] libmachine: Using API Version 1 I0302 16:54:29.555705 81246 main.go:130] libmachine: () Calling .SetConfigRaw I0302 16:54:29.556078 81246 main.go:130] libmachine: () Calling .GetMachineName I0302 16:54:29.556249 81246 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:29.556366 81246 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:29.557799 81246 main.go:130] libmachine: (docker-only) Calling .DriverName I0302 16:54:29.577872 81246 out.go:176] 📁 Mounting host path /Users into VM as /Users ... 
I0302 16:54:29.597825 81246 out.go:176] ▪ Mount type: I0302 16:54:29.618272 81246 out.go:176] ▪ User ID: docker I0302 16:54:29.638811 81246 out.go:176] ▪ Group ID: docker I0302 16:54:29.658035 81246 out.go:176] ▪ Version: 9p2000.L I0302 16:54:29.676873 81246 out.go:176] ▪ Message Size: 262144 I0302 16:54:29.700085 81246 out.go:176] ▪ Options: map[] I0302 16:54:29.718916 81246 out.go:176] ▪ Bind Address: 192.168.64.1:55858 I0302 16:54:29.719895 81246 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /Users | grep /Users)" != "x" ] && sudo umount -f /Users || echo " I0302 16:54:29.738369 81246 out.go:176] 🚀 Userspace file server: I0302 16:54:29.738420 81246 main.go:130] libmachine: (docker-only) Calling .GetSSHHostname I0302 16:54:29.738774 81246 main.go:130] libmachine: (docker-only) Calling .GetSSHPort I0302 16:54:29.738811 81246 main.go:116] stdlog: ufs.go:27 listen tcp 192.168.64.1:55858: bind: can't assign requested address I0302 16:54:29.738967 81246 main.go:130] libmachine: (docker-only) Calling .GetSSHKeyPath I0302 16:54:29.756848 81246 out.go:176] 🛑 Userspace file server is shutdown I0302 16:54:29.757313 81246 main.go:130] libmachine: (docker-only) Calling .GetSSHUsername I0302 16:54:29.757508 81246 sshutil.go:53] new ssh client: &{IP:192.168.205.8 Port:22 SSHKeyPath:/Users/marco/.minikube/machines/docker-only/id_rsa Username:docker} I0302 16:54:29.814534 81246 mount.go:168] unmount for /Users ran successfully I0302 16:54:29.814584 81246 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /Users" I0302 16:54:29.826016 81246 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55858,trans=tcp,version=9p2000.L 192.168.64.1 /Users" I0302 16:54:42.368040 81211 ssh_runner.go:235] Completed: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55840,trans=tcp,version=9p2000.L 192.168.64.1 /Users": (31.937033464s) I0302 16:54:42.394154 81211 out.go:176] W0302 16:54:42.394913 81211 out.go:241] X Exiting due to GUEST_MOUNT_COULD_NOT_CONNECT: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55840,trans=tcp,version=9p2000.L 192.168.64.1 /Users": Process exited with status 32 stdout: stderr: mount: /Users: mount(2) system call failed: Connection timed out. W0302 16:54:42.396049 81211 out.go:241] * Suggestion: If the host has a firewall: 1. Allow a port through the firewall 2. Specify "--port=" for "minikube mount" W0302 16:54:42.396083 81211 out.go:241] * W0302 16:54:42.401956 81211 out.go:241] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ * If the above advice does not help, please let us know: │ │ https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. 
│ │ * Please also attach the following file to the GitHub issue: │ │ * - /var/folders/hh/6v7vpsvd71g4hy4gmr61gz6h0000gn/T/minikube_mount_4de640a84fb0a5c6645bc7b68839c9f1a14e1ee4_0.log │ │ │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ I0302 16:55:01.824282 81246 ssh_runner.go:235] Completed: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55858,trans=tcp,version=9p2000.L 192.168.64.1 /Users": (31.998141526s) I0302 16:55:01.845905 81246 out.go:176] W0302 16:55:01.846448 81246 out.go:241] ❌ Exiting due to GUEST_MOUNT_COULD_NOT_CONNECT: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=55858,trans=tcp,version=9p2000.L 192.168.64.1 /Users": Process exited with status 32 stdout: stderr: mount: /Users: mount(2) system call failed: Connection timed out. W0302 16:55:01.847237 81246 out.go:241] 💡 Suggestion: If the host has a firewall: 1. Allow a port through the firewall 2. Specify "--port=" for "minikube mount" W0302 16:55:01.847393 81246 out.go:241] W0302 16:55:01.855730 81246 out.go:241] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ 😿 If the above advice does not help, please let us know: │ │ 👉 https://github.com/kubernetes/minikube/issues/new/choose │ │ │ │ Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │ │ Please also attach the following file to the GitHub issue: │ │ - /var/folders/hh/6v7vpsvd71g4hy4gmr61gz6h0000gn/T/minikube_mount_4de640a84fb0a5c6645bc7b68839c9f1a14e1ee4_0.log │ │ │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ```

Please note how it's referencing 192.168.64.1 here too, even though that subnet isn't configured anywhere on the host, and I'd really like to avoid passing the IP manually as mentioned in the probably related #12729.
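
For reference, the manual workaround being avoided here is to pass the bridge address explicitly via `minikube mount --ip` (the flag discussed further down in this thread). A minimal sketch, assuming the bridge100 address shown in the ifconfig output above:

```shell
# Bind the 9p file server to the actual bridge address (192.168.205.1 here)
# instead of the hardcoded 192.168.64.1 default
minikube mount --ip 192.168.205.1 /Users:/Users
```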

spowelljr commented 2 years ago

Hi @blacksd, I'm curious what version of minikube you're using. I fixed up a lot of the mounting code in December, so if you're on an older version this may already be fixed.

blacksd commented 2 years ago

@spowelljr I checked just now, it's v1.25.2.

❯ minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

This is my startup routine:

❯ declare -f minikube_docker_start
minikube_docker_start () {
        export KUBECONFIG="/Users/marco/.kube/config_minikube" 
        local -r _minikube_docker_profile="docker-only" 
        if [ ! -s "~/.minikube/profiles/${_minikube_docker_profile}/config.json" ]
        then
                rm -f ~/.minikube/config/config.json 2> /dev/null
                minikube config set profile docker-only
                minikube config set driver hyperkit
                minikube config set cpus 2
                minikube config set memory 2G
                minikube config set disk-size 10G
                minikube config set container-runtime docker
                minikube config set kubernetes-version 0.0.0
        fi
        minikube start --mount --mount-string /Users:/Users
}
❯ minikube_docker_start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
😄  [docker-only] minikube v1.25.2 on Darwin 11.6.4
    ▪ KUBECONFIG=/Users/marco/.kube/config_minikube
✨  Using the hyperkit driver based on user configuration
👍  Starting minikube without Kubernetes docker-only in cluster docker-only
🔥  Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=10240MB) ...
📁  Creating mount /Users:/Users ...
🏄  Done! minikube is ready without Kubernetes!
╭───────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                       │
│                       💡  Things to try without Kubernetes ...                        │
│                                                                                       │
│    - "minikube ssh" to SSH into minikube's node.                                      │
│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
│    - "minikube image" to build images without docker.                                 │
│                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────╯
❯ minikube ip
192.168.205.12
❯ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ mount
tmpfs on / type tmpfs (rw,relatime,size=1830292k)
devtmpfs on /dev type devtmpfs (rw,relatime,size=900288k,nr_inodes=225072,mode=755)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,size=406732k,nr_inodes=819200,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,nr_inodes=409600)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
/dev/vda1 on /mnt/vda1 type ext4 (rw,relatime)
/dev/vda1 on /var/lib/boot2docker type ext4 (rw,relatime)
/dev/vda1 on /var/lib/docker type ext4 (rw,relatime)
/dev/vda1 on /var/lib/containerd type ext4 (rw,relatime)
/dev/vda1 on /var/lib/buildkit type ext4 (rw,relatime)
/dev/vda1 on /var/lib/containers type ext4 (rw,relatime)
/dev/vda1 on /var/log type ext4 (rw,relatime)
/dev/vda1 on /var/tmp type ext4 (rw,relatime)
/dev/vda1 on /var/lib/kubelet type ext4 (rw,relatime)
/dev/vda1 on /var/lib/cni type ext4 (rw,relatime)
/dev/vda1 on /data type ext4 (rw,relatime)
/dev/vda1 on /tmp/hostpath_pv type ext4 (rw,relatime)
/dev/vda1 on /tmp/hostpath-provisioner type ext4 (rw,relatime)
/dev/vda1 on /var/lib/minikube type ext4 (rw,relatime)
/dev/vda1 on /var/lib/toolbox type ext4 (rw,relatime)
/dev/vda1 on /var/lib/minishift type ext4 (rw,relatime)
$ ls /Users/
$

This works just fine if the minikube ip happens to be in the 192.168.64.x CIDR.
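
A quick way to see the mismatch, assuming the vmnet bridge is bridge100 as in the ifconfig output earlier in the thread:

```shell
# Host side of the hyperkit bridge (192.168.205.1 in this report)
ifconfig bridge100 | awk '/inet /{print $2}'

# VM address (192.168.205.12 in this report); the mount only works when the host side
# matches the 192.168.64.1 address the userspace file server binds to by default
minikube ip
```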

spowelljr commented 2 years ago

Do you have Internet Sharing enabled? System Preferences -> Sharing

I'm wondering if this is what's causing a different inet

blacksd commented 2 years ago

Do you have Internet Sharing enabled? System Preferences -> Sharing

I'm wondering if this is what's causing a different inet

@spowelljr no, I don't have it running.

[screenshot of the Sharing preferences pane attached]

Also, this persists after the upgrade from Big Sur to Monterey. As I mentioned in the other comment, I think the different inet started appearing after I tested other hyperkit-based implementations like colima.

tsjk commented 2 years ago

minikube mount has the --ip parameter for this, no? So, you could do (assuming bash) minikube mount --ip $(ggrep -oP '(?<=\sinet\s)[0-9\.]+(?=\/)' < <(ip address show bridge100 2> /dev/null)) ... where ip is from iproute2 and ggrep is GNU grep.

blacksd commented 2 years ago

@tsjk that would probably do it, but I don't think it's something we can require from a user (installing GNU grep, detecting the actual bridge device...)
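
For what it's worth, a variant that avoids GNU grep and iproute2 and uses only tools that ship with macOS might look like the sketch below; it still hardcodes bridge100, which is exactly the detection gap being discussed.

```shell
# Read the host side of the hyperkit bridge with BSD ifconfig + awk and pass it to minikube mount
bridge_ip=$(ifconfig bridge100 2>/dev/null | awk '/inet /{print $2; exit}')
minikube mount --ip "${bridge_ip}" /Users:/Users
```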

tsjk commented 2 years ago

@blacksd Ok. I do exactly this, however. I agree that it's not something I'd expect from the average user, but then again, would the average user have an altered dhcpd scope? Sometimes it's easier to educate the user than to give confusing advice, e.g. "Disable your VPN", which the more knowledgeable user then needs to spend time decoding into "If you have something routable on the same network address as the one chosen by the VM dhcpd, you have to do something extra." Best of all, however, may be for minikube mount to detect the bridge device itself. Given the machine instance, it would have all the information needed to set the parameters correctly.

blacksd commented 2 years ago

@tsjk totally agree on the advanced detection performed by minikube mount; that would make the "issue" effectively transparent. I'd be happy to contribute to this. Also, as I pointed out a few times in the previous comments, I expect this case to come up more and more often as developers shift from Docker Desktop to other hyperkit-based solutions (evaluating different options will get you into this).
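
As a rough illustration of what such detection could look like from the host side, the sketch below picks the host address that shares the VM's /24; it assumes a /24 netmask (as in the bridge100 output earlier) and BSD ifconfig, and is only meant as a starting point, not the implementation minikube would ship.

```shell
# Hypothetical detection sketch: find the host interface address in the VM's /24
vm_prefix=$(minikube ip | cut -d. -f1-3)
host_ip=$(ifconfig | awk -v p="${vm_prefix}." '/inet / && index($2, p) == 1 {print $2; exit}')
echo "mount bind address candidate: ${host_ip:-not found}"
```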

nrb commented 2 years ago

I agree that minikube should detect this, but a workaround I've found for discovering the host IP without messing with the Hyperkit bridges is:

ip=$(minikube ssh -- cat /etc/resolv.conf | awk -F' ' '/^nameserver/ { print $2}' | tr -d '\r')
minikube mount --ip ${ip} <your options>

Using nslookup to get the IP from within the minikube VM fails because there's no valid search server: it will get you the IP, but it also returns an exit code of 1. Reading resolv.conf within the minikube VM seems to be the most consistent method I've found, as /etc/hosts still lists 192.168.64.1 host.minikube.internal for me, despite my subnet being 192.168.205.x.
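
A quick way to see the stale entry mentioned above, assuming the VM is running (this simply reads the guest's /etc/hosts over SSH):

```shell
# Typically prints "192.168.64.1 host.minikube.internal" even when the
# actual host-side bridge address is something like 192.168.205.1
minikube ssh -- grep host.minikube.internal /etc/hosts
```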

tsjk commented 2 years ago

That's nice. But you might as well ask systemd for the gateway if you want to do it from within the VM. Like

ip="`minikube ssh -- '/usr/bin/networkctl --no-pager status | awk -F '\'' '\'' '\''/^\s*Gateway:\s/ { print $2 }'\'''`"

That has the upside of assuming nothing about the host environment except being able to run an sh statement.
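
The one-liner above is hard to read because of the nested quoting; a rough equivalent with the escaping spelled out (still assuming the guest runs systemd-networkd and that networkctl status prints a Gateway: line) might look like this:

```shell
# Ask the guest for its default gateway, i.e. the host side of the hyperkit bridge,
# then use it as the mount bind address
ip=$(minikube ssh -- "networkctl --no-pager status | awk '/Gateway:/ {print \$2; exit}'" | tr -d '\r')
minikube mount --ip "${ip}" /Users:/Users
```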

sharifelgamal commented 2 years ago

We'd be happy to review a PR that detects this proactively.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

CtrlZvi commented 1 year ago

This is more far-reaching than just mounts. It also breaks DNS resolution of host.minikube.internal.

/remove-lifecycle rotten

CtrlZvi commented 1 year ago

Adding information in case someone has more time than me to work on a PR:

tsjk commented 1 year ago

I think the interface is created by com.apple.vmnet, and its IP addressing can be configured in /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist. The IP address of the interface can also change if the previous address is marked as occupied in /var/db/dhcpd_leases (I, for one, always clear that file before starting minikube).
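
To inspect the state described above, something like the following may help (the paths are the ones given in the comment; plutil -p just pretty-prints a plist):

```shell
# vmnet's saved shared-network configuration, which may pin the bridge subnet
plutil -p /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist

# DHCP leases handed out to hyperkit VMs; a stale lease can push the bridge to a different address
sudo cat /var/db/dhcpd_leases

# tsjk's approach: clear the lease file before starting minikube
sudo rm /var/db/dhcpd_leases
```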

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

medyagh commented 1 year ago

This issue should be fixed by https://github.com/kubernetes/minikube/pull/15720. Could anyone who has this problem try this binary and confirm that it's fixed? https://storage.googleapis.com/minikube-builds/15720/minikube-darwin-amd64

This PR will be included in minikube 1.29.1
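
If anyone else wants to try it, one straightforward way to fetch and run the test binary (standard curl/chmod steps; the URL is the one linked above) is:

```shell
curl -LO https://storage.googleapis.com/minikube-builds/15720/minikube-darwin-amd64
chmod +x minikube-darwin-amd64
./minikube-darwin-amd64 version
./minikube-darwin-amd64 start --mount --mount-string /Users:/Users
```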

tsjk commented 1 year ago

I'm not using minikube at the moment. The Mac mini I used it on is rather old now; I got tired of the extra work needed to keep it on a supported macOS version and switched it to Debian.

blacksd commented 1 year ago

@medyagh unfortunately the issue persists with the linked build:


❯ ./minikube-darwin-amd64 version
minikube version: v1.29.0
commit: 54702ac2ae098681775f2fb951d03d55676a1ca1
❯ rm -f ~/.minikube/config/config.json 2> /dev/null
minikube config set profile docker-only
minikube config set driver hyperkit
minikube config set cpus 2
minikube config set memory 2G
minikube config set disk-size 10G
minikube config set container-runtime docker
minikube config set kubernetes-version 0.0.0

❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❗  These changes will take effect upon a minikube delete and then a minikube start
❯ ./minikube-darwin-amd64 delete
🔥  Deleting "docker-only" in hyperkit ...
💀  Removed all traces of the "docker-only" cluster.
❯ ./minikube-darwin-amd64 start --mount --mount-string /Users:/Users
😄  minikube v1.29.0 on Darwin 13.2
    ▪ KUBECONFIG=/Users/marco/.kube/config_minikube
✨  Using the hyperkit driver based on user configuration
👍  Starting minikube without Kubernetes in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=10240MB) ...
🐳  Preparing Docker 20.10.23 ...
📁  Creating mount /Users:/Users ...
🏄  Done! minikube is ready without Kubernetes!
╭───────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                       │
│                       💡  Things to try without Kubernetes ...                        │
│                                                                                       │
│    - "minikube ssh" to SSH into minikube's node.                                      │
│    - "minikube docker-env" to point your docker-cli to the docker inside minikube.    │
│    - "minikube image" to build images without docker.                                 │
│                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────╯
❯ ./minikube-darwin-amd64 ip
192.168.105.5
❯ ./minikube-darwin-amd64 ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ mount | grep Users
$ ls /Users/
$ logout

p2c2e commented 1 year ago

@blacksd - I was the one who provided the fix. I'd like to validate a few things.

  1. Are you on Ventura? When you downloaded the binary from the link and executed it, were you prompted to "Allow" incoming connections by the firewall? Firewall permissions could be the problem, and Ventura made things worse. To double-check this, try the following and paste the results (wait for the timeout to happen; don't kill it):
    • First start a profile (preferably without the mount string).
    • Run ./minikube-darwin-amd64 mount /Users:/Users -v7 and see if it shows 192.168.105.1: in the log messages (this is the fix). If it eventually times out, there is a firewall issue (see the sketch after this comment); in that case you may see a message like the one below:
      
      mount: /Users: mount(2) system call failed: Connection timed out.

💡 Suggestion:

If the host has a firewall:

1. Allow a port through the firewall
2. Specify "--port=<port_number>" for "minikube mount"

2. Unrelated to the issue: I see that you deleted config.json and set some defaults, but I hope you were not trying to launch the 'docker-only' profile; the profile that was actually started was the default 'minikube'. I hope this was intentional.
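
Regarding the firewall check in item 1 above: if the macOS application firewall turns out to be blocking the downloaded binary, one possible way to allow it from the command line is sketched below. This uses the built-in socketfilterfw tool; the binary path is just an example and assumes you run it from the download directory.

```shell
# Register the test binary with the application firewall and unblock its incoming connections
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add "$(pwd)/minikube-darwin-amd64"
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblockapp "$(pwd)/minikube-darwin-amd64"
```
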
k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/11510#issuecomment-1479786075):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.