kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

tunnel with Docker driver on wsl2: Recv failure: Connection reset by peer #9482

Closed: magnus-larsson closed this issue 3 years ago

magnus-larsson commented 3 years ago

Summary

The minikube tunnel command seems to open the expected port on localhost, but making a request fails with Connection reset by peer.

Environment

Steps to reproduce the issue (a consolidated script follows the list):

  1. Create a minikube instance
    version=v1.19.2
    driver=docker
    minikube start --kubernetes-version=$version --driver=$driver
  2. Create a deployment and a load balanced service
    kubectl create deployment balanced --image=k8s.gcr.io/echoserver:1.4  
    kubectl expose deployment balanced --type=LoadBalancer --port=8000
  3. Verify that the port is not yet open (command returns nothing)
    netstat -lntu | grep 127.0.0.1:8000
  4. Start tunnel
    minikube tunnel --alsologtostderr --v=1
  5. In another terminal, check the external IP of the service (command returns EXTERNAL-IP = 127.0.0.1)
    kubectl get svc balanced
  6. Verify that the port is open (command returns LISTEN)
    netstat -lntu | grep 127.0.0.1:8000
  7. Send a request (command fails with curl: (56) Recv failure: Connection reset by peer)
    curl http://127.0.0.1:8000
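
For convenience, the numbered steps above can be collected into a single script. This is only a sketch of the same reproduction, not part of the original report: the version and driver values are the ones from step 1, and the tunnel is backgrounded so the remaining commands can run in the same shell.

#!/usr/bin/env bash
# Sketch: reproduce the "Connection reset by peer" behaviour described above.
set -euo pipefail

version=v1.19.2
driver=docker

# Step 1: create the minikube instance
minikube start --kubernetes-version=$version --driver=$driver

# Step 2: create a deployment and a load balanced service
kubectl create deployment balanced --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment balanced --type=LoadBalancer --port=8000

# Steps 3-4: confirm the port is not open yet, then start the tunnel in the background
netstat -lntu | grep 127.0.0.1:8000 || true
minikube tunnel --alsologtostderr --v=1 &

# Give the tunnel a moment to patch the service with an external IP
sleep 10

# Steps 5-7: check the service, confirm the listener, send the request
kubectl get svc balanced
netstat -lntu | grep 127.0.0.1:8000
curl http://127.0.0.1:8000   # fails with: curl: (56) Recv failure: Connection reset by peer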

    Full output of failed command (curl http://127.0.0.1:8000):

curl: (56) Recv failure: Connection reset by peer

Output from minikube tunnel:

I1017 09:56:13.989002 14220 mustload.go:66] Loading cluster: minikube
I1017 09:56:13.991857 14220 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1017 09:56:14.678255 14220 host.go:65] Checking if "minikube" exists ...
I1017 09:56:14.678703 14220 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1017 09:56:15.265915 14220 api_server.go:146] Checking apiserver status ...
I1017 09:56:15.266024 14220 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1017 09:56:15.266090 14220 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 09:56:16.166065 14220 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32791 SSHKeyPath:/home/magnus/.minikube/machines/minikube/id_rsa Username:docker}
I1017 09:56:16.313106 14220 ssh_runner.go:188] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.0470665s)
I1017 09:56:16.313342 14220 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1811/cgroup
I1017 09:56:16.331530 14220 api_server.go:162] apiserver freezer: "20:freezer:/docker/998eb37df12ff1165d8dcf2b489a6bbee53a790d21d85bde9ec4f444fd4aeb33/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/f6ac8cab74f09f03840008ded55ad716c81154a6ea3ee6ee7a01a1b4814df029"
I1017 09:56:16.332396 14220 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/998eb37df12ff1165d8dcf2b489a6bbee53a790d21d85bde9ec4f444fd4aeb33/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/f6ac8cab74f09f03840008ded55ad716c81154a6ea3ee6ee7a01a1b4814df029/freezer.state
I1017 09:56:16.349762 14220 api_server.go:184] freezer state: "THAWED"
I1017 09:56:16.350024 14220 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32788/healthz ...
I1017 09:56:16.361090 14220 api_server.go:241] https://127.0.0.1:32788/healthz returned 200: ok
I1017 09:56:16.361415 14220 tunnel.go:57] Checking for tunnels to cleanup...
I1017 09:56:16.363112 14220 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:32788", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/magnus/.minikube/profiles/minikube/client.crt", KeyFile:"/home/magnus/.minikube/profiles/minikube/client.key", CAFile:"/home/magnus/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16cdfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I1017 09:56:16.365739 14220 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1017 09:56:17.254039 14220 out.go:109] πŸƒ Starting tunnel for service balanced.
πŸƒ Starting tunnel for service balanced.
I1017 09:56:17.259537 14220 loadbalancer_patcher.go:121] Patched balanced with IP 127.0.0.1
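
As a cross-check (these commands are not from the original report, just a hedged way to narrow the problem down), the service can be reached without going through minikube tunnel at all, and the ports that Docker actually published for the minikube container can be inspected directly. If the port-forward path answers while the tunnel path resets the connection, the problem is specific to the tunnel on the Docker driver under WSL2.

# Does the service have ready endpoints inside the cluster?
kubectl get endpoints balanced

# Which host ports has Docker published for the minikube container?
docker port minikube

# Bypass the tunnel with a port-forward straight to the service
kubectl port-forward svc/balanced 8000:8000 &
curl http://127.0.0.1:8000

# Or let minikube print a URL for the service that is reachable from the host
minikube service balanced --url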

Full output of minikube start command used, if not already included:

πŸ˜„ minikube v1.14.0 on Ubuntu 18.04
✨ Using the docker driver based on user configuration
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”₯ Creating docker container (CPUs=2, Memory=4700MB) ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
πŸ”Ž Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
πŸ„ Done! kubectl is now configured to use "minikube" by default

Optional: Full output of minikube logs command:

-- Logs begin at Sat 2020-10-17 07:51:33 UTC, end at Sat 2020-10-17 08:04:30 UTC. -- Oct 17 07:51:33 minikube systemd[1]: Starting Docker Application Container Engine... Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.461579000Z" level=info msg="Starting up" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.468133400Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.468221700Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.468349400Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.468364400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.507897200Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.508005100Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.508022900Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.508029900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.846226600Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925164700Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925304400Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925318200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925323900Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925329300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925339400Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.925739100Z" level=info msg="Loading containers: start." Oct 17 07:51:33 minikube dockerd[151]: time="2020-10-17T07:51:33.938317000Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.128-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.128-microsoft-standard\n, error: exit status 1" Oct 17 07:51:34 minikube dockerd[151]: time="2020-10-17T07:51:34.083750300Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Oct 17 07:51:34 minikube dockerd[151]: time="2020-10-17T07:51:34.172629800Z" level=info msg="Loading containers: done." 
Oct 17 07:51:34 minikube dockerd[151]: time="2020-10-17T07:51:34.312645900Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Oct 17 07:51:34 minikube dockerd[151]: time="2020-10-17T07:51:34.312913100Z" level=info msg="Daemon has completed initialization" Oct 17 07:51:34 minikube dockerd[151]: time="2020-10-17T07:51:34.558826600Z" level=info msg="API listen on /run/docker.sock" Oct 17 07:51:34 minikube systemd[1]: Started Docker Application Container Engine. Oct 17 07:51:49 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Oct 17 07:51:49 minikube systemd[1]: Stopping Docker Application Container Engine... Oct 17 07:51:49 minikube dockerd[151]: time="2020-10-17T07:51:49.642920900Z" level=info msg="Processing signal 'terminated'" Oct 17 07:51:49 minikube dockerd[151]: time="2020-10-17T07:51:49.645059700Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Oct 17 07:51:49 minikube dockerd[151]: time="2020-10-17T07:51:49.645468400Z" level=info msg="Daemon shutdown complete" Oct 17 07:51:49 minikube systemd[1]: docker.service: Succeeded. Oct 17 07:51:49 minikube systemd[1]: Stopped Docker Application Container Engine. Oct 17 07:51:49 minikube systemd[1]: Starting Docker Application Container Engine... Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.747029500Z" level=info msg="Starting up" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.750249500Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.750372900Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.750395200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.750404000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.754008200Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.754040300Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.754052200Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.754057700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.769219200Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780798500Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780895200Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780904500Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780908900Z" level=warning msg="Your kernel does not support cgroup blkio 
throttle.write_bps_device" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780913200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.780917200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.781210100Z" level=info msg="Loading containers: start." Oct 17 07:51:49 minikube dockerd[529]: time="2020-10-17T07:51:49.783675100Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: WARNING: Module bridge not found in directory /lib/modules/4.19.128-microsoft-standard\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.19.128-microsoft-standard\n, error: exit status 1" Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.033102100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.103298100Z" level=info msg="Loading containers: done." Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.141701900Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.141865100Z" level=info msg="Daemon has completed initialization" Oct 17 07:51:50 minikube systemd[1]: Started Docker Application Container Engine. Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.174729200Z" level=info msg="API listen on /var/run/docker.sock" Oct 17 07:51:50 minikube dockerd[529]: time="2020-10-17T07:51:50.174881000Z" level=info msg="API listen on [::]:2376" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 1181d1ebdd58b k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb 8 minutes ago Running echoserver 0 8a6b679b88747 bca2678be5e0b bad58561c4be7 11 minutes ago Running storage-provisioner 0 6ac0876b5c4be 744fd193e9916 bfe3a36ebd252 12 minutes ago Running coredns 0 9027a4b429831 bd5fb2a9f0584 d373dd5a8593a 12 minutes ago Running kube-proxy 0 f0c39b0e97d54 b3f3a7b3e6d8f 8603821e1a7a5 12 minutes ago Running kube-controller-manager 0 615e0b10cd48f 7d3f4207c57c6 2f32d66b884f8 12 minutes ago Running kube-scheduler 0 9b9dc59762d22 980df55052af3 0369cf4303ffd 12 minutes ago Running etcd 0 7c451c373dc54 f6ac8cab74f09 607331163122e 12 minutes ago Running kube-apiserver 0 dca449d8b0b6e ==> coredns [744fd193e991] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=b09ee50ec047410326a85435f4d99026f9c4f5c4 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_10_17T09_52_14_0700 minikube.k8s.io/version=v1.14.0 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 17 Oct 2020 07:52:10 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 17 Oct 2020 08:04:29 +0000 Conditions: 
Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 17 Oct 2020 08:01:24 +0000 Sat, 17 Oct 2020 07:52:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 17 Oct 2020 08:01:24 +0000 Sat, 17 Oct 2020 07:52:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 17 Oct 2020 08:01:24 +0000 Sat, 17 Oct 2020 07:52:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 17 Oct 2020 08:01:24 +0000 Sat, 17 Oct 2020 07:52:28 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 6 ephemeral-storage: 263174212Ki hugepages-2Mi: 0 memory: 19654256Ki pods: 110 Allocatable: cpu: 6 ephemeral-storage: 263174212Ki hugepages-2Mi: 0 memory: 19654256Ki pods: 110 System Info: Machine ID: d6b9286a0a014d538dfffe146e36fa61 System UUID: d6b9286a0a014d538dfffe146e36fa61 Boot ID: fd76d799-8bce-470f-9cb7-a46f6e9cc392 Kernel Version: 4.19.128-microsoft-standard OS Image: Ubuntu 20.04 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.19.2 Kube-Proxy Version: v1.19.2 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default balanced-5744b548b4-kblsz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m16s kube-system coredns-f9fd979d6-2wm6f 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 12m kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m kube-system kube-apiserver-minikube 250m (4%) 0 (0%) 0 (0%) 0 (0%) 12m kube-system kube-controller-manager-minikube 200m (3%) 0 (0%) 0 (0%) 0 (0%) 12m kube-system kube-proxy-shg2l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m kube-system kube-scheduler-minikube 100m (1%) 0 (0%) 0 (0%) 0 (0%) 12m kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (10%) 0 (0%) memory 70Mi (0%) 170Mi (0%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 12m kubelet Starting kubelet. Normal NodeHasSufficientMemory 12m (x3 over 12m) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 12m (x3 over 12m) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 12m (x4 over 12m) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods Normal Starting 12m kubelet Starting kubelet. Normal NodeHasSufficientMemory 12m kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 12m kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 12m kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeNotReady 12m kubelet Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 12m kubelet Updated Node Allocatable limit across pods Normal Starting 12m kube-proxy Starting kube-proxy. 
Normal NodeReady 12m kubelet Node minikube status is now: NodeReady ==> dmesg <== [ +0.000026] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.005525] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000024] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +4.018271] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000020] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.005504] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000009] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +4.025959] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000138] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000006] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.004871] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000079] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000006] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +4.018193] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000020] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.004049] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000007] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.132525] WSL2: Performing memory compaction. 
[ +2.889256] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000027] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.005229] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000020] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000005] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +4.031688] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000016] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000003] init: (110) ERROR: StartHostListener:356: write failed 32 [ +1.009316] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000029] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 [ +0.000004] init: (110) ERROR: StartHostListener:356: write failed 32 ==> etcd [980df55052af] <== 2020-10-17 07:55:04.898245 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:55:14.899399 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:55:25.112047 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:55:35.268262 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:55:44.976220 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:55:54.899336 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:04.899513 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:14.899459 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:24.898979 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:34.899292 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:44.898740 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:56:54.899614 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:04.899185 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:14.899593 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:24.899492 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:34.899852 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:44.898579 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:57:54.899073 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:04.898827 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:14.900961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:24.898628 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:34.899873 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:44.900306 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:58:54.899001 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:59:04.898576 I | etcdserver/api/etcdhttp: /health OK 
(status code 200) 2020-10-17 07:59:14.898625 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:59:24.898971 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:59:34.899116 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:59:44.901883 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 07:59:54.899491 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:04.899630 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:14.898550 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:24.898635 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:34.898852 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:44.899350 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:00:54.899227 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:04.899181 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:14.898643 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:24.898979 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:34.899115 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:44.902411 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:01:54.898317 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:04.899206 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:05.683170 I | mvcc: store.index: compact 639 2020-10-17 08:02:05.684348 I | mvcc: finished scheduled compaction at 639 (took 906.1Β΅s) 2020-10-17 08:02:14.898624 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:24.899623 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:34.898664 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:44.898282 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:02:54.899104 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:04.898860 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:14.899218 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:24.898594 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:34.898633 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:44.899122 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:03:54.993207 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:04:01.536797 W | etcdserver: read-only range request "key:\"/registry/services/specs\" range_end:\"/registry/services/spect\" count_only:true " with result "range_response_count:0 size:7" took too long (236.4082ms) to execute 2020-10-17 08:04:04.904276 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:04:14.899424 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-17 08:04:24.901436 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> kernel <== 08:04:32 up 2:10, 0 users, load average: 1.64, 1.11, 1.53 Linux minikube 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04 LTS" ==> kube-apiserver [f6ac8cab74f0] <== I1017 07:52:16.871288 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I1017 07:52:21.600104 1 controller.go:606] quota admission added 
evaluator for: controllerrevisions.apps I1017 07:52:21.609400 1 controller.go:606] quota admission added evaluator for: replicasets.apps I1017 07:52:49.092392 1 client.go:360] parsed scheme: "passthrough" I1017 07:52:49.092631 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:52:49.092655 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:53:24.656313 1 client.go:360] parsed scheme: "passthrough" I1017 07:53:24.656602 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:53:24.656634 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:54:07.626251 1 client.go:360] parsed scheme: "passthrough" I1017 07:54:07.626533 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:54:07.626564 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:54:50.742868 1 client.go:360] parsed scheme: "passthrough" I1017 07:54:50.743002 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:54:50.743026 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:55:25.289606 1 client.go:360] parsed scheme: "passthrough" I1017 07:55:25.289843 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:55:25.289860 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:56:06.587111 1 client.go:360] parsed scheme: "passthrough" I1017 07:56:06.587238 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:56:06.587251 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:56:40.953884 1 client.go:360] parsed scheme: "passthrough" I1017 07:56:40.954330 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:56:40.954380 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:57:18.315744 1 client.go:360] parsed scheme: "passthrough" I1017 07:57:18.316015 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:57:18.316041 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:57:59.442524 1 client.go:360] parsed scheme: "passthrough" I1017 07:57:59.442827 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:57:59.442853 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:58:36.424624 1 client.go:360] parsed scheme: "passthrough" I1017 07:58:36.424825 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:58:36.424843 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:59:07.678652 1 client.go:360] parsed scheme: "passthrough" I1017 07:59:07.678816 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:59:07.678834 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 07:59:46.670999 1 client.go:360] parsed scheme: "passthrough" I1017 07:59:46.671276 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 07:59:46.671301 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:00:29.505896 1 client.go:360] parsed scheme: "passthrough" I1017 08:00:29.506373 1 passthrough.go:48] ccResolverWrapper: 
sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:00:29.506408 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:01:10.771558 1 client.go:360] parsed scheme: "passthrough" I1017 08:01:10.771830 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:01:10.771928 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:01:55.372554 1 client.go:360] parsed scheme: "passthrough" I1017 08:01:55.372850 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:01:55.372871 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:02:34.983393 1 client.go:360] parsed scheme: "passthrough" I1017 08:02:34.983578 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:02:34.983598 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:03:05.078413 1 client.go:360] parsed scheme: "passthrough" I1017 08:03:05.078613 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:03:05.078630 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:03:42.845529 1 client.go:360] parsed scheme: "passthrough" I1017 08:03:42.845725 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:03:42.845741 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1017 08:04:24.070574 1 client.go:360] parsed scheme: "passthrough" I1017 08:04:24.078065 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1017 08:04:24.079139 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-controller-manager [b3f3a7b3e6d8] <== I1017 07:52:20.779667 1 request.go:645] Throttling request took 1.0097746s, request: GET:https://192.168.49.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s I1017 07:52:20.996079 1 controllermanager.go:549] Started "horizontalpodautoscaling" I1017 07:52:20.996127 1 horizontal.go:169] Starting HPA controller I1017 07:52:20.996221 1 shared_informer.go:240] Waiting for caches to sync for HPA I1017 07:52:21.248372 1 controllermanager.go:549] Started "bootstrapsigner" I1017 07:52:21.248549 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer I1017 07:52:21.497770 1 controllermanager.go:549] Started "persistentvolume-binder" I1017 07:52:21.497805 1 pv_controller_base.go:303] Starting persistent volume controller I1017 07:52:21.498573 1 shared_informer.go:240] Waiting for caches to sync for persistent volume I1017 07:52:21.500143 1 shared_informer.go:240] Waiting for caches to sync for resource quota W1017 07:52:21.509788 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1017 07:52:21.510113 1 shared_informer.go:247] Caches are synced for PV protection I1017 07:52:21.523625 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1017 07:52:21.524035 1 shared_informer.go:247] Caches are synced for GC I1017 07:52:21.545626 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1017 07:52:21.568844 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1017 07:52:21.570189 1 shared_informer.go:247] Caches are synced for ReplicationController I1017 07:52:21.570345 1 shared_informer.go:247] 
Caches are synced for stateful set I1017 07:52:21.570370 1 shared_informer.go:247] Caches are synced for attach detach I1017 07:52:21.575201 1 shared_informer.go:247] Caches are synced for endpoint I1017 07:52:21.583026 1 shared_informer.go:247] Caches are synced for TTL I1017 07:52:21.586056 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1017 07:52:21.586200 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1017 07:52:21.586273 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1017 07:52:21.586319 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1017 07:52:21.586336 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1017 07:52:21.593489 1 shared_informer.go:247] Caches are synced for daemon sets I1017 07:52:21.595097 1 shared_informer.go:247] Caches are synced for job I1017 07:52:21.596513 1 shared_informer.go:247] Caches are synced for HPA I1017 07:52:21.597335 1 shared_informer.go:247] Caches are synced for PVC protection I1017 07:52:21.597452 1 shared_informer.go:247] Caches are synced for endpoint_slice I1017 07:52:21.598058 1 shared_informer.go:247] Caches are synced for service account I1017 07:52:21.598423 1 shared_informer.go:247] Caches are synced for ReplicaSet I1017 07:52:21.604443 1 shared_informer.go:247] Caches are synced for deployment I1017 07:52:21.607874 1 shared_informer.go:247] Caches are synced for namespace I1017 07:52:21.608223 1 shared_informer.go:247] Caches are synced for disruption I1017 07:52:21.608355 1 disruption.go:339] Sending events to api server. I1017 07:52:21.620386 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-shg2l" I1017 07:52:21.620539 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1" I1017 07:52:21.688064 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-2wm6f" I1017 07:52:21.695214 1 shared_informer.go:247] Caches are synced for taint I1017 07:52:21.695637 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1017 07:52:21.695730 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I1017 07:52:21.695774 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
I1017 07:52:21.696331 1 taint_manager.go:187] Starting NoExecuteTaintManager I1017 07:52:21.697729 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1017 07:52:21.699753 1 shared_informer.go:247] Caches are synced for persistent volume I1017 07:52:21.769097 1 shared_informer.go:247] Caches are synced for expand I1017 07:52:21.771869 1 shared_informer.go:247] Caches are synced for resource quota I1017 07:52:21.778754 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I1017 07:52:21.778804 1 event.go:291] "Event occurred" object="kube-system/etcd-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready" I1017 07:52:21.800310 1 shared_informer.go:247] Caches are synced for resource quota I1017 07:52:21.853203 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1017 07:52:22.254182 1 shared_informer.go:247] Caches are synced for garbage collector I1017 07:52:22.294871 1 shared_informer.go:247] Caches are synced for garbage collector I1017 07:52:22.295032 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1017 07:52:31.831125 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. I1017 07:52:31.832532 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6-2wm6f" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-f9fd979d6-2wm6f" I1017 07:55:15.911562 1 event.go:291] "Event occurred" object="default/balanced" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set balanced-5744b548b4 to 1" I1017 07:55:16.009904 1 event.go:291] "Event occurred" object="default/balanced-5744b548b4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: balanced-5744b548b4-kblsz" ==> kube-proxy [bd5fb2a9f058] <== I1017 07:52:22.926906 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1017 07:52:22.927050 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1017 07:52:22.947842 1 proxier.go:639] Failed to read file /lib/modules/4.19.128-microsoft-standard/modules.builtin with error open /lib/modules/4.19.128-microsoft-standard/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.950272 1 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.952158 1 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.953832 1 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.969981 1 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. 
You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.972380 1 proxier.go:649] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules W1017 07:52:22.972756 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1017 07:52:22.972999 1 server_others.go:186] Using iptables Proxier. W1017 07:52:22.973013 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1017 07:52:22.973016 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1017 07:52:22.973415 1 server.go:650] Version: v1.19.2 I1017 07:52:22.974039 1 conntrack.go:52] Setting nf_conntrack_max to 196608 I1017 07:52:22.974302 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1017 07:52:22.974379 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1017 07:52:22.974825 1 config.go:315] Starting service config controller I1017 07:52:22.974851 1 shared_informer.go:240] Waiting for caches to sync for service config I1017 07:52:22.975305 1 config.go:224] Starting endpoint slice config controller I1017 07:52:22.975331 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1017 07:52:23.075425 1 shared_informer.go:247] Caches are synced for service config I1017 07:52:23.075679 1 shared_informer.go:247] Caches are synced for endpoint slice config ==> kube-scheduler [7d3f4207c57c] <== I1017 07:52:04.221699 1 registry.go:173] Registering SelectorSpread plugin I1017 07:52:04.221761 1 registry.go:173] Registering SelectorSpread plugin I1017 07:52:05.378446 1 serving.go:331] Generated self-signed cert in-memory W1017 07:52:10.901778 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1017 07:52:10.902388 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1017 07:52:10.902462 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1017 07:52:10.902631 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1017 07:52:10.982752 1 registry.go:173] Registering SelectorSpread plugin I1017 07:52:10.982961 1 registry.go:173] Registering SelectorSpread plugin I1017 07:52:10.987387 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1017 07:52:10.987442 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1017 07:52:10.987463 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1017 07:52:10.987554 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1017 07:52:10.989245 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1017 07:52:10.990367 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1017 07:52:10.990479 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1017 07:52:10.990567 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1017 07:52:10.990658 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1017 07:52:10.993692 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1017 07:52:10.993933 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1017 07:52:10.993699 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1017 07:52:10.994483 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1017 07:52:10.994669 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to 
list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1017 07:52:10.994895 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1017 07:52:10.994697 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1017 07:52:10.998103 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1017 07:52:11.854534 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1017 07:52:11.862512 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1017 07:52:11.969902 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1017 07:52:12.014665 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1017 07:52:12.069299 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1017 07:52:12.089079 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1017 07:52:12.098966 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1017 07:52:12.169817 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1017 07:52:12.171216 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: 
failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1017 07:52:12.171369 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1017 07:52:12.469405 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1017 07:52:12.569642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1017 07:52:12.602926 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope I1017 07:52:15.087936 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Sat 2020-10-17 07:51:33 UTC, end at Sat 2020-10-17 08:04:33 UTC. -- Oct 17 07:52:16 minikube kubelet[2422]: I1017 07:52:16.891616 2422 kubelet.go:1741] Starting kubelet main sync loop. Oct 17 07:52:16 minikube kubelet[2422]: E1017 07:52:16.891748 2422 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] Oct 17 07:52:17 minikube kubelet[2422]: E1017 07:52:17.287660 2422 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet Oct 17 07:52:17 minikube kubelet[2422]: I1017 07:52:17.422398 2422 kubelet_node_status.go:70] Attempting to register node minikube Oct 17 07:52:17 minikube kubelet[2422]: E1017 07:52:17.487813 2422 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet Oct 17 07:52:17 minikube kubelet[2422]: I1017 07:52:17.491042 2422 kubelet_node_status.go:108] Node minikube was previously registered Oct 17 07:52:17 minikube kubelet[2422]: I1017 07:52:17.491291 2422 kubelet_node_status.go:73] Successfully registered node minikube Oct 17 07:52:17 minikube kubelet[2422]: I1017 07:52:17.610003 2422 setters.go:555] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-17 07:52:17.6098559 +0000 UTC m=+2.983704201 LastTransitionTime:2020-10-17 07:52:17.6098559 +0000 UTC m=+2.983704201 Reason:KubeletNotReady Message:container runtime status check may not have completed yet} Oct 17 07:52:17 minikube kubelet[2422]: E1017 07:52:17.894796 2422 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093543 2422 cpu_manager.go:184] [cpumanager] starting with none policy Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093562 2422 cpu_manager.go:185] [cpumanager] reconciling every 10s Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093599 2422 state_mem.go:36] [cpumanager] initializing new 
in-memory state store Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093839 2422 state_mem.go:88] [cpumanager] updated default cpuset: "" Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093851 2422 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]" Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.093875 2422 policy_none.go:43] [cpumanager] none policy: Start Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.097722 2422 plugin_manager.go:114] Starting Kubelet Plugin Manager Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.695648 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.703017 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.709569 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.725741 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808677 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-certs") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808722 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d186e6390814d4dd7e770f47c08e98a2-etcd-data") pod "etcd-minikube" (UID: "d186e6390814d4dd7e770f47c08e98a2") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808741 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-ca-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808775 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808791 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-k8s-certs") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808806 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808822 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/f7c3d51df5e2ce4e433b64661ac4503c-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "f7c3d51df5e2ce4e433b64661ac4503c") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.808831 2422 reconciler.go:157] Reconciler: start to sync state Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909634 
2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-ca-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909742 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-kubeconfig") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909764 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909786 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909797 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff7d12f9e4f14e202a85a7c5534a3129-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff7d12f9e4f14e202a85a7c5534a3129") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909831 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909841 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:18 minikube kubelet[2422]: I1017 07:52:18.909878 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/dcc127c185c80a61d90d8e659e768641-k8s-certs") pod "kube-controller-manager-minikube" (UID: "dcc127c185c80a61d90d8e659e768641") Oct 17 07:52:21 minikube kubelet[2422]: I1017 07:52:21.672431 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:21 minikube kubelet[2422]: I1017 07:52:21.788137 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/6f28a6ee-363e-4d75-a9cb-cceec7e26362-lib-modules") pod "kube-proxy-shg2l" (UID: "6f28a6ee-363e-4d75-a9cb-cceec7e26362") Oct 17 07:52:21 minikube kubelet[2422]: I1017 07:52:21.788198 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6f28a6ee-363e-4d75-a9cb-cceec7e26362-kube-proxy") pod "kube-proxy-shg2l" (UID: "6f28a6ee-363e-4d75-a9cb-cceec7e26362") Oct 17 07:52:21 minikube kubelet[2422]: I1017 07:52:21.788229 2422 reconciler.go:224] 
operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/6f28a6ee-363e-4d75-a9cb-cceec7e26362-xtables-lock") pod "kube-proxy-shg2l" (UID: "6f28a6ee-363e-4d75-a9cb-cceec7e26362") Oct 17 07:52:21 minikube kubelet[2422]: I1017 07:52:21.788248 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mlbng" (UniqueName: "kubernetes.io/secret/6f28a6ee-363e-4d75-a9cb-cceec7e26362-kube-proxy-token-mlbng") pod "kube-proxy-shg2l" (UID: "6f28a6ee-363e-4d75-a9cb-cceec7e26362") Oct 17 07:52:28 minikube kubelet[2422]: I1017 07:52:28.841148 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:28 minikube kubelet[2422]: I1017 07:52:28.885045 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vxjzd" (UniqueName: "kubernetes.io/secret/bc7e33bf-eb05-4fad-9418-8aab900a7aef-coredns-token-vxjzd") pod "coredns-f9fd979d6-2wm6f" (UID: "bc7e33bf-eb05-4fad-9418-8aab900a7aef") Oct 17 07:52:28 minikube kubelet[2422]: I1017 07:52:28.885149 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bc7e33bf-eb05-4fad-9418-8aab900a7aef-config-volume") pod "coredns-f9fd979d6-2wm6f" (UID: "bc7e33bf-eb05-4fad-9418-8aab900a7aef") Oct 17 07:52:29 minikube kubelet[2422]: W1017 07:52:29.647463 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-2wm6f through plugin: invalid network status for Oct 17 07:52:30 minikube kubelet[2422]: W1017 07:52:30.277827 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-2wm6f through plugin: invalid network status for Oct 17 07:52:31 minikube kubelet[2422]: W1017 07:52:31.298512 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-2wm6f through plugin: invalid network status for Oct 17 07:52:38 minikube kubelet[2422]: I1017 07:52:38.682393 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:52:38 minikube kubelet[2422]: I1017 07:52:38.779624 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/ba94a85f-81d6-4c59-995a-1e81e5019181-tmp") pod "storage-provisioner" (UID: "ba94a85f-81d6-4c59-995a-1e81e5019181") Oct 17 07:52:38 minikube kubelet[2422]: I1017 07:52:38.780215 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-rqbxt" (UniqueName: "kubernetes.io/secret/ba94a85f-81d6-4c59-995a-1e81e5019181-storage-provisioner-token-rqbxt") pod "storage-provisioner" (UID: "ba94a85f-81d6-4c59-995a-1e81e5019181") Oct 17 07:55:16 minikube kubelet[2422]: I1017 07:55:16.073817 2422 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 17 07:55:16 minikube kubelet[2422]: I1017 07:55:16.269834 2422 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-wldbj" (UniqueName: "kubernetes.io/secret/ea91a0e9-0f63-4402-b636-89915f8d79eb-default-token-wldbj") pod "balanced-5744b548b4-kblsz" (UID: "ea91a0e9-0f63-4402-b636-89915f8d79eb") Oct 17 07:55:17 minikube kubelet[2422]: W1017 07:55:17.199722 2422 pod_container_deletor.go:79] Container 
"8a6b679b88747c55e7678d5abcad4c10ce5d7b19039db10eea611c0b43261ec0" not found in pod's containers Oct 17 07:55:17 minikube kubelet[2422]: W1017 07:55:17.200231 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/balanced-5744b548b4-kblsz through plugin: invalid network status for Oct 17 07:55:18 minikube kubelet[2422]: W1017 07:55:18.214829 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/balanced-5744b548b4-kblsz through plugin: invalid network status for Oct 17 07:55:49 minikube kubelet[2422]: W1017 07:55:49.193294 2422 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/balanced-5744b548b4-kblsz through plugin: invalid network status for Oct 17 07:57:18 minikube kubelet[2422]: W1017 07:57:18.082380 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology Oct 17 07:57:18 minikube kubelet[2422]: W1017 07:57:18.083050 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Oct 17 08:02:18 minikube kubelet[2422]: W1017 08:02:18.080659 2422 sysinfo.go:203] Nodes topology is not available, providing CPU topology Oct 17 08:02:18 minikube kubelet[2422]: W1017 08:02:18.081188 2422 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory ==> storage-provisioner [bca2678be5e0] <== I1017 07:52:39.428846 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I1017 07:52:39.470518 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1017 07:52:39.471024 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_87ecdebe-5bad-4f42-86d5-5bba5f10b609! I1017 07:52:39.471064 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc1fcab4-43c5-492b-8538-8fae0d12ec0c", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_87ecdebe-5bad-4f42-86d5-5bba5f10b609 became leader I1017 07:52:39.572040 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_87ecdebe-5bad-4f42-86d5-5bba5f10b609!
devniel commented 3 years ago

In my case, I don't get the default output, so it seems that it isn't working. When a service exists and I run minikube tunnel again, there is output related to the active service, but not the default output described in https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel (see the sketch after the log below).

➜  ~ minikube tunnel --alsologtostderr --v=1
W1018 23:25:10.215577   10338 root.go:252] Error reading config file at /home/devniel/.minikube/config/config.json: open /home/devniel/.minikube/config/config.json: no such file or directory
I1018 23:25:10.215723   10338 mustload.go:66] Loading cluster: minikube
I1018 23:25:10.216239   10338 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}}
I1018 23:25:10.246651   10338 host.go:65] Checking if "minikube" exists ...
I1018 23:25:10.246883   10338 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I1018 23:25:10.278286   10338 api_server.go:146] Checking apiserver status ...
I1018 23:25:10.278366   10338 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 23:25:10.278447   10338 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1018 23:25:10.311015   10338 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32775 SSHKeyPath:/home/devniel/.minikube/machines/minikube/id_rsa Username:docker}
I1018 23:25:10.417857   10338 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1827/cgroup
I1018 23:25:10.425260   10338 api_server.go:162] apiserver freezer: "7:freezer:/docker/07806c41de29446f7acd8e727f2b92ccee11a851a86a16638ff6bef17a4f84fc/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/5f51bb7eafac387a26fe69a5c5462156111b47515e95c48707c9179331ef0e8e"
I1018 23:25:10.425363   10338 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/07806c41de29446f7acd8e727f2b92ccee11a851a86a16638ff6bef17a4f84fc/kubepods/burstable/podf7c3d51df5e2ce4e433b64661ac4503c/5f51bb7eafac387a26fe69a5c5462156111b47515e95c48707c9179331ef0e8e/freezer.state
I1018 23:25:10.431538   10338 api_server.go:184] freezer state: "THAWED"
I1018 23:25:10.431594   10338 api_server.go:221] Checking apiserver healthz at https://127.0.0.1:32772/healthz ...
I1018 23:25:10.436357   10338 api_server.go:241] https://127.0.0.1:32772/healthz returned 200:
ok
I1018 23:25:10.436426   10338 tunnel.go:57] Checking for tunnels to cleanup...
I1018 23:25:10.437283   10338 kapi.go:59] client config for minikube: &rest.Config{Host:"https://127.0.0.1:32772", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/devniel/.minikube/profiles/minikube/client.crt", KeyFile:"/home/devniel/.minikube/profiles/minikube/client.key", CAFile:"/home/devniel/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16cdfd0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil)}
I1018 23:25:10.438374   10338 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
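
A quick way to see whether the tunnel has any LoadBalancer services to announce is to list the services and check their type and EXTERNAL-IP. This is a minimal sketch, assuming the current kubectl context points at the minikube cluster; it is not from the thread itself:

# List all services and keep only LoadBalancer entries
kubectl get svc --all-namespaces | grep LoadBalancer

# While the tunnel is running on the Docker driver, EXTERNAL-IP should show
# 127.0.0.1 rather than <pending>
kubectl get svc balanced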
GRXself commented 3 years ago

Try setting the exposed port to 8080: kubectl expose deployment balanced --type=LoadBalancer --port=8080. I was learning k8s and found that if you expose a port different from the one the image listens on (for k8s.gcr.io/echoserver:1.4 I think that is 8080), the connection keeps getting reset.
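
If you want to keep the service on port 8000 externally while forwarding to the container's 8080, kubectl expose also accepts a target port. A minimal sketch based on that suggestion, assuming the echoserver container really listens on 8080 as noted above:

# Expose the deployment on service port 8000 but send traffic to container port 8080
kubectl expose deployment balanced --type=LoadBalancer --port=8000 --target-port=8080

# With minikube tunnel running, the request should now succeed on the service port
curl http://127.0.0.1:8000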

tstromberg commented 3 years ago

I'm honestly a bit befuddled by this. @medyagh - any clue here?

tstromberg commented 3 years ago

This seems quite related to #9498, which leads me to think there may be an issue with tunnel on the Docker driver in general, rather than something specific to WSL2.

tstromberg commented 3 years ago

I'm going to de-dup this against #9498 for now, as I'm pretty sure the root cause is the same.

magnus-larsson commented 3 years ago

Thanks for clarifying this!

I can confirm that following the updated documentation described in #9498, i.e. changing the port from 8000 to 8080, solves the problem!
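
For reference, the corrected sequence looks roughly like this (a sketch of the fix confirmed above, reusing the deployment name from the original report):

# Expose the echoserver on the port it actually listens on
kubectl expose deployment balanced --type=LoadBalancer --port=8080

# In one terminal
minikube tunnel

# In another terminal: EXTERNAL-IP should show 127.0.0.1, and curl should return the echo response
kubectl get svc balanced
curl http://127.0.0.1:8080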