kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Can't get ingress-dns example to work on k8s 1.18 #10517

Closed reinvantveer closed 3 years ago

reinvantveer commented 3 years ago

Steps to reproduce the issue: I feel a bit silly but I can't get the https://github.com/kubernetes/minikube/blob/master/deploy/addons/ingress-dns/example/example.yaml to give me a ping and I don't know where to begin to troubleshoot.

For starters: we don't run k8s 1.19+ yet, so I need `kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/8209421c3e5ef0af4ca92a30da25b5e75c9255bc/deploy/addons/ingress-dns/example/example.yaml` (a pinned older revision of the example) as a fallback. But even with that, it doesn't give me a correct ping output.
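
To confirm which Ingress API version the cluster actually serves (and hence which example manifest applies), a quick check like the one below should work; this is my own sanity-check sketch, not part of the original steps, assuming standard kubectl 1.18 behaviour:

    # List every group/version the API server serves; on k8s 1.18 the example
    # manifest needs networking.k8s.io/v1beta1, which should appear in this list.
    kubectl api-versions | grep networking.k8s.io
    # 'kubectl explain' prints the KIND and VERSION that kubectl resolves "ingress" to.
    kubectl explain ingress | head -n 2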

  1. Execute (can be done in one go):
    
    # Just sort of following https://github.com/kubernetes/minikube/tree/master/deploy/addons/ingress-dns#linux
    # But then for k8s 1.18

I ran `minikube delete --all --purge` beforehand

minikube start \
  --cpus=2 \
  --kubernetes-version=v1.18.15 \
  --driver=docker \
  --memory=2g \
  --disk-size=2gb

minikube addons enable ingress-dns

sleep 10 # Allow $(minikube ip) to become available

echo "search test" | sudo tee -a /etc/resolvconf/resolv.conf.d/base echo "nameserver $(minikube ip)" | sudo tee -a /etc/resolvconf/resolv.conf.d/base echo "timeout 5" | sudo tee -a /etc/resolvconf/resolv.conf.d/base

sudo resolvconf -u
sudo systemctl disable --now resolvconf.service

kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/\
8209421c3e5ef0af4ca92a30da25b5e75c9255bc/deploy/addons/ingress-dns/\
example/example.yaml # Use a k8s 1.18 beta networking version

ping hello-john.test



**Full output of failed command:** 
ping: hello-john.test: Name or service not known
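
To narrow down whether the failure is on the host-resolver side or inside the add-on, I'd query the ingress-dns server directly and check the add-on pod. This is a diagnostic sketch of my own (the pod name is taken from the logs further down), not something from the original steps:

    # Ask the ingress-dns server inside the cluster directly, bypassing the host resolver.
    # An answer here means the add-on works and the problem is the host's resolver config.
    nslookup hello-john.test "$(minikube ip)"

    # What the host is actually using as its resolver right now
    cat /etc/resolv.conf

    # The add-on pod (name as it appears in the logs below) should be Running with a pod IP
    kubectl -n kube-system get pod kube-ingress-dns-minikube -o wide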

- Note1: the instructions state that "you _can_ [my emphasis] add an additional configuration file" when using Network Manager with the dnsmasq plugin (which I do), but they don't state I _should_, so I'd rather not (a sketch of that optional config follows these notes, purely for reference).

- Note2: there are a few tell-tale messages in the full `minikube logs` output that I highlighted in bold, along the lines of `failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wnstl through plugin: invalid network status for`. What follows the `for` is unknown: the message is truncated here, but not by me.

- Note3: I didn't enable the `ingress` addon; the instructions didn't say I _should_, so I didn't.
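
For reference, this is roughly what the optional NetworkManager + dnsmasq route from Note1 would look like; the drop-in path and restart step reflect my reading of the add-on README, so treat them as assumptions rather than verified instructions:

    # Hypothetical dnsmasq drop-in for NetworkManager: forward *.test queries to minikube
    echo "server=/test/$(minikube ip)" | sudo tee /etc/NetworkManager/dnsmasq.d/minikube.conf
    sudo systemctl restart NetworkManager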

**Full output of `minikube start` command used, if not already included:**
😄  minikube v1.17.1 on Ubuntu 20.04
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.15 preload ...
    > preloaded-images-k8s-v8-v1....: 512.05 MiB / 512.05 MiB  100.00% 5.65 MiB
🔥  Creating docker container (CPUs=2, Memory=2048MB) ...
🐳  Preparing Kubernetes v1.18.15 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

**Optional: Full output of `minikube logs` command:**
<details>
❯ minikube logs
==> Docker <==
-- Logs begin at Fri 2021-02-19 16:44:45 UTC, end at Fri 2021-02-19 16:46:38 UTC. --
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.497813880Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.497846596Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.497881713Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.497898835Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.501817729Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.501865656Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.501895922Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 19 16:44:45 minikube dockerd[175]: time="2021-02-19T16:44:45.501912640Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.345867157Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.392285593Z" level=warning msg="Your kernel does not support swap memory limit"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.392308523Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.392315286Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.392320701Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.392499682Z" level=info msg="Loading containers: start."
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.577875894Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 19 16:44:47 minikube dockerd[175]: time="2021-02-19T16:44:47.705592305Z" level=info msg="Loading containers: done."
Feb 19 16:44:48 minikube dockerd[175]: time="2021-02-19T16:44:48.210815166Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2
Feb 19 16:44:48 minikube dockerd[175]: time="2021-02-19T16:44:48.211150843Z" level=info msg="Daemon has completed initialization"
Feb 19 16:44:48 minikube systemd[1]: Started Docker Application Container Engine.
Feb 19 16:44:48 minikube dockerd[175]: time="2021-02-19T16:44:48.295062050Z" level=info msg="API listen on /run/docker.sock"
Feb 19 16:44:49 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:44:49 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Feb 19 16:44:50 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:44:50 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:44:50 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:44:50 minikube systemd[1]: Stopping Docker Application Container Engine...
Feb 19 16:44:50 minikube dockerd[175]: time="2021-02-19T16:44:50.228247292Z" level=info msg="Processing signal 'terminated'"
Feb 19 16:44:50 minikube dockerd[175]: time="2021-02-19T16:44:50.230878291Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Feb 19 16:44:50 minikube dockerd[175]: time="2021-02-19T16:44:50.231425314Z" level=info msg="Daemon shutdown complete"
Feb 19 16:44:50 minikube systemd[1]: docker.service: Succeeded.
Feb 19 16:44:50 minikube systemd[1]: Stopped Docker Application Container Engine.
Feb 19 16:44:50 minikube systemd[1]: Starting Docker Application Container Engine...
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.280829078Z" level=info msg="Starting up"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.282962518Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.282988930Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.283014624Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.283029290Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.283950424Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.283983502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.284000932Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.284011296Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.295299581Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.308055806Z" level=warning msg="Your kernel does not support swap memory limit"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.308102955Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.308118938Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.308132023Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.308620164Z" level=info msg="Loading containers: start."
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.438431245Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.495443267Z" level=info msg="Loading containers: done."
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.518578082Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.518735965Z" level=info msg="Daemon has completed initialization"
Feb 19 16:44:50 minikube systemd[1]: Started Docker Application Container Engine.
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.575208443Z" level=info msg="API listen on [::]:2376"
Feb 19 16:44:50 minikube dockerd[419]: time="2021-02-19T16:44:50.580592913Z" level=info msg="API listen on /var/run/docker.sock"
Feb 19 16:44:51 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:44:53 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:45:14 minikube systemd[1]: /lib/systemd/system/docker.service:13: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring.
Feb 19 16:45:32 minikube dockerd[419]: time="2021-02-19T16:45:32.063714074Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Feb 19 16:45:34 minikube dockerd[419]: time="2021-02-19T16:45:34.550583334Z" level=warning msg="Published ports are discarded when using host network mode"
Feb 19 16:45:34 minikube dockerd[419]: time="2021-02-19T16:45:34.585760853Z" level=warning msg="Published ports are discarded when using host network mode"

==> container status <==
CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID
d7dcaadca0429       gcr.io/google-samples/hello-app@sha256:c62ead5b8c15c231f9e786250b07909daf6c266d0fcddd93fea882eb722c3be4    40 seconds ago       Running             hello-world-app           0                   cff4936324314
5e3be70fe25d5       cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab   42 seconds ago       Running             minikube-ingress-dns      0                   04b50ed2ca24f
0f819bcf55d46       85069258b98ac                                                                                              About a minute ago   Running             storage-provisioner       0                   47456bdf31678
9990d248841f5       67da37a9a360e                                                                                              About a minute ago   Running             coredns                   0                   c309db2398d74
d37644ef3654c       6947b0d99ceb1                                                                                              About a minute ago   Running             kube-proxy                0                   4bb86abec586a
48fef2e22fc92       4b3915bbba95c                                                                                              About a minute ago   Running             kube-controller-manager   0                   6fdcd64f4a8ee
e9cbacb66c5cb       303ce5db0e90d                                                                                              About a minute ago   Running             etcd                      0                   e5b5eb9db0e0e
d62b94b18f377       21e89bb12d33b                                                                                              About a minute ago   Running             kube-apiserver            0                   1ef52f0bb1778
279df1772a354       db6167a559bac                                                                                              About a minute ago   Running             kube-scheduler            0                   49b21ef59d94e

==> coredns [9990d248841f] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=043bdca07e54ab6e4fc0457e3064048f34133d7e
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_02_19T17_45_14_0700
                    minikube.k8s.io/version=v1.17.1
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 19 Feb 2021 16:45:10 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Fri, 19 Feb 2021 16:46:32 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 19 Feb 2021 16:46:23 +0000   Fri, 19 Feb 2021 16:45:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 19 Feb 2021 16:46:23 +0000   Fri, 19 Feb 2021 16:45:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 19 Feb 2021 16:46:23 +0000   Fri, 19 Feb 2021 16:45:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 19 Feb 2021 16:46:23 +0000   Fri, 19 Feb 2021 16:45:30 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                8
  ephemeral-storage:  122818200Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16258876Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  122818200Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             16258876Ki
  pods:               110
System Info:
  Machine ID:                 f0bd2f0c29a34217af1a2ba1ab8f4fde
  System UUID:                c64905bd-f55f-4b66-a516-a160c9ab13f0
  Boot ID:                    429348d6-e3de-45cb-8f7e-93a3a486cef3
  Kernel Version:             5.4.0-65-generic
  OS Image:                   Ubuntu 20.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.2
  Kubelet Version:            v1.18.15
  Kube-Proxy Version:         v1.18.15
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                                ------------  ----------  ---------------  -------------  ---
  default                     hello-world-app-5f5d8b66bb-wnstl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
  kube-system                 coredns-66bff467f8-pmhsk            100m (1%)     0 (0%)      70Mi (0%)        170Mi (1%)     69s
  kube-system                 etcd-minikube                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
  kube-system                 kube-apiserver-minikube             250m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
  kube-system                 kube-controller-manager-minikube    200m (2%)     0 (0%)      0 (0%)           0 (0%)         76s
  kube-system                 kube-ingress-dns-minikube           0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
  kube-system                 kube-proxy-psh2t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
  kube-system                 kube-scheduler-minikube             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
  kube-system                 storage-provisioner                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                650m (8%)  0 (0%)
  memory             70Mi (0%)  170Mi (1%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:
  Type    Reason                   Age                From        Message
  ----    ------                   ----               ----        -------
  Normal  NodeHasSufficientMemory  97s (x4 over 97s)  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    97s (x5 over 97s)  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     97s (x4 over 97s)  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  Starting                 77s                kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  77s                kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    77s                kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     77s                kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeNotReady             77s                kubelet     Node minikube status is now: NodeNotReady
  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                69s                kubelet     Node minikube status is now: NodeReady
  Normal  Starting                 68s                kube-proxy  Starting kube-proxy.

==> dmesg <==
[Feb18 23:56] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 105)
[  +0.000001] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 105)
[  +0.000024] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000005] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 232)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 232)
[Feb19 00:08] mce: CPU2: Core temperature above threshold, cpu clock throttled (total events = 90)
[  +0.000025] mce: CPU6: Core temperature above threshold, cpu clock throttled (total events = 90)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000000] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000005] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 253)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 253)
[Feb19 00:16] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 119)
[  +0.000001] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 119)
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000034] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000002] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000000] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000002] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 255)
[  +0.000000] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 255)
[Feb19 01:22] mce: CPU6: Core temperature above threshold, cpu clock throttled (total events = 92)
[  +0.000001] mce: CPU2: Core temperature above threshold, cpu clock throttled (total events = 92)
[  +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000003] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 256)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 256)
[Feb19 02:54] mce: CPU6: Core temperature above threshold, cpu clock throttled (total events = 93)
[  +0.000022] mce: CPU2: Core temperature above threshold, cpu clock throttled (total events = 93)
[  +0.000002] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000000] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000002] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000001] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000002] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 257)
[  +0.000000] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 257)
[Feb19 04:25] mce: CPU4: Core temperature above threshold, cpu clock throttled (total events = 274)
[  +0.000001] mce: CPU0: Core temperature above threshold, cpu clock throttled (total events = 274)
[  +0.000024] mce: CPU4: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU0: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000004] mce: CPU5: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU1: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU6: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU2: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU7: Package temperature above threshold, cpu clock throttled (total events = 489)
[  +0.000001] mce: CPU3: Package temperature above threshold, cpu clock throttled (total events = 489)

==> etcd [e9cbacb66c5c] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-02-19 16:45:04.277121 I | etcdmain: etcd Version: 3.4.3
2021-02-19 16:45:04.277186 I | etcdmain: Git SHA: 3cf2f69b5
2021-02-19 16:45:04.277195 I | etcdmain: Go Version: go1.12.12
2021-02-19 16:45:04.277209 I | etcdmain: Go OS/Arch: linux/amd64
2021-02-19 16:45:04.277216 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-02-19 16:45:04.277357 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-02-19 16:45:04.278590 I | embed: name = minikube
2021-02-19 16:45:04.278620 I | embed: data dir = /var/lib/minikube/etcd
2021-02-19 16:45:04.278629 I | embed: member dir = /var/lib/minikube/etcd/member
2021-02-19 16:45:04.278636 I | embed: heartbeat = 100ms
2021-02-19 16:45:04.278643 I | embed: election = 1000ms
2021-02-19 16:45:04.278650 I | embed: snapshot count = 10000
2021-02-19 16:45:04.278666 I | embed: advertise client URLs = https://192.168.49.2:2379
2021-02-19 16:45:04.295380 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be
raft2021/02/19 16:45:04 INFO: aec36adc501070cc switched to configuration voters=()
raft2021/02/19 16:45:04 INFO: aec36adc501070cc became follower at term 0
raft2021/02/19 16:45:04 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2021/02/19 16:45:04 INFO: aec36adc501070cc became follower at term 1
raft2021/02/19 16:45:04 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2021-02-19 16:45:04.305059 W | auth: simple token is not cryptographically signed
2021-02-19 16:45:04.310984 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2021-02-19 16:45:04.376187 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2021/02/19 16:45:04 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
2021-02-19 16:45:04.377828 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
2021-02-19 16:45:04.381672 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-02-19 16:45:04.381732 I | embed: listening for peers on 192.168.49.2:2380
2021-02-19 16:45:04.382106 I | embed: listening for metrics on http://127.0.0.1:2381
raft2021/02/19 16:45:04 INFO: aec36adc501070cc is starting a new election at term 1
raft2021/02/19 16:45:04 INFO: aec36adc501070cc became candidate at term 2
raft2021/02/19 16:45:04 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
raft2021/02/19 16:45:04 INFO: aec36adc501070cc became leader at term 2
raft2021/02/19 16:45:04 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
2021-02-19 16:45:04.497581 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
2021-02-19 16:45:04.497829 I | embed: ready to serve client requests
2021-02-19 16:45:04.497880 I | embed: ready to serve client requests
2021-02-19 16:45:04.498650 I | etcdserver: setting up the initial cluster version to 3.4
2021-02-19 16:45:04.506290 N | etcdserver/membership: set the initial cluster version to 3.4
2021-02-19 16:45:04.509503 I | etcdserver/api: enabled capabilities for version 3.4
2021-02-19 16:45:04.511205 I | embed: serving client requests on 127.0.0.1:2379
2021-02-19 16:45:04.514004 I | embed: serving client requests on 192.168.49.2:2379
2021-02-19 16:45:29.893388 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-ingress-dns-minikube.166533eac9398e58\" " with result "range_response_count:1 size:832" took too long (186.48882ms) to execute

==> kernel <==
 16:46:40 up 1 day,  8:17,  0 users,  load average: 2.68, 2.61, 1.99
Linux minikube 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"

==> kube-apiserver [d62b94b18f37] <==
I0219 16:45:56.083979       1 log.go:172] http: TLS handshake error from 192.168.49.1:57874: remote error: tls: bad certificate
I0219 16:45:56.086680       1 log.go:172] http: TLS handshake error from 192.168.49.1:57870: remote error: tls: bad certificate
I0219 16:45:56.088639       1 log.go:172] http: TLS handshake error from 192.168.49.1:57876: remote error: tls: bad certificate
I0219 16:46:00.904289       1 log.go:172] http: TLS handshake error from 192.168.49.1:57920: remote error: tls: bad certificate
I0219 16:46:01.004714       1 log.go:172] http: TLS handshake error from 192.168.49.1:57926: remote error: tls: bad certificate
I0219 16:46:01.105457       1 log.go:172] http: TLS handshake error from 192.168.49.1:57932: remote error: tls: bad certificate
I0219 16:46:01.113256       1 log.go:172] http: TLS handshake error from 192.168.49.1:57940: remote error: tls: bad certificate
I0219 16:46:01.123547       1 log.go:172] http: TLS handshake error from 192.168.49.1:57952: remote error: tls: bad certificate
I0219 16:46:01.124622       1 log.go:172] http: TLS handshake error from 192.168.49.1:57954: remote error: tls: bad certificate
I0219 16:46:01.125536       1 log.go:172] http: TLS handshake error from 192.168.49.1:57956: remote error: tls: bad certificate
I0219 16:46:05.926093       1 log.go:172] http: TLS handshake error from 192.168.49.1:57978: remote error: tls: bad certificate
I0219 16:46:06.025269       1 log.go:172] http: TLS handshake error from 192.168.49.1:57984: remote error: tls: bad certificate
I0219 16:46:06.132421       1 log.go:172] http: TLS handshake error from 192.168.49.1:57990: remote error: tls: bad certificate
I0219 16:46:06.140848       1 log.go:172] http: TLS handshake error from 192.168.49.1:57998: remote error: tls: bad certificate
I0219 16:46:06.149013       1 log.go:172] http: TLS handshake error from 192.168.49.1:58012: remote error: tls: bad certificate
I0219 16:46:06.151091       1 log.go:172] http: TLS handshake error from 192.168.49.1:58010: remote error: tls: bad certificate
I0219 16:46:06.151600       1 log.go:172] http: TLS handshake error from 192.168.49.1:58016: remote error: tls: bad certificate
I0219 16:46:10.945679       1 log.go:172] http: TLS handshake error from 192.168.49.1:58042: remote error: tls: bad certificate
I0219 16:46:11.046062       1 log.go:172] http: TLS handshake error from 192.168.49.1:58048: remote error: tls: bad certificate
I0219 16:46:11.159939       1 log.go:172] http: TLS handshake error from 192.168.49.1:58054: remote error: tls: bad certificate
I0219 16:46:11.165243       1 log.go:172] http: TLS handshake error from 192.168.49.1:58060: remote error: tls: bad certificate
I0219 16:46:11.173019       1 log.go:172] http: TLS handshake error from 192.168.49.1:58070: remote error: tls: bad certificate
I0219 16:46:11.175174       1 log.go:172] http: TLS handshake error from 192.168.49.1:58076: remote error: tls: bad certificate
I0219 16:46:11.175737       1 log.go:172] http: TLS handshake error from 192.168.49.1:58078: remote error: tls: bad certificate
I0219 16:46:15.965698       1 log.go:172] http: TLS handshake error from 192.168.49.1:58100: remote error: tls: bad certificate
I0219 16:46:16.065713       1 log.go:172] http: TLS handshake error from 192.168.49.1:58106: remote error: tls: bad certificate
I0219 16:46:16.175304       1 log.go:172] http: TLS handshake error from 192.168.49.1:58112: remote error: tls: bad certificate
I0219 16:46:16.180157       1 log.go:172] http: TLS handshake error from 192.168.49.1:58118: remote error: tls: bad certificate
I0219 16:46:16.186882       1 log.go:172] http: TLS handshake error from 192.168.49.1:58124: remote error: tls: bad certificate
I0219 16:46:16.192098       1 log.go:172] http: TLS handshake error from 192.168.49.1:58136: remote error: tls: bad certificate
I0219 16:46:16.192783       1 log.go:172] http: TLS handshake error from 192.168.49.1:58132: remote error: tls: bad certificate
I0219 16:46:20.984174       1 log.go:172] http: TLS handshake error from 192.168.49.1:58156: remote error: tls: bad certificate
I0219 16:46:21.086230       1 log.go:172] http: TLS handshake error from 192.168.49.1:58162: remote error: tls: bad certificate
I0219 16:46:21.194091       1 log.go:172] http: TLS handshake error from 192.168.49.1:58168: remote error: tls: bad certificate
I0219 16:46:21.202961       1 log.go:172] http: TLS handshake error from 192.168.49.1:58174: remote error: tls: bad certificate
I0219 16:46:21.208174       1 log.go:172] http: TLS handshake error from 192.168.49.1:58180: remote error: tls: bad certificate
I0219 16:46:21.213782       1 log.go:172] http: TLS handshake error from 192.168.49.1:58190: remote error: tls: bad certificate
I0219 16:46:21.214928       1 log.go:172] http: TLS handshake error from 192.168.49.1:58192: remote error: tls: bad certificate
I0219 16:46:21.355531       1 log.go:172] http: TLS handshake error from 192.168.49.1:58200: remote error: tls: bad certificate
I0219 16:46:26.004725       1 log.go:172] http: TLS handshake error from 192.168.49.1:58220: remote error: tls: bad certificate
I0219 16:46:26.105586       1 log.go:172] http: TLS handshake error from 192.168.49.1:58226: remote error: tls: bad certificate
I0219 16:46:26.214838       1 log.go:172] http: TLS handshake error from 192.168.49.1:58232: remote error: tls: bad certificate
I0219 16:46:26.226991       1 log.go:172] http: TLS handshake error from 192.168.49.1:58240: remote error: tls: bad certificate
I0219 16:46:26.232008       1 log.go:172] http: TLS handshake error from 192.168.49.1:58246: remote error: tls: bad certificate
I0219 16:46:26.235222       1 log.go:172] http: TLS handshake error from 192.168.49.1:58258: remote error: tls: bad certificate
I0219 16:46:26.235374       1 log.go:172] http: TLS handshake error from 192.168.49.1:58256: remote error: tls: bad certificate
I0219 16:46:31.014060       1 log.go:172] http: TLS handshake error from 192.168.49.1:58276: remote error: tls: bad certificate
I0219 16:46:31.114435       1 log.go:172] http: TLS handshake error from 192.168.49.1:58282: remote error: tls: bad certificate
I0219 16:46:31.236166       1 log.go:172] http: TLS handshake error from 192.168.49.1:58290: remote error: tls: bad certificate
I0219 16:46:31.250336       1 log.go:172] http: TLS handshake error from 192.168.49.1:58298: remote error: tls: bad certificate
I0219 16:46:31.252672       1 log.go:172] http: TLS handshake error from 192.168.49.1:58302: remote error: tls: bad certificate
I0219 16:46:31.256951       1 log.go:172] http: TLS handshake error from 192.168.49.1:58314: remote error: tls: bad certificate
I0219 16:46:31.257163       1 log.go:172] http: TLS handshake error from 192.168.49.1:58312: remote error: tls: bad certificate
I0219 16:46:36.030070       1 log.go:172] http: TLS handshake error from 192.168.49.1:58334: remote error: tls: bad certificate
I0219 16:46:36.131616       1 log.go:172] http: TLS handshake error from 192.168.49.1:58340: remote error: tls: bad certificate
I0219 16:46:36.259715       1 log.go:172] http: TLS handshake error from 192.168.49.1:58348: remote error: tls: bad certificate
I0219 16:46:36.278890       1 log.go:172] http: TLS handshake error from 192.168.49.1:58358: remote error: tls: bad certificate
I0219 16:46:36.281915       1 log.go:172] http: TLS handshake error from 192.168.49.1:58360: remote error: tls: bad certificate
I0219 16:46:36.283122       1 log.go:172] http: TLS handshake error from 192.168.49.1:58370: remote error: tls: bad certificate
I0219 16:46:36.283305       1 log.go:172] http: TLS handshake error from 192.168.49.1:58372: remote error: tls: bad certificate

==> kube-controller-manager [48fef2e22fc9] <==
I0219 16:45:29.136380       1 node_lifecycle_controller.go:546] Starting node controller
I0219 16:45:29.136410       1 shared_informer.go:223] Waiting for caches to sync for taint
I0219 16:45:29.386181       1 controllermanager.go:533] Started "pv-protection"
I0219 16:45:29.387392       1 pv_protection_controller.go:83] Starting PV protection controller
I0219 16:45:29.387435       1 shared_informer.go:223] Waiting for caches to sync for PV protection
I0219 16:45:29.388019       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0219 16:45:29.391990       1 shared_informer.go:223] Waiting for caches to sync for resource quota
W0219 16:45:29.405173       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0219 16:45:29.425068       1 shared_informer.go:230] Caches are synced for GC 
I0219 16:45:29.436652       1 shared_informer.go:230] Caches are synced for taint 
I0219 16:45:29.436678       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
I0219 16:45:29.436789       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
I0219 16:45:29.436965       1 taint_manager.go:187] Starting NoExecuteTaintManager
I0219 16:45:29.437005       1 shared_informer.go:230] Caches are synced for ReplicationController 
W0219 16:45:29.437054       1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0219 16:45:29.437166       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0219 16:45:29.437407       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"8521cf96-2241-4585-ab5b-7efc18e788a6", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0219 16:45:29.438314       1 shared_informer.go:230] Caches are synced for service account 
I0219 16:45:29.481261       1 shared_informer.go:230] Caches are synced for daemon sets 
I0219 16:45:29.482150       1 shared_informer.go:230] Caches are synced for node 
I0219 16:45:29.482217       1 range_allocator.go:172] Starting range CIDR allocator
I0219 16:45:29.482236       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0219 16:45:29.482254       1 shared_informer.go:230] Caches are synced for cidrallocator 
I0219 16:45:29.487055       1 shared_informer.go:230] Caches are synced for job 
I0219 16:45:29.487693       1 shared_informer.go:230] Caches are synced for PV protection 
I0219 16:45:29.488296       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
I0219 16:45:29.488485       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
I0219 16:45:29.492490       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
I0219 16:45:29.493107       1 shared_informer.go:230] Caches are synced for endpoint 
I0219 16:45:29.502290       1 shared_informer.go:230] Caches are synced for HPA 
I0219 16:45:29.502407       1 shared_informer.go:230] Caches are synced for persistent volume 
I0219 16:45:29.502501       1 shared_informer.go:230] Caches are synced for TTL 
I0219 16:45:29.519164       1 shared_informer.go:230] Caches are synced for expand 
I0219 16:45:29.576049       1 shared_informer.go:230] Caches are synced for namespace 
I0219 16:45:29.587059       1 shared_informer.go:230] Caches are synced for PVC protection 
I0219 16:45:29.589402       1 shared_informer.go:230] Caches are synced for stateful set 
I0219 16:45:29.592123       1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0219 16:45:29.682331       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"c16ada44-4172-4d50-ac67-cbe541f9e656", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-psh2t
E0219 16:45:29.692599       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0219 16:45:29.777327       1 shared_informer.go:230] Caches are synced for endpoint_slice 
I0219 16:45:29.876125       1 shared_informer.go:230] Caches are synced for attach detach 
E0219 16:45:29.902219       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0219 16:45:29.998229       1 shared_informer.go:230] Caches are synced for resource quota 
E0219 16:45:30.001147       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"c16ada44-4172-4d50-ac67-cbe541f9e656", ResourceVersion:"211", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63749349914, loc:(*time.Location)(0x6cfe2e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00181caa0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00181cac0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00181cae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0009733c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00181cb00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00181cb20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.15", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00181cb60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0012d8e10), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00063edd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000137420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0000d4a98)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00063ee58)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0219 16:45:30.075238       1 shared_informer.go:230] Caches are synced for ReplicaSet 
I0219 16:45:30.075343       1 shared_informer.go:230] Caches are synced for deployment 
I0219 16:45:30.075401       1 shared_informer.go:230] Caches are synced for garbage collector 
I0219 16:45:30.075433       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0219 16:45:30.078177       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0219 16:45:30.079604       1 shared_informer.go:230] Caches are synced for disruption 
I0219 16:45:30.079641       1 disruption.go:339] Sending events to api server.
I0219 16:45:30.084506       1 shared_informer.go:230] Caches are synced for resource quota 
I0219 16:45:30.088113       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b58564e6-4271-4c45-b757-ffd10806a503", APIVersion:"apps/v1", ResourceVersion:"247", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
I0219 16:45:30.088531       1 shared_informer.go:230] Caches are synced for garbage collector 
I0219 16:45:30.101906       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6c3cbe50-b0df-4ef8-a9d3-837a55eddccc", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-pmhsk
I0219 16:45:34.501534       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0219 16:45:34.502435       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"coredns-66bff467f8-pmhsk", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/coredns-66bff467f8-pmhsk
I0219 16:45:34.502499       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-ingress-dns-minikube", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/kube-ingress-dns-minikube
I0219 16:45:36.545045       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"e6f9953e-93b8-45e6-a5fc-92f087a5eefe", APIVersion:"apps/v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
I0219 16:45:36.548874       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7d7f23a7-156c-4f3b-9e6f-fbc2f2c20eff", APIVersion:"apps/v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-wnstl

==> kube-proxy [d37644ef3654] <==
W0219 16:45:31.023007       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0219 16:45:31.029984       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
I0219 16:45:31.030012       1 server_others.go:186] Using iptables Proxier.
I0219 16:45:31.030309       1 server.go:583] Version: v1.18.15
I0219 16:45:31.030661       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0219 16:45:31.030831       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0219 16:45:31.030915       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0219 16:45:31.031168       1 config.go:133] Starting endpoints config controller
I0219 16:45:31.031232       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0219 16:45:31.031190       1 config.go:315] Starting service config controller
I0219 16:45:31.031328       1 shared_informer.go:223] Waiting for caches to sync for service config
I0219 16:45:31.131505       1 shared_informer.go:230] Caches are synced for endpoints config 
I0219 16:45:31.131534       1 shared_informer.go:230] Caches are synced for service config 

==> kube-scheduler [279df1772a35] <==
I0219 16:45:04.481046       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0219 16:45:04.481194       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0219 16:45:05.715069       1 serving.go:313] Generated self-signed cert in-memory
W0219 16:45:10.711221       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0219 16:45:10.711251       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0219 16:45:10.711264       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0219 16:45:10.711271       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0219 16:45:10.880290       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0219 16:45:10.880392       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0219 16:45:10.889956       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0219 16:45:10.891980       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0219 16:45:10.892035       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0219 16:45:10.892062       1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0219 16:45:10.981266       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0219 16:45:10.981717       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0219 16:45:10.981720       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0219 16:45:10.984590       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0219 16:45:10.985135       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0219 16:45:10.985475       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0219 16:45:10.985515       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0219 16:45:10.985861       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0219 16:45:10.986020       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0219 16:45:10.987030       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0219 16:45:10.987046       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0219 16:45:10.987429       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0219 16:45:11.825276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0219 16:45:11.879460       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0219 16:45:11.902467       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0219 16:45:11.977710       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0219 16:45:11.977986       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0219 16:45:11.978913       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0219 16:45:12.180616       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0219 16:45:12.276094       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0219 16:45:12.277723       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0219 16:45:14.892273       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
E0219 16:45:30.275839       1 factory.go:503] pod: kube-system/coredns-66bff467f8-pmhsk is already present in unschedulable queue

==> kubelet <==
-- Logs begin at Fri 2021-02-19 16:44:45 UTC, end at Fri 2021-02-19 16:46:40 UTC. --
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.432260    2448 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 19 16:45:22 minikube kubelet[2448]: E0219 16:45:22.459100    2448 kubelet.go:1848] skipping pod synchronization - container runtime status check may not have completed yet
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.495967    2448 kubelet_node_status.go:70] Attempting to register node minikube
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.513461    2448 kubelet_node_status.go:112] Node minikube was previously registered
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.513648    2448 kubelet_node_status.go:73] Successfully registered node minikube
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.641418    2448 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-02-19 16:45:22.641371158 +0000 UTC m=+8.133763221 LastTransitionTime:2021-02-19 16:45:22.641371158 +0000 UTC m=+8.133763221 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Feb 19 16:45:22 minikube kubelet[2448]: E0219 16:45:22.675430    2448 kubelet.go:1848] skipping pod synchronization - container runtime status check may not have completed yet
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.699937    2448 cpu_manager.go:184] [cpumanager] starting with none policy
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.699982    2448 cpu_manager.go:185] [cpumanager] reconciling every 10s
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.700027    2448 state_mem.go:36] [cpumanager] initializing new in-memory state store
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.700704    2448 state_mem.go:88] [cpumanager] updated default cpuset: ""
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.700734    2448 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.700767    2448 policy_none.go:43] [cpumanager] none policy: Start
Feb 19 16:45:22 minikube kubelet[2448]: I0219 16:45:22.705481    2448 plugin_manager.go:114] Starting Kubelet Plugin Manager
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.075921    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.079017    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.083852    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.087557    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.183486    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/51f5b8735f880fe2fead076b8e0f1ec6-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "51f5b8735f880fe2fead076b8e0f1ec6")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.183653    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-kubeconfig") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.183759    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.183862    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.183962    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/49974d2737ed8847e94327cf53a521e1-kubeconfig") pod "kube-scheduler-minikube" (UID: "49974d2737ed8847e94327cf53a521e1")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184042    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/c1128fb721f6a89fbf3c848e4ba20f78-etcd-certs") pod "etcd-minikube" (UID: "c1128fb721f6a89fbf3c848e4ba20f78")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184134    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/51f5b8735f880fe2fead076b8e0f1ec6-ca-certs") pod "kube-apiserver-minikube" (UID: "51f5b8735f880fe2fead076b8e0f1ec6")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184301    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/51f5b8735f880fe2fead076b8e0f1ec6-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "51f5b8735f880fe2fead076b8e0f1ec6")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184454    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184522    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/51f5b8735f880fe2fead076b8e0f1ec6-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "51f5b8735f880fe2fead076b8e0f1ec6")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184570    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-ca-certs") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184622    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184686    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/c1128fb721f6a89fbf3c848e4ba20f78-etcd-data") pod "etcd-minikube" (UID: "c1128fb721f6a89fbf3c848e4ba20f78")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184782    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/51f5b8735f880fe2fead076b8e0f1ec6-k8s-certs") pod "kube-apiserver-minikube" (UID: "51f5b8735f880fe2fead076b8e0f1ec6")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184940    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/092853f27155bc9891a742e03406fe7c-k8s-certs") pod "kube-controller-manager-minikube" (UID: "092853f27155bc9891a742e03406fe7c")
Feb 19 16:45:23 minikube kubelet[2448]: I0219 16:45:23.184987    2448 reconciler.go:157] Reconciler: start to sync state
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.589612    2448 kuberuntime_manager.go:995] updating runtime config through cri with podcidr 10.244.0.0/24
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.592142    2448 docker_service.go:354] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.593152    2448 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.702476    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.788039    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-6ntx8" (UniqueName: "kubernetes.io/secret/f8c8b249-e30d-4794-903d-09f753aa42e6-kube-proxy-token-6ntx8") pod "kube-proxy-psh2t" (UID: "f8c8b249-e30d-4794-903d-09f753aa42e6")
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.790818    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/f8c8b249-e30d-4794-903d-09f753aa42e6-kube-proxy") pod "kube-proxy-psh2t" (UID: "f8c8b249-e30d-4794-903d-09f753aa42e6")
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.791415    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/f8c8b249-e30d-4794-903d-09f753aa42e6-lib-modules") pod "kube-proxy-psh2t" (UID: "f8c8b249-e30d-4794-903d-09f753aa42e6")
Feb 19 16:45:29 minikube kubelet[2448]: I0219 16:45:29.791794    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/f8c8b249-e30d-4794-903d-09f753aa42e6-xtables-lock") pod "kube-proxy-psh2t" (UID: "f8c8b249-e30d-4794-903d-09f753aa42e6")
Feb 19 16:45:30 minikube kubelet[2448]: W0219 16:45:30.659765    2448 pod_container_deletor.go:77] Container "4bb86abec586aa546d211e7804a67a58fce1263ea8305d848d93e5e2b09d1356" not found in pod's containers
Feb 19 16:45:31 minikube kubelet[2448]: I0219 16:45:31.204674    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:31 minikube kubelet[2448]: I0219 16:45:31.302594    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/16abb87b-84fc-4361-be1e-b75db724cc39-config-volume") pod "coredns-66bff467f8-pmhsk" (UID: "16abb87b-84fc-4361-be1e-b75db724cc39")
Feb 19 16:45:31 minikube kubelet[2448]: I0219 16:45:31.302639    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-gfw9z" (UniqueName: "kubernetes.io/secret/16abb87b-84fc-4361-be1e-b75db724cc39-coredns-token-gfw9z") pod "coredns-66bff467f8-pmhsk" (UID: "16abb87b-84fc-4361-be1e-b75db724cc39")
Feb 19 16:45:32 minikube kubelet[2448]: W0219 16:45:32.051825    2448 pod_container_deletor.go:77] Container "c309db2398d748948f5f5c8f0767ab21e68dfa0cdad1d4086f52ef17891fc782" not found in pod's containers
Feb 19 16:45:32 minikube kubelet[2448]: W0219 16:45:32.052996    2448 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pmhsk through plugin: invalid network status for
Feb 19 16:45:33 minikube kubelet[2448]: W0219 16:45:33.067869    2448 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-pmhsk through plugin: invalid network status for
Feb 19 16:45:34 minikube kubelet[2448]: I0219 16:45:34.204916    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:34 minikube kubelet[2448]: I0219 16:45:34.216628    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-7dhj6" (UniqueName: "kubernetes.io/secret/14c31b06-4be8-4e74-9158-f19a6e3a9b2a-minikube-ingress-dns-token-7dhj6") pod "kube-ingress-dns-minikube" (UID: "14c31b06-4be8-4e74-9158-f19a6e3a9b2a")
Feb 19 16:45:36 minikube kubelet[2448]: I0219 16:45:36.552804    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:36 minikube kubelet[2448]: I0219 16:45:36.627458    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-kzqx4" (UniqueName: "kubernetes.io/secret/8e62d26b-8f77-499a-802f-46102d67e0b9-default-token-kzqx4") pod "hello-world-app-5f5d8b66bb-wnstl" (UID: "8e62d26b-8f77-499a-802f-46102d67e0b9")
Feb 19 16:45:37 minikube kubelet[2448]: W0219 16:45:37.225425    2448 pod_container_deletor.go:77] Container "cff493632431425616ed10d8786ee3f776366365ae8d3a2b8e09ad8c401d8bd5" not found in pod's containers
Feb 19 16:45:37 minikube kubelet[2448]: W0219 16:45:37.225441    2448 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wnstl through plugin: invalid network status for
Feb 19 16:45:38 minikube kubelet[2448]: I0219 16:45:38.202174    2448 topology_manager.go:233] [topologymanager] Topology Admit Handler
Feb 19 16:45:38 minikube kubelet[2448]: W0219 16:45:38.230706    2448 docker_sandbox.go:400] **failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wnstl through plugin: invalid network status for**
Feb 19 16:45:38 minikube kubelet[2448]: I0219 16:45:38.231443    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/ea461256-57f2-4693-aacb-a532dc3051ec-tmp") pod "storage-provisioner" (UID: "ea461256-57f2-4693-aacb-a532dc3051ec")
Feb 19 16:45:38 minikube kubelet[2448]: I0219 16:45:38.231680    2448 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-cbvdz" (UniqueName: "kubernetes.io/secret/ea461256-57f2-4693-aacb-a532dc3051ec-storage-provisioner-token-cbvdz") pod "storage-provisioner" (UID: "ea461256-57f2-4693-aacb-a532dc3051ec")
Feb 19 16:46:00 minikube kubelet[2448]: W0219 16:46:00.573372    2448 docker_sandbox.go:400] **failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-wnstl through plugin: invalid network status for**

==> storage-provisioner [0f819bcf55d4] <==
I0219 16:45:38.856306       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
I0219 16:45:38.863006       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
I0219 16:45:38.863040       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
I0219 16:45:38.867243       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0219 16:45:38.867392       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa64a00f-1c2c-45da-b0fb-95736362a24a", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_71e4f1f0-8edb-4a91-aade-c450eaeb8ef7 became leader
I0219 16:45:38.867536       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_71e4f1f0-8edb-4a91-aade-c450eaeb8ef7!
I0219 16:45:38.967870       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_71e4f1f0-8edb-4a91-aade-c450eaeb8ef7!

</details>
reinvantveer commented 3 years ago

Just tried it: unfortunately, fiddling with Network Manager doesn't help:

❯ echo "server=/test/$(minikube ip)" | sudo tee /etc/NetworkManager/dnsmasq.d/minikube.conf
server=/test/192.168.49.2
❯ sudo systemctl restart NetworkManager.service
❯ ping hello-john.test
ping: hello-john.test: Name or service not known
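
I still need to double-check that the dnsmasq plugin is actually active, i.e. that /etc/NetworkManager/NetworkManager.conf has dns=dnsmasq under [main]; from what I understand that section should look roughly like this (untested sketch, not re-verified on this machine):

[main]
dns=dnsmasq

followed by another sudo systemctl restart NetworkManager.service.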
seanrmurphy commented 3 years ago

I've been looking at something similar this morning and managed to get DNS resolution working (I think), but as far as I can see there is an issue with the port config: basically, ping works but curl does not.

I'm on Ubuntu Focal, using minikube 1.17.1 with ingress-dns - I'm using systemd-resolved.

My approach was the following:

Contents of /etc/resolv.conf

ubuntu@test:/etc$ minikube ip
192.168.49.2
ubuntu@test:/etc$ cat /etc/resolv.conf 
nameserver 127.0.0.53
options edns0 trust-ad
search openstacklocal

search test
nameserver 192.168.49.2 
timeout 5

I was then able to install the test application and do the following:

ubuntu@test:/etc$ nslookup hello-jane.test
Server:     127.0.0.53
Address:    127.0.0.53#53

Non-authoritative answer:
Name:   hello-jane.test
Address: 192.168.49.2

ubuntu@test:/etc$ ping hello-jane.test
PING hello-jane.test (192.168.49.2) 56(84) bytes of data.
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=2 ttl=64 time=0.096 ms
^C
--- hello-jane.test ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.078/0.087/0.096/0.009 ms
ubuntu@test:/etc$ curl hello-jane.test
curl: (7) Failed to connect to hello-jane.test port 80: Connection refused
ubuntu@test:/etc$ 
seanrmurphy commented 3 years ago

It seems I also needed to enable the ingress addon - with this, the example works in my environment.

ubuntu@test:~$ curl hello-jane.test
Hello, world!
Version: 1.0.0
Hostname: hello-world-app-7b9bf45d65-gvrtk
ubuntu@test:~$ 
reinvantveer commented 3 years ago

Sorry @seanrmurphy, I can't reproduce your steps; I'm on Focal as well. Could I trouble you for the commands you issued? Here's my attempt:

echo "Exposing apps to host"

LOCAL_DEV_DOMAIN=test
sudo service systemd-resolved stop
sudo rm /etc/resolv.conf

echo Adding ${LOCAL_DEV_DOMAIN} host to /etc/resolv.conf

echo nameserver 127.0.0.53 | sudo tee -a /etc/resolv.conf
echo options edns0 trust-ad | sudo tee -a /etc/resolv.conf
echo search openstacklocal | sudo tee -a /etc/resolv.conf
echo "" | sudo tee -a /etc/resolv.conf

echo "search ${LOCAL_DEV_DOMAIN}" | sudo tee -a /etc/resolv.conf
echo "nameserver $(minikube ip)" | sudo tee -a /etc/resolv.conf
echo "timeout 5" | sudo tee -a /etc/resolv.conf

sudo ln -nsf /run/systemd/resolve/resolv.conf /etc/resolv.conf
sudo service systemd-resolved start

echo deploying test application to verify ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/8209421c3e5ef0af4ca92a30da25b5e75c9255bc/deploy/addons/ingress-dns/example/example.yaml # Use a k8s 1.18 beta networking version
ping hello-john.test
kubectl delete -f https://raw.githubusercontent.com/kubernetes/minikube/8209421c3e5ef0af4ca92a30da25b5e75c9255bc/deploy/addons/ingress-dns/example/example.yaml

But this fails on ping hello-john.test, because when I list /etc/resolv.conf it just says:

# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 192.168.1.1

So on sudo service systemd-resolved start, it overwrites the configuration I just wrote. Even if I remove both... PS: I tried variations of

sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved

followed by

sudo systemctl restart NetworkManager
sudo systemctl start systemd-resolved
sudo systemctl enable systemd-resolved

but it just keeps the nameserver 192.168.1.1
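
I haven't tried it yet, but if I read man resolved.conf correctly, a drop-in for systemd-resolved might survive restarts better than hand-editing /etc/resolv.conf — a rough sketch (I'm not sure how it interacts with the per-link uplink servers):

sudo mkdir -p /etc/systemd/resolved.conf.d
cat <<EOF | sudo tee /etc/systemd/resolved.conf.d/minikube.conf
[Resolve]
DNS=$(minikube ip)
Domains=~test
EOF
sudo systemctl restart systemd-resolved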

seanrmurphy commented 3 years ago

Sure - this worked for me:

ubuntu@deploy:~$ minikube start
😄  minikube v1.17.1 on Ubuntu 20.04 (amd64)
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
ubuntu@deploy:~$ cd /etc/
ubuntu@deploy:/etc$ head resolv.conf 
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
ubuntu@deploy:/etc$ sudo systemctl stop systemd-resolved
ubuntu@deploy:/etc$ sudo rm resolv.conf
sudo: unable to resolve host deploy: Temporary failure in name resolution
ubuntu@deploy:/etc$ minikube ip
192.168.49.2
ubuntu@deploy:/etc$ sudo vi resolv.conf
sudo: unable to resolve host deploy: Temporary failure in name resolution
ubuntu@deploy:/etc$ cat resolv.conf 
nameserver 127.0.0.53
options edns0 trust-ad
search openstacklocal

search test
nameserver 192.168.49.2 
timeout 5

ubuntu@deploy:/etc$ sudo rm /run/systemd/resolve/resolv.conf
sudo: unable to resolve host deploy: Temporary failure in name resolution
ubuntu@deploy:/etc$ sudo ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
sudo: unable to resolve host deploy: Temporary failure in name resolution
ubuntu@deploy:/etc$ sudo systemctl restart systemd-resolved
sudo: unable to resolve host deploy: Temporary failure in name resolution
ubuntu@deploy:/etc$ ping www.google.com
PING www.google.com (172.217.168.36) 56(84) bytes of data.
64 bytes from zrh04s14-in-f4.1e100.net (172.217.168.36): icmp_seq=1 ttl=112 time=3.93 ms
C64 bytes from zrh04s14-in-f4.1e100.net (172.217.168.36): icmp_seq=2 ttl=112 time=3.98 ms
64 bytes from zrh04s14-in-f4.1e100.net (172.217.168.36): icmp_seq=3 ttl=112 time=3.90 ms
^C
--- www.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 3.899/3.935/3.978/0.032 ms
ubuntu@deploy:/etc$ minikube addons enable ingress
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled
ubuntu@deploy:/etc$ minikube addons enable ingress-dns
🌟  The 'ingress-dns' addon is enabled
ubuntu@deploy:/etc$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created
ubuntu@deploy:/etc$ ping hello-jane.test
PING hello-jane.test (192.168.49.2) 56(84) bytes of data.
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=1 ttl=64 time=0.068 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=2 ttl=64 time=0.078 ms
^C
--- hello-jane.test ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.068/0.073/0.078/0.005 ms
ubuntu@deploy:/etc$ ^C
ubuntu@deploy:/etc$ ^C
ubuntu@deploy:/etc$ curl hello-jane.test
Hello, world!
Version: 1.0.0
Hostname: hello-world-app-7b9bf45d65-zfcs4
ubuntu@deploy:/etc$ 

Note that some of the content of resolv.conf above is picked up from our OpenStack cluster (the openstacklocal domain and the trust-ad option); you should basically copy what was there before and add the stanza for the new domain, with the minikube IP acting as the nameserver.

Of course this would only work if you were using systemd-resolved for name resolution in the first instance.
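
In script form, the manual steps above come down to roughly this — same test domain as in my transcript, and treat it as a sketch rather than something I've run end to end:

sudo systemctl stop systemd-resolved
sudo cp /etc/resolv.conf /tmp/resolv.conf.orig   # remember the existing uplink config
sudo rm /etc/resolv.conf                         # drop the symlink to the stub resolver
sudo cp /tmp/resolv.conf.orig /etc/resolv.conf   # start again from the old uplink config
printf '\nsearch test\nnameserver %s\ntimeout 5\n' "$(minikube ip)" | sudo tee -a /etc/resolv.conf
sudo rm /run/systemd/resolve/resolv.conf
sudo ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
sudo systemctl restart systemd-resolved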

Hope that helps!

reinvantveer commented 3 years ago

Yes!!! Thanks ever so much.

ping -c 5 hello-john.geodan.dev
PING hello-john.geodan.dev (192.168.49.2) 56(84) bytes of data.
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=1 ttl=64 time=0.141 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=2 ttl=64 time=0.090 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=4 ttl=64 time=0.096 ms
64 bytes from 192.168.49.2 (192.168.49.2): icmp_seq=5 ttl=64 time=0.087 ms

--- hello-john.geodan.dev ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4068ms
rtt min/avg/max/mdev = 0.087/0.100/0.141/0.020 ms
curl hello-john.geodan.dev
Hello, world!
Version: 1.0.0
Hostname: hello-world-app-5f5d8b66bb-67pd7

Now for some documentation fixes to the ingress-dns addon, I guess...

reinvantveer commented 3 years ago

OK, fair warning: I rebooted my system this morning and my networking is broken; it seems I shouldn't have disabled NetworkManager. I need to do some more figuring out.

reinvantveer commented 3 years ago

Just restoring NetworkManager was enough. I don't know exactly where I got the advice to disable it, but it was a very bad idea :laughing:
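
In case anyone else ends up in the same spot: re-enabling it should be a one-liner, assuming the stock unit name on Ubuntu:

sudo systemctl enable --now NetworkManager.service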

shreyasshivakumara commented 2 years ago

Hello, I am facing the same issue as @reinvantveer while starting the ingress DNS. Is there anything wrong with the Minikube version, or am I doing something wrong? It seems like /etc/resolv.conf is not being updated after I overwrite it.

shreyas@ubuntu-machine:/etc$ minikube start
😄  minikube v1.23.2 on Ubuntu 20.04
✨  Automatically selected the docker driver. Other choices: none, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=7800MB) ...
🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

(base) shreyas@ubuntu-machine:/etc$ head resolv.conf 
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of

nameserver 8.8.8.8
nameserver 130.236.1.9
nameserver 130.236.1.10
nameserver 130.236.1.11

search test
nameserver 192.168.49.2 
timeout 5

(base) shreyas@ubuntu-machine:/etc$ sudo systemctl stop systemd-resolved
(base) shreyas@ubuntu-machine:/etc$ sudo rm resolv.conf
(base) shreyas@ubuntu-machine:/etc$ sudo nano resolv.conf
(base) shreyas@ubuntu-machine:/etc$ sudo rm /run/systemd/resolve/resolv.conf
(base) shreyas@ubuntu-machine:/etc$ sudo ln -s /etc/resolv.conf /run/systemd/resolve/resolv.conf
(base) shreyas@ubuntu-machine:/etc$ sudo systemctl restart systemd-resolved
(base) shreyas@ubuntu-machine:/etc$ ping www.google.com
PING www.google.com (142.250.74.100) 56(84) bytes of data.
64 bytes from arn11s10-in-f4.1e100.net (142.250.74.100): icmp_seq=1 ttl=53 time=7.21 ms
64 bytes from arn11s10-in-f4.1e100.net (142.250.74.100): icmp_seq=2 ttl=53 time=7.85 ms
64 bytes from arn11s10-in-f4.1e100.net (142.250.74.100): icmp_seq=3 ttl=53 time=7.76 ms

(base) shreyas@ubuntu-machine:/etc$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 130.236.1.9
nameserver 130.236.1.10
nameserver 130.236.1.11

(base) shreyas@ubuntu-machine:/etc$  minikube addons enable ingress
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.0-beta.3
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.0
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

(base) shreyas@ubuntu-machine:/etc$ minikube addons enable ingress-dns
    ▪ Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.1
🌟  The 'ingress-dns' addon is enabled

(base) shreyas@ubuntu-machine:/etc$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/minikube/master/deploy/addons/ingress-dns/example/example.yaml
deployment.apps/hello-world-app created
ingress.networking.k8s.io/example-ingress created
service/hello-world-app created
service/hello-world-app created

(base) shreyas@ubuntu-machine:/etc$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
hello-world-app-7b9bf45d65-9hlxg   1/1     Running   0          18s

(base) shreyas@ubuntu-machine:/etc$ ping hello-jane.test
ping: hello-jane.test: Name or service not known

(base) shreyas@ubuntu-machine:/etc$ nslookup hello-jane.test
Server:     130.236.1.9
Address:    130.236.1.9#53

** server can't find hello-jane.test: NXDOMAIN
reinvantveer commented 2 years ago

Hello, I am facing the same issue as @reinvantveer while starting the ingress DNS. Is there anything wrong with the Minikube version, or am I doing something wrong? It seems like /etc/resolv.conf is not being updated after I overwrite it.

Yeah I discussed this with a few colleagues the other day and we agreed that messing with /etc/resolv.conf is not the way forward here. There has to be a better way. So I would advise against using resolv.conf as a method. Configuring minikube access should not have to include root-required operations on system resources that are supposed to be managed by other services.

reinvantveer commented 2 years ago

@shreyasshivakumara If you like, you can reopen the issue. I feel there should be something available that does not require root rights or meddling with /etc/resolv.conf to make this work...
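
One direction that looks promising (untested on my side, and it still needs sudo, just no hand-editing of files owned by systemd-resolved) would be pointing resolved at the minikube IP for the test domain only, per link, with resolvectl. "br-minikube" below is a placeholder for whatever interface actually routes to the minikube IP, and the settings don't persist across reconnects:

sudo resolvectl dns br-minikube "$(minikube ip)"
sudo resolvectl domain br-minikube '~test'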

Nieto2018 commented 2 years ago

Hi @seanrmurphy,

Thanks, this solution works temporarily for me, but if I reboot my computer, restart the NetworkManager/systemd-resolved services, or change my Wi-Fi connection to a different network, the file /run/systemd/resolve/resolv.conf is overwritten and it stops working. I tried different solutions; the others either don't work or only work temporarily.

Finally, I found a definitive solution that works on Ubuntu 22.04 and keeps working after rebooting. The "Linux OS with Network Manager" section of the ingress-dns documentation seems to need a first step: check that /etc/resolv.conf is a symbolic link to /run/NetworkManager/resolv.conf (on Ubuntu 22.04 it is a symbolic link to /run/systemd/resolve/stub-resolv.conf by default); if it isn't, you should change it.
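
Roughly, the check and the change look like this (a sketch; the exact symlink target may differ on other distros):

ls -l /etc/resolv.conf   # see where the symlink currently points
sudo ln -sf /run/NetworkManager/resolv.conf /etc/resolv.conf
sudo systemctl restart NetworkManager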

Hope that helps!

(Sorry for my English.)

rodriguezc commented 2 years ago

I had the same issue with NetworkManager and finally solved it by following the official ingress-dns docs, BUT I had to move the resolv.conf file out of the way before restarting the network service:

sudo mv /etc/resolv.conf /etc/resolv.conf.old
systemctl restart NetworkManager.service
bitemarcz commented 4 months ago

Honestly, I'm really confused, but somehow this worked for me. What confuses me right now is that my resolv.conf was symlinked to /etc/resolvconf/resolv.conf, not to the systemd stub file or the resolv.conf file in /run/systemd.

I was still able to replicate what you provided, only using a different path for the symlinks. I'm wondering why the documentation is different, or what the delta is between this and what the official docs have.

I still haven't rebooted my workstation so I don't know exactly if this will persist. Will find out down the road.

I guess my only question is: what's the point of configuring the base file if you can bypass that and add the minikube IP as a nameserver directly to the resolv.conf file?

It seems to me something is broken: either updates from the base file aren't picked up properly, or things aren't mapped correctly by default, and it requires manual intervention to fix the mappings.

Any advice?