kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube start hangs for roughly 30s before printing `storage-provisioner` errors. #9371

Closed · matthewmichihara closed this issue 4 years ago

matthewmichihara commented 4 years ago
```
$ minikube version
minikube version: v1.13.1
commit: 1fd1f67f338cbab4b3e5a6e4c71c551f522ca138
$ minikube start --wait true --interactive false --delete-on-failure
😄  minikube v1.13.1 on Darwin 10.15.7
🆕  Kubernetes 1.19.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.19.2
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
❗  Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
endpoints/k8s.io-minikube-hostpath unchanged

stderr:
The Pod "storage-provisioner" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
  core.PodSpec{
    Volumes: []core.Volume{
        {Name: "tmp", VolumeSource: core.VolumeSource{HostPath: &core.HostPathVolumeSource{Path: "/tmp", Type: &"Directory"}}},
        {Name: "storage-provisioner-token-942mx", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: "storage-provisioner-token-942mx", DefaultMode: &420}}},
-       {
-           Name: "gcp-creds",
-           VolumeSource: core.VolumeSource{
-               HostPath: &core.HostPathVolumeSource{Path: "/var/lib/minikube/google_application_credentials.json", Type: &"File"},
-           },
-       },
    },
    InitContainers: nil,
    Containers: []core.Container{
        {
            ... // 5 identical fields
            Ports:   nil,
            EnvFrom: nil,
-           Env: []core.EnvVar{
-               {Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: "/google-app-creds.json"},
-               {Name: "PROJECT_ID", Value: "chelseamarket"},
-               {Name: "GCP_PROJECT", Value: "chelseamarket"},
-               {Name: "GCLOUD_PROJECT", Value: "chelseamarket"},
-               {Name: "GOOGLE_CLOUD_PROJECT", Value: "chelseamarket"},
-               {Name: "CLOUDSDK_CORE_PROJECT", Value: "chelseamarket"},
-           },
+           Env:       nil,
            Resources: core.ResourceRequirements{},
            VolumeMounts: []core.VolumeMount{
                {Name: "tmp", MountPath: "/tmp"},
                {Name: "storage-provisioner-token-942mx", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
-               {Name: "gcp-creds", ReadOnly: true, MountPath: "/google-app-creds.json"},
            },
            VolumeDevices: nil,
            LivenessProbe: nil,
            ... // 10 identical fields
        },
    },
    EphemeralContainers: nil,
    RestartPolicy:       "Always",
    ... // 24 identical fields
  }

]
🌟  Enabled addons: default-storageclass, gcp-auth, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" by default
```
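The stderr above is the apiserver rejecting an in-place pod update: a pod's spec is immutable apart from a few fields (`spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds`, `spec.tolerations`), so `kubectl apply` cannot add the new `gcp-creds` volume and environment variables to the already-running `storage-provisioner` pod. A minimal manual-recovery sketch (not an official fix; assumes the default `minikube` profile and that recreating the pod is acceptable):

```
# Sketch only: delete the stale pod so the addon manifest can
# recreate it with the gcp-auth mutations applied.
kubectl -n kube-system delete pod storage-provisioner
minikube addons enable storage-provisioner
```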

After the following line is printed, minikube hangs for roughly 30s: "📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration."
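For reference, the `gcp-auth-skip-secret` label mentioned in that message goes on the pod itself. A hypothetical example (pod name and image are placeholders) of creating a pod the gcp-auth webhook should skip:

```
# Hypothetical pod; only the label matters here.
kubectl run demo --image=nginx --labels="gcp-auth-skip-secret=true"
```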

Optional: Full output of `minikube logs` command:

```
==> Docker <==
-- Logs begin at Thu 2020-10-01 20:57:43 UTC, end at Thu 2020-10-01 21:06:15 UTC. --
Oct 01 20:57:43 minikube systemd[1]: Starting Docker Application Container Engine...
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.366174113Z" level=info msg="Starting up"
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.368474530Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.368651762Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.368757853Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.368772896Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.370543351Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.370683381Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.370856994Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.370919982Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.394782390Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.413333714Z" level=info msg="Loading containers: start."
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.587633477Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.636199356Z" level=info msg="Loading containers: done."
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.654202564Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.654327544Z" level=info msg="Daemon has completed initialization"
Oct 01 20:57:43 minikube systemd[1]: Started Docker Application Container Engine.
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.676577273Z" level=info msg="API listen on /var/run/docker.sock"
Oct 01 20:57:43 minikube dockerd[155]: time="2020-10-01T20:57:43.676668796Z" level=info msg="API listen on [::]:2376"
Oct 01 20:58:05 minikube dockerd[155]: time="2020-10-01T20:58:05.274681919Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0f3366b353ae4 bad58561c4be7 7 minutes ago Running storage-provisioner 27 e9407cc75bf5a
a38ceaaf90a16 ce2bab508e245 8 minutes ago Running gcp-auth 3 9c8e9b59ee3f6
0ef1f1d9928d9 67da37a9a360e 8 minutes ago Running coredns 7 6fa746ac5f256
6601dda05bde9 3439b7546f29b 8 minutes ago Running kube-proxy 7 a7303b4ec211f
ecfa591e4c922 bad58561c4be7 8 minutes ago Exited storage-provisioner 26 e9407cc75bf5a
322eb567c3872 da26705ccb4b5 8 minutes ago Running kube-controller-manager 7 d850dc369e94b
3abba99b1de3e 7e28efa976bd1 8 minutes ago Running kube-apiserver 6 b500f0ec872aa
fe29c847d6095 76216c34ed0c7 8 minutes ago Running kube-scheduler 7 32933dfef7a96
e927bd2ded264 303ce5db0e90d 8 minutes ago Running etcd 3 2e779ed61e38a
e97de61bb142f ce2bab508e245 17 minutes ago Exited gcp-auth 2 0bb41874da0bc
9ee1b13ccc88b 67da37a9a360e 17 minutes ago Exited coredns 6 740ce444b348f
0fcaf443cdd29 3439b7546f29b 17 minutes ago Exited kube-proxy 6 4f518eb0811d7
b8269bd6e5fb1 303ce5db0e90d 17 minutes ago Exited etcd 2 b81c2ab6ea9e2
2d88b4f294969 7e28efa976bd1 17 minutes ago Exited kube-apiserver 5 cec5f08acc1bd
fbd2b91925c19 da26705ccb4b5 17 minutes ago Exited kube-controller-manager 6 f2235b4fa775f
7abefa11f8c61 76216c34ed0c7 17 minutes ago Exited kube-scheduler 6 966ff59276d33
2d5f3abce53c4 4d4f44df9f905 59 minutes ago Exited patch 1 92d7d568ea078
e125111b0c5c1 4d4f44df9f905 59 minutes ago Exited create 0 a9061ccacd9e3

==> coredns [0ef1f1d9928d] <==
E1001 20:58:06.396699 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E1001 20:58:06.398075 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E1001 20:58:06.399262 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [9ee1b13ccc88] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
E1001 20:49:07.113324 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E1001 20:49:07.114208 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority
E1001 20:49:07.114364 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: x509: certificate signed by unknown authority

==> describe nodes <==
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=2243b4b97c131e3244c5f014faedca0d846599f5
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2020_09_10T10_19_31_0700
                    minikube.k8s.io/version=v1.12.3
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 10 Sep 2020 17:19:28 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Thu, 01 Oct 2020 21:06:12 +0000
Conditions:
  Type            Status  LastHeartbeatTime                LastTransitionTime               Reason                      Message
  ----            ------  -----------------                ------------------               ------                      -------
  MemoryPressure  False   Thu, 01 Oct 2020 21:03:03 +0000  Thu, 10 Sep 2020 17:19:24 +0000  KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure    False   Thu, 01 Oct 2020 21:03:03 +0000  Thu, 10 Sep 2020 17:19:24 +0000  KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure     False   Thu, 01 Oct 2020 21:03:03 +0000  Thu, 10 Sep 2020 17:19:24 +0000  KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready           True    Thu, 01 Oct 2020 21:03:03 +0000  Thu, 10 Sep 2020 17:19:42 +0000  KubeletReady                kubelet is posting ready status
Addresses:
  InternalIP:  172.17.0.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4035056Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  61255492Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4035056Ki
  pods:               110
System Info:
  Machine ID:                 36fc85453a074b889176fd3188b2ed16
  System UUID:                8d6989cf-686b-4354-b6c0-bcee341aa1a3
  Boot ID:                    73afbb7c-3ae5-4064-8643-a836ed0652af
  Kernel Version:             4.19.76-linuxkit
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.8
  Kubelet Version:            v1.18.3
  Kube-Proxy Version:         v1.18.3
Non-terminated Pods: (8 in total)
  Namespace    Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                              ------------  ----------  ---------------  -------------  ---
  gcp-auth     gcp-auth-6df46599c7-w87sn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59m
  kube-system  coredns-66bff467f8-zd2cp          100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     21d
  kube-system  etcd-minikube                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         60m
  kube-system  kube-apiserver-minikube           250m (6%)     0 (0%)      0 (0%)           0 (0%)         14d
  kube-system  kube-controller-manager-minikube  200m (5%)     0 (0%)      0 (0%)           0 (0%)         21d
  kube-system  kube-proxy-vdfcz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21d
  kube-system  kube-scheduler-minikube           100m (2%)     0 (0%)      0 (0%)           0 (0%)         21d
  kube-system  storage-provisioner               0 (0%)        0 (0%)      0 (0%)           0 (0%)         21d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                650m (16%)  0 (0%)
  memory             70Mi (1%)   170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                    From                  Message
  ----     ------                   ----                   ----                  -------
  Normal   NodeHasSufficientMemory  9d (x8 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    9d (x8 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9d (x7 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  9d                     kubelet, minikube     Updated Node Allocatable limit across pods
  Normal   Starting                 9d                     kubelet, minikube     Starting kubelet.
  Warning  readOnlySysFS            9d                     kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 9d                     kube-proxy, minikube  Starting kube-proxy.
  Normal   Starting                 9d                     kubelet, minikube     Starting kubelet.
  Normal   NodeAllocatableEnforced  9d                     kubelet, minikube     Updated Node Allocatable limit across pods
  Normal   NodeHasNoDiskPressure    9d (x8 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9d (x8 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  9d (x7 over 9d)        kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Warning  readOnlySysFS            9d                     kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 9d                     kube-proxy, minikube  Starting kube-proxy.
  Normal   NodeHasSufficientMemory  61m (x8 over 61m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal   Starting                 61m                    kubelet, minikube     Starting kubelet.
  Normal   NodeHasNoDiskPressure    61m (x8 over 61m)      kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     61m (x7 over 61m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  61m                    kubelet, minikube     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            61m                    kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 61m                    kube-proxy, minikube  Starting kube-proxy.
  Normal   NodeAllocatableEnforced  58m                    kubelet, minikube     Updated Node Allocatable limit across pods
  Normal   Starting                 58m                    kubelet, minikube     Starting kubelet.
  Normal   NodeHasNoDiskPressure    58m (x8 over 58m)      kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     58m (x7 over 58m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  58m (x8 over 58m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Warning  readOnlySysFS            58m                    kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 58m                    kube-proxy, minikube  Starting kube-proxy.
  Normal   Starting                 17m                    kubelet, minikube     Starting kubelet.
  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  17m                    kubelet, minikube     Updated Node Allocatable limit across pods
  Warning  readOnlySysFS            17m                    kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)
  Normal   Starting                 17m                    kube-proxy, minikube  Starting kube-proxy.
  Normal   Starting                 8m19s                  kubelet, minikube     Starting kubelet.
  Normal   NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet, minikube     Node minikube status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  8m19s                  kubelet, minikube     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet, minikube     Node minikube status is now: NodeHasSufficientMemory
  Normal   Starting                 8m10s                  kube-proxy, minikube  Starting kube-proxy.
  Warning  readOnlySysFS            8m10s                  kube-proxy, minikube  CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)

==> dmesg <==
[Oct 1 19:57] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[  +0.000759] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[  +0.001603] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A
[  +0.000737] virtio-pci 0000:00:02.0: PCI INT A: no GSI
[  +0.002941] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[  +0.000851] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[  +0.051025] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[  +0.599178] i8042: Can't read CTR while initializing i8042
[  +0.000704] i8042: probe of i8042 failed with error -5
[  +0.006694] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[  +0.001545] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[  +0.167562] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.021081] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +3.233838] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[  +0.079124] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[Oct 1 20:09] hrtimer: interrupt took 1748013 ns

==> etcd [b8269bd6e5fb] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-01 20:49:00.487602 I | etcdmain: etcd Version: 3.4.3
2020-10-01 20:49:00.487640 I | etcdmain: Git SHA: 3cf2f69b5
2020-10-01 20:49:00.487643 I | etcdmain: Go Version: go1.12.12
2020-10-01 20:49:00.487645 I | etcdmain: Go OS/Arch: linux/amd64
2020-10-01 20:49:00.487647 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-10-01 20:49:00.487688 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-01 20:49:00.487731 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-10-01 20:49:00.489692 I | embed: name = minikube
2020-10-01 20:49:00.489719 I | embed: data dir = /var/lib/minikube/etcd
2020-10-01 20:49:00.489724 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-01 20:49:00.489726 I | embed: heartbeat = 100ms
2020-10-01 20:49:00.489728 I | embed: election = 1000ms
2020-10-01 20:49:00.489730 I | embed: snapshot count = 10000
2020-10-01 20:49:00.489738 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-10-01 20:49:00.489741 I | embed: initial advertise peer URLs = https://172.17.0.2:2380
2020-10-01 20:49:00.489744 I | embed: initial cluster =
2020-10-01 20:49:00.494893 I | etcdserver: recovered store from snapshot at index 110011
2020-10-01 20:49:00.550343 I | mvcc: restore compact to 75325
2020-10-01 20:49:00.961895 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 114123
raft2020/10/01 20:49:00 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
raft2020/10/01 20:49:00 INFO: b273bc7741bcb020 became follower at term 7
raft2020/10/01 20:49:00 INFO: newRaft b273bc7741bcb020 [peers: [b273bc7741bcb020], term: 7, commit: 114123, applied: 110011, lastindex: 114123, lastterm: 7]
2020-10-01 20:49:00.963413 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-01 20:49:00.963474 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2 from store
2020-10-01 20:49:00.963607 I | etcdserver/membership: set the cluster version to 3.4 from store
2020-10-01 20:49:00.965218 I | mvcc: restore compact to 75325
2020-10-01 20:49:00.977369 W | auth: simple token is not cryptographically signed
2020-10-01 20:49:00.978554 I | etcdserver: starting server... [version: 3.4.3, cluster version: 3.4]
2020-10-01 20:49:00.978817 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-10-01 20:49:00.988607 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-10-01 20:49:00.988921 I | embed: listening for metrics on http://127.0.0.1:2381
2020-10-01 20:49:00.989111 I | embed: listening for peers on 172.17.0.2:2380
raft2020/10/01 20:49:01 INFO: b273bc7741bcb020 is starting a new election at term 7
raft2020/10/01 20:49:01 INFO: b273bc7741bcb020 became candidate at term 8
raft2020/10/01 20:49:01 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 8
raft2020/10/01 20:49:01 INFO: b273bc7741bcb020 became leader at term 8
raft2020/10/01 20:49:01 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 8
2020-10-01 20:49:01.670891 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 86482fea2286a1d2
2020-10-01 20:49:01.671315 I | embed: ready to serve client requests
2020-10-01 20:49:01.671558 I | embed: ready to serve client requests
2020-10-01 20:49:01.748622 I | embed: serving client requests on 127.0.0.1:2379
2020-10-01 20:49:01.750856 I | embed: serving client requests on 172.17.0.2:2379
2020-10-01 20:55:44.660658 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2020/10/01 20:55:44 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2020-10-01 20:55:44.680158 I | etcdserver: skipped leadership transfer for single voting member cluster
WARNING: 2020/10/01 20:55:44 grpc: addrConn.createTransport failed to connect to {172.17.0.2:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 172.17.0.2:2379: operation was canceled". Reconnecting...

==> etcd [e927bd2ded26] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-01 20:57:58.898402 I | etcdmain: etcd Version: 3.4.3
2020-10-01 20:57:58.898435 I | etcdmain: Git SHA: 3cf2f69b5
2020-10-01 20:57:58.898438 I | etcdmain: Go Version: go1.12.12
2020-10-01 20:57:58.898440 I | etcdmain: Go OS/Arch: linux/amd64
2020-10-01 20:57:58.898443 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2020-10-01 20:57:58.898488 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-10-01 20:57:58.898507 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-10-01 20:57:58.899034 I | embed: name = minikube
2020-10-01 20:57:58.899042 I | embed: data dir = /var/lib/minikube/etcd
2020-10-01 20:57:58.899046 I | embed: member dir = /var/lib/minikube/etcd/member
2020-10-01 20:57:58.899049 I | embed: heartbeat = 100ms
2020-10-01 20:57:58.899052 I | embed: election = 1000ms
2020-10-01 20:57:58.899054 I | embed: snapshot count = 10000
2020-10-01 20:57:58.899101 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-10-01 20:57:58.899112 I | embed: initial advertise peer URLs = https://172.17.0.2:2380
2020-10-01 20:57:58.899122 I | embed: initial cluster =
2020-10-01 20:57:58.908018 I | etcdserver: recovered store from snapshot at index 110011
2020-10-01 20:57:58.908754 I | mvcc: restore compact to 75325
2020-10-01 20:57:59.268035 I | etcdserver: restarting member b273bc7741bcb020 in cluster 86482fea2286a1d2 at commit index 114664
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 switched to configuration voters=(12858828581462913056)
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 became follower at term 8
raft2020/10/01 20:57:59 INFO: newRaft b273bc7741bcb020 [peers: [b273bc7741bcb020], term: 8, commit: 114664, applied: 110011, lastindex: 114664, lastterm: 8]
2020-10-01 20:57:59.268312 I | etcdserver/api: enabled capabilities for version 3.4
2020-10-01 20:57:59.268320 I | etcdserver/membership: added member b273bc7741bcb020 [https://172.17.0.3:2380] to cluster 86482fea2286a1d2 from store
2020-10-01 20:57:59.268324 I | etcdserver/membership: set the cluster version to 3.4 from store
2020-10-01 20:57:59.273415 I | mvcc: restore compact to 75325
2020-10-01 20:57:59.285946 W | auth: simple token is not cryptographically signed
2020-10-01 20:57:59.288300 I | etcdserver: starting server... [version: 3.4.3, cluster version: 3.4]
2020-10-01 20:57:59.289860 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-10-01 20:57:59.299831 I | etcdserver: b273bc7741bcb020 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-10-01 20:57:59.302675 I | embed: listening for peers on 172.17.0.2:2380
2020-10-01 20:57:59.302795 I | embed: listening for metrics on http://127.0.0.1:2381
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 is starting a new election at term 8
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 became candidate at term 9
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 received MsgVoteResp from b273bc7741bcb020 at term 9
raft2020/10/01 20:57:59 INFO: b273bc7741bcb020 became leader at term 9
raft2020/10/01 20:57:59 INFO: raft.node: b273bc7741bcb020 elected leader b273bc7741bcb020 at term 9
2020-10-01 20:57:59.670274 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 86482fea2286a1d2
2020-10-01 20:57:59.670507 I | embed: ready to serve client requests
2020-10-01 20:57:59.671527 I | embed: ready to serve client requests
2020-10-01 20:57:59.672362 I | embed: serving client requests on 172.17.0.2:2379
2020-10-01 20:57:59.673180 I | embed: serving client requests on 127.0.0.1:2379

==> kernel <==
21:06:20 up 1:09, 0 users, load average: 0.61, 0.43, 0.68
Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04 LTS"

==> kube-apiserver [2d88b4f29496] <==
W1001 20:55:52.627653 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:52.697815 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:52.889969 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:52.987814 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.015591 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.064529 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.067475 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.084503 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.128320 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.167414 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.316619 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.396826 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.404890 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.448230 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.455455 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.492563 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.502369 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.507490 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.538926 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.558754 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.585461 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.661146 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.664669 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.686631 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.717346 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.718267 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.729393 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.750085 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.765139 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.786654 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.821666 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.836312 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.864892 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.886779 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.897587 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.940880 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:53.952148 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.007350 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.008531 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.015480 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.048378 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.049672 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.120520 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.149116 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.155540 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.168932 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.223974 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.272335 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.347356 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.368471 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.372913 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.404899 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.422219 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.431307 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.465763 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.528619 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.553896 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.599508 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.663953 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1001 20:55:54.692957 1 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...

==> kube-apiserver [3abba99b1de3] <==
W1001 20:58:00.833121 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1001 20:58:00.836088 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1001 20:58:00.848358 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1001 20:58:00.863964 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W1001 20:58:00.863999 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I1001 20:58:00.872346 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1001 20:58:00.872408 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I1001 20:58:00.873791 1 client.go:361] parsed scheme: "endpoint"
I1001 20:58:00.873831 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1001 20:58:00.882276 1 client.go:361] parsed scheme: "endpoint"
I1001 20:58:00.882368 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I1001 20:58:02.546359 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1001 20:58:02.546367 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1001 20:58:02.546706 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I1001 20:58:02.547060 1 secure_serving.go:178] Serving securely on [::]:8443
I1001 20:58:02.547112 1 available_controller.go:387] Starting AvailableConditionController
I1001 20:58:02.547119 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1001 20:58:02.547131 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1001 20:58:02.548776 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1001 20:58:02.548848 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I1001 20:58:02.548991 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1001 20:58:02.549036 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1001 20:58:02.549072 1 autoregister_controller.go:141] Starting autoregister controller
I1001 20:58:02.549092 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1001 20:58:02.549216 1 crd_finalizer.go:266] Starting CRDFinalizer
I1001 20:58:02.550278 1 controller.go:81] Starting OpenAPI AggregationController
I1001 20:58:02.551002 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I1001 20:58:02.551086 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I1001 20:58:02.551434 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1001 20:58:02.551549 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I1001 20:58:02.551628 1 controller.go:86] Starting OpenAPI controller
I1001 20:58:02.551686 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1001 20:58:02.551766 1 naming_controller.go:291] Starting NamingConditionController
I1001 20:58:02.551856 1 establishing_controller.go:76] Starting EstablishingController
I1001 20:58:02.551964 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I1001 20:58:02.552028 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1001 20:58:02.668904 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I1001 20:58:02.669201 1 shared_informer.go:230] Caches are synced for crd-autoregister
I1001 20:58:02.674784 1 cache.go:39] Caches are synced for autoregister controller
I1001 20:58:02.689105 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
E1001 20:58:02.701088 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I1001 20:58:02.747542 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1001 20:58:02.752734 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1001 20:58:03.546721 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1001 20:58:03.546924 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1001 20:58:03.554743 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I1001 20:58:04.155645 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1001 20:58:04.177816 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1001 20:58:04.265856 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1001 20:58:04.280223 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1001 20:58:04.289385 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1001 20:58:08.982932 1 controller.go:606] quota admission added evaluator for: endpoints
I1001 20:58:08.996236 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1001 20:58:15.913679 1 trace.go:116] Trace[1349799274]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:735e1b9e-f98a-44f1-ace9-408a2c4ffa5d (started: 2020-10-01 20:58:05.913203205 +0000 UTC m=+6.918044868) (total time: 10.00043825s):
Trace[1349799274]: [10.00043825s] [10.00043825s] END
W1001 20:58:15.913869 1 dispatcher.go:181] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s: dial tcp 10.110.26.106:443: i/o timeout
I1001 20:58:15.934182 1 trace.go:116] Trace[1043273956]: "GuaranteedUpdate etcd3" type:*core.Pod (started: 2020-10-01 20:58:05.911104789 +0000 UTC m=+6.915946451) (total time: 10.023064399s):
Trace[1043273956]: [10.023064399s] [10.022975912s] END
I1001 20:58:15.934732 1 trace.go:116] Trace[944854656]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-10-01 20:58:05.910973054 +0000 UTC m=+6.915814691) (total time: 10.023474198s):
Trace[944854656]: [10.004072222s] [10.002108343s] About to apply patch

==> kube-controller-manager [322eb567c387] <==
I1001 20:58:07.886457 1 shared_informer.go:223] Waiting for caches to sync for taint
W1001 20:58:07.886599 1 controllermanager.go:525] Skipping "route"
I1001 20:58:08.035972 1 controllermanager.go:533] Started "persistentvolume-binder"
I1001 20:58:08.036095 1 pv_controller_base.go:295] Starting persistent volume controller
I1001 20:58:08.036110 1 shared_informer.go:223] Waiting for caches to sync for persistent volume
I1001 20:58:08.185429 1 controllermanager.go:533] Started "replicationcontroller"
I1001 20:58:08.185531 1 replica_set.go:181] Starting replicationcontroller controller
I1001 20:58:08.185538 1 shared_informer.go:223] Waiting for caches to sync for ReplicationController
I1001 20:58:08.335816 1 controllermanager.go:533] Started "job"
I1001 20:58:08.335871 1 job_controller.go:144] Starting job controller
I1001 20:58:08.335878 1 shared_informer.go:223] Waiting for caches to sync for job
I1001 20:58:08.489339 1 controllermanager.go:533] Started "ttl"
I1001 20:58:08.489423 1 ttl_controller.go:118] Starting TTL controller
I1001 20:58:08.489430 1 shared_informer.go:223] Waiting for caches to sync for TTL
I1001 20:58:08.636072 1 controllermanager.go:533] Started "bootstrapsigner"
I1001 20:58:08.636203 1 shared_informer.go:223] Waiting for caches to sync for bootstrap_signer
I1001 20:58:08.785654 1 node_lifecycle_controller.go:78] Sending events to api server
E1001 20:58:08.785824 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W1001 20:58:08.785865 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I1001 20:58:08.937275 1 controllermanager.go:533] Started "pvc-protection"
I1001 20:58:08.937317 1 pvc_protection_controller.go:101] Starting PVC protection controller
I1001 20:58:08.937333 1 shared_informer.go:223] Waiting for caches to sync for PVC protection
I1001 20:58:08.938221 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I1001 20:58:08.942868 1 shared_informer.go:223] Waiting for caches to sync for resource quota
W1001 20:58:08.950758 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1001 20:58:08.955148 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I1001 20:58:08.962154 1 shared_informer.go:230] Caches are synced for PV protection
I1001 20:58:08.974583 1 shared_informer.go:230] Caches are synced for deployment
I1001 20:58:08.980733 1 shared_informer.go:230] Caches are synced for endpoint
I1001 20:58:08.985654 1 shared_informer.go:230] Caches are synced for GC
I1001 20:58:08.985783 1 shared_informer.go:230] Caches are synced for ReplicaSet
I1001 20:58:08.989011 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I1001 20:58:08.989514 1 shared_informer.go:230] Caches are synced for TTL
I1001 20:58:08.994348 1 shared_informer.go:230] Caches are synced for endpoint_slice
I1001 20:58:09.000986 1 shared_informer.go:230] Caches are synced for service account
I1001 20:58:09.006856 1 shared_informer.go:230] Caches are synced for HPA
I1001 20:58:09.008253 1 shared_informer.go:230] Caches are synced for namespace
I1001 20:58:09.012176 1 shared_informer.go:230] Caches are synced for expand
I1001 20:58:09.035588 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I1001 20:58:09.036022 1 shared_informer.go:230] Caches are synced for job
I1001 20:58:09.036376 1 shared_informer.go:230] Caches are synced for persistent volume
I1001 20:58:09.037549 1 shared_informer.go:230] Caches are synced for PVC protection
I1001 20:58:09.136397 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I1001 20:58:09.164978 1 shared_informer.go:230] Caches are synced for daemon sets
I1001 20:58:09.173209 1 shared_informer.go:230] Caches are synced for disruption
I1001 20:58:09.173317 1 disruption.go:339] Sending events to api server.
I1001 20:58:09.185782 1 shared_informer.go:230] Caches are synced for ReplicationController
I1001 20:58:09.235841 1 shared_informer.go:230] Caches are synced for stateful set
I1001 20:58:09.286563 1 shared_informer.go:230] Caches are synced for attach detach
I1001 20:58:09.286825 1 shared_informer.go:230] Caches are synced for taint
I1001 20:58:09.287020 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
I1001 20:58:09.287081 1 taint_manager.go:187] Starting NoExecuteTaintManager
W1001 20:58:09.287165 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1001 20:58:09.287287 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I1001 20:58:09.287407 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b9ad5560-3135-4b85-80af-a3cc4ad4acbc", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1001 20:58:09.543172 1 shared_informer.go:230] Caches are synced for resource quota
I1001 20:58:09.591451 1 shared_informer.go:230] Caches are synced for resource quota
I1001 20:58:09.634378 1 shared_informer.go:230] Caches are synced for garbage collector
I1001 20:58:09.634445 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1001 20:58:09.638592 1 shared_informer.go:230] Caches are synced for garbage collector

==> kube-controller-manager [fbd2b91925c1] <==
I1001 20:49:10.287501 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I1001 20:49:10.287512 1 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator
I1001 20:49:10.586639 1 controllermanager.go:533] Started "disruption"
I1001 20:49:10.586731 1 disruption.go:331] Starting disruption controller
I1001 20:49:10.586739 1 shared_informer.go:223] Waiting for caches to sync for disruption
I1001 20:49:10.636025 1 request.go:621] Throttling request took 1.038801717s, request: GET:https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1beta1?timeout=32s
I1001 20:49:10.737898 1 controllermanager.go:533] Started "csrsigning"
I1001 20:49:10.737970 1 certificate_controller.go:119] Starting certificate controller "csrsigning"
I1001 20:49:10.737978 1 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning
I1001 20:49:10.737993 1 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key
I1001 20:49:10.886660 1 node_lifecycle_controller.go:78] Sending events to api server
E1001 20:49:10.886720 1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W1001 20:49:10.886727 1 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I1001 20:49:11.037433 1 controllermanager.go:533] Started "replicaset"
I1001 20:49:11.037952 1 replica_set.go:181] Starting replicaset controller
I1001 20:49:11.037977 1 shared_informer.go:223] Waiting for caches to sync for ReplicaSet
I1001 20:49:11.070343 1 shared_informer.go:230] Caches are synced for endpoint
I1001 20:49:11.085637 1 shared_informer.go:230] Caches are synced for job
I1001 20:49:11.087771 1 shared_informer.go:230] Caches are synced for expand
I1001 20:49:11.087971 1 shared_informer.go:230] Caches are synced for service account
I1001 20:49:11.088233 1 shared_informer.go:230] Caches are synced for HPA
I1001 20:49:11.089957 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I1001 20:49:11.098808 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I1001 20:49:11.109179 1 shared_informer.go:230] Caches are synced for PV protection
I1001 20:49:11.138115 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I1001 20:49:11.144366 1 shared_informer.go:230] Caches are synced for endpoint_slice
W1001 20:49:11.144571 1 endpointslice_controller.go:260] Error syncing endpoint slices for service "gcp-auth/gcp-auth", retrying. Error: node "minikube" not found
W1001 20:49:11.144723 1 endpointslice_controller.go:260] Error syncing endpoint slices for service "kube-system/kube-dns", retrying. Error: node "minikube" not found
I1001 20:49:11.144779 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"a25d4084-84a9-4e38-b438-b65e66c48096", APIVersion:"v1", ResourceVersion:"211", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: node "minikube" not found
I1001 20:49:11.144853 1 event.go:278] Event(v1.ObjectReference{Kind:"Service", Namespace:"gcp-auth", Name:"gcp-auth", UID:"e8860f63-c486-4b16-84e9-cbfc364ad4d9", APIVersion:"v1", ResourceVersion:"73642", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service gcp-auth/gcp-auth: node "minikube" not found
W1001 20:49:11.146336 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1001 20:49:11.148759 1 shared_informer.go:230] Caches are synced for namespace
I1001 20:49:11.151758 1 shared_informer.go:230] Caches are synced for PVC protection
I1001 20:49:11.187095 1 shared_informer.go:230] Caches are synced for TTL
I1001 20:49:11.188206 1 shared_informer.go:230] Caches are synced for persistent volume
I1001 20:49:11.218954 1 shared_informer.go:230] Caches are synced for GC
I1001 20:49:11.237894 1 shared_informer.go:230] Caches are synced for attach detach
I1001 20:49:11.350121 1 shared_informer.go:230] Caches are synced for ReplicationController
I1001 20:49:11.387930 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I1001 20:49:11.438158 1 shared_informer.go:230] Caches are synced for ReplicaSet
I1001 20:49:11.487085 1 shared_informer.go:230] Caches are synced for disruption
I1001 20:49:11.487126 1 disruption.go:339] Sending events to api server.
I1001 20:49:11.487085 1 shared_informer.go:230] Caches are synced for stateful set
I1001 20:49:11.489200 1 shared_informer.go:230] Caches are synced for deployment
I1001 20:49:11.502869 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I1001 20:49:11.614575 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I1001 20:49:11.649038 1 shared_informer.go:230] Caches are synced for taint
I1001 20:49:11.649200 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W1001 20:49:11.649521 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1001 20:49:11.649566 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1001 20:49:11.649704 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I1001 20:49:11.650069 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"b9ad5560-3135-4b85-80af-a3cc4ad4acbc", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I1001 20:49:11.667632 1 shared_informer.go:230] Caches are synced for resource quota
I1001 20:49:11.674006 1 shared_informer.go:230] Caches are synced for daemon sets
I1001 20:49:11.690262 1 shared_informer.go:230] Caches are synced for garbage collector
I1001 20:49:11.695148 1 shared_informer.go:230] Caches are synced for garbage collector
I1001 20:49:11.695223 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced.
```
Proceeding to collect garbage I1001 20:49:11.714915 1 shared_informer.go:230] Caches are synced for resource quota I1001 20:51:25.348141 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"java-hello-world", UID:"fd56e86d-cc72-4a2d-ae21-0d7b6f4b2439", APIVersion:"apps/v1", ResourceVersion:"75773", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set java-hello-world-7f7f4b8ff9 to 1 I1001 20:51:25.358026 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"java-hello-world-7f7f4b8ff9", UID:"135262fd-87eb-49c4-9f77-abb9c782f832", APIVersion:"apps/v1", ResourceVersion:"75774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: java-hello-world-7f7f4b8ff9-sfvsz ==> kube-proxy [0fcaf443cdd2] <== W1001 20:49:06.998413 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I1001 20:49:07.045445 1 node.go:136] Successfully retrieved node IP: 172.17.0.2 I1001 20:49:07.045686 1 server_others.go:186] Using iptables Proxier. W1001 20:49:07.045706 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1001 20:49:07.045712 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1001 20:49:07.046520 1 server.go:583] Version: v1.18.3 I1001 20:49:07.046928 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1001 20:49:07.047634 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1001 20:49:07.047769 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1001 20:49:07.047881 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1001 20:49:07.048083 1 config.go:315] Starting service config controller I1001 20:49:07.048091 1 shared_informer.go:223] Waiting for caches to sync for service config I1001 20:49:07.048127 1 config.go:133] Starting endpoints config controller I1001 20:49:07.048138 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I1001 20:49:07.148345 1 shared_informer.go:230] Caches are synced for service config I1001 20:49:07.148420 1 shared_informer.go:230] Caches are synced for endpoints config E1001 20:55:44.711604 1 reflector.go:382] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=75984&timeout=7m7s&timeoutSeconds=427&watch=true: dial tcp 172.17.0.2:8443: connect: connection refused ==> kube-proxy [6601dda05bde] <== W1001 20:58:06.203400 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy I1001 20:58:06.212173 1 node.go:136] Successfully retrieved node IP: 172.17.0.2 I1001 20:58:06.212341 1 server_others.go:186] Using iptables Proxier. 
W1001 20:58:06.212350 1 server_others.go:436] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1001 20:58:06.212353 1 server_others.go:447] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1001 20:58:06.213185 1 server.go:583] Version: v1.18.3 I1001 20:58:06.215815 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1001 20:58:06.216361 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1001 20:58:06.216603 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1001 20:58:06.216745 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1001 20:58:06.217216 1 config.go:133] Starting endpoints config controller I1001 20:58:06.217233 1 shared_informer.go:223] Waiting for caches to sync for endpoints config I1001 20:58:06.217280 1 config.go:315] Starting service config controller I1001 20:58:06.217297 1 shared_informer.go:223] Waiting for caches to sync for service config I1001 20:58:06.318140 1 shared_informer.go:230] Caches are synced for service config I1001 20:58:06.318214 1 shared_informer.go:230] Caches are synced for endpoints config ==> kube-scheduler [7abefa11f8c6] <== I1001 20:49:00.383854 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:49:00.383900 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:49:01.274281 1 serving.go:313] Generated self-signed cert in-memory I1001 20:49:04.895435 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:49:04.895449 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W1001 20:49:04.952616 1 authorization.go:47] Authorization is disabled W1001 20:49:04.952684 1 authentication.go:40] Authentication is disabled I1001 20:49:04.952705 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I1001 20:49:04.954511 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1001 20:49:04.954538 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1001 20:49:04.954567 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1001 20:49:04.954572 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1001 20:49:04.955802 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I1001 20:49:04.955881 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1001 20:49:05.055059 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1001 20:49:05.055783 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file ==> kube-scheduler [fe29c847d609] <== I1001 20:57:58.987195 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:57:58.987268 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:57:59.792406 1 serving.go:313] Generated self-signed cert in-memory W1001 20:58:02.604864 1 authentication.go:349] Unable to get 
configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1001 20:58:02.604934 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1001 20:58:02.604949 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous. W1001 20:58:02.604956 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1001 20:58:02.677698 1 registry.go:150] Registering EvenPodsSpread predicate and priority function I1001 20:58:02.677716 1 registry.go:150] Registering EvenPodsSpread predicate and priority function W1001 20:58:02.683073 1 authorization.go:47] Authorization is disabled W1001 20:58:02.683085 1 authentication.go:40] Authentication is disabled I1001 20:58:02.683092 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I1001 20:58:02.686476 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1001 20:58:02.686670 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1001 20:58:02.687198 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I1001 20:58:02.690872 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1001 20:58:02.786937 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Thu 2020-10-01 20:57:43 UTC, end at Thu 2020-10-01 21:06:26 UTC. 
-- Oct 01 20:58:02 minikube kubelet[701]: E1001 20:58:02.670679 701 reflector.go:178] object-"kube-system"/"kube-proxy-token-xnqsn": Failed to list *v1.Secret: secrets "kube-proxy-token-xnqsn" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object Oct 01 20:58:02 minikube kubelet[701]: E1001 20:58:02.684536 701 reflector.go:178] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object Oct 01 20:58:02 minikube kubelet[701]: E1001 20:58:02.691417 701 reflector.go:178] object-"gcp-auth"/"gcp-auth-certs": Failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node "minikube" and this object Oct 01 20:58:02 minikube kubelet[701]: E1001 20:58:02.691549 701 reflector.go:178] object-"gcp-auth"/"default-token-ggzcp": Failed to list *v1.Secret: secrets "default-token-ggzcp" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node "minikube" and this object Oct 01 20:58:02 minikube kubelet[701]: E1001 20:58:02.693108 701 reflector.go:178] object-"default"/"default-token-hgb7f": Failed to list *v1.Secret: secrets "default-token-hgb7f" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "minikube" and this object Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698774 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-942mx" (UniqueName: "kubernetes.io/secret/84c9e408-6f1f-4018-8818-8c347b123f33-storage-provisioner-token-942mx") pod "storage-provisioner" (UID: "84c9e408-6f1f-4018-8818-8c347b123f33") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698836 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/938a71ee-c19d-4693-803c-4724ffa18bbb-lib-modules") pod "kube-proxy-vdfcz" (UID: "938a71ee-c19d-4693-803c-4724ffa18bbb") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698863 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/18222200-1dbe-49d1-938a-f677800c4e90-webhook-certs") pod "gcp-auth-6df46599c7-w87sn" (UID: "18222200-1dbe-49d1-938a-f677800c4e90") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698888 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-ggzcp" (UniqueName: "kubernetes.io/secret/18222200-1dbe-49d1-938a-f677800c4e90-default-token-ggzcp") pod "gcp-auth-6df46599c7-w87sn" (UID: "18222200-1dbe-49d1-938a-f677800c4e90") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698908 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/84c9e408-6f1f-4018-8818-8c347b123f33-tmp") pod "storage-provisioner" (UID: "84c9e408-6f1f-4018-8818-8c347b123f33") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698921 701 
reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4c96b21d-cd0d-45d0-95c1-daf662e885de-config-volume") pod "coredns-66bff467f8-zd2cp" (UID: "4c96b21d-cd0d-45d0-95c1-daf662e885de") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698936 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qp2cb" (UniqueName: "kubernetes.io/secret/4c96b21d-cd0d-45d0-95c1-daf662e885de-coredns-token-qp2cb") pod "coredns-66bff467f8-zd2cp" (UID: "4c96b21d-cd0d-45d0-95c1-daf662e885de") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698947 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy") pod "kube-proxy-vdfcz" (UID: "938a71ee-c19d-4693-803c-4724ffa18bbb") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698960 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/938a71ee-c19d-4693-803c-4724ffa18bbb-xtables-lock") pod "kube-proxy-vdfcz" (UID: "938a71ee-c19d-4693-803c-4724ffa18bbb") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698971 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-xnqsn" (UniqueName: "kubernetes.io/secret/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy-token-xnqsn") pod "kube-proxy-vdfcz" (UID: "938a71ee-c19d-4693-803c-4724ffa18bbb") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698983 701 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/18222200-1dbe-49d1-938a-f677800c4e90-gcp-project") pod "gcp-auth-6df46599c7-w87sn" (UID: "18222200-1dbe-49d1-938a-f677800c4e90") Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.698991 701 reconciler.go:157] Reconciler: start to sync state Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.773315 701 kubelet_node_status.go:112] Node minikube was previously registered Oct 01 20:58:02 minikube kubelet[701]: I1001 20:58:02.773429 701 kubelet_node_status.go:73] Successfully registered node minikube Oct 01 20:58:03 minikube kubelet[701]: I1001 20:58:03.277092 701 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-hgb7f" (UniqueName: "kubernetes.io/secret/0da959c0-0923-4a9d-b4fa-2655545e6e7a-default-token-hgb7f") pod "0da959c0-0923-4a9d-b4fa-2655545e6e7a" (UID: "0da959c0-0923-4a9d-b4fa-2655545e6e7a") Oct 01 20:58:03 minikube kubelet[701]: W1001 20:58:03.278214 701 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/0da959c0-0923-4a9d-b4fa-2655545e6e7a/volumes/kubernetes.io~secret/default-token-hgb7f: ClearQuota called, but quotas disabled Oct 01 20:58:03 minikube kubelet[701]: I1001 20:58:03.278580 701 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da959c0-0923-4a9d-b4fa-2655545e6e7a-default-token-hgb7f" (OuterVolumeSpecName: "default-token-hgb7f") pod "0da959c0-0923-4a9d-b4fa-2655545e6e7a" (UID: "0da959c0-0923-4a9d-b4fa-2655545e6e7a"). InnerVolumeSpecName "default-token-hgb7f". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 20:58:03 minikube kubelet[701]: I1001 20:58:03.378786 701 reconciler.go:319] Volume detached for volume "default-token-hgb7f" (UniqueName: "kubernetes.io/secret/0da959c0-0923-4a9d-b4fa-2655545e6e7a-default-token-hgb7f") on node "minikube" DevicePath "" Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.277981 701 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: failed to sync secret cache: timed out waiting for the condition Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278092 701 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/18222200-1dbe-49d1-938a-f677800c4e90-webhook-certs podName:18222200-1dbe-49d1-938a-f677800c4e90 nodeName:}" failed. No retries permitted until 2020-10-01 20:58:04.778067994 +0000 UTC m=+13.006186339 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/18222200-1dbe-49d1-938a-f677800c4e90-webhook-certs\") pod \"gcp-auth-6df46599c7-w87sn\" (UID: \"18222200-1dbe-49d1-938a-f677800c4e90\") : failed to sync secret cache: timed out waiting for the condition" Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278501 701 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278537 701 secret.go:195] Couldn't get secret kube-system/coredns-token-qp2cb: failed to sync secret cache: timed out waiting for the condition Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278561 701 secret.go:195] Couldn't get secret kube-system/kube-proxy-token-xnqsn: failed to sync secret cache: timed out waiting for the condition Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278576 701 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy podName:938a71ee-c19d-4693-803c-4724ffa18bbb nodeName:}" failed. No retries permitted until 2020-10-01 20:58:04.77855628 +0000 UTC m=+13.006674628 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy\") pod \"kube-proxy-vdfcz\" (UID: \"938a71ee-c19d-4693-803c-4724ffa18bbb\") : failed to sync configmap cache: timed out waiting for the condition" Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278599 701 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/4c96b21d-cd0d-45d0-95c1-daf662e885de-coredns-token-qp2cb podName:4c96b21d-cd0d-45d0-95c1-daf662e885de nodeName:}" failed. No retries permitted until 2020-10-01 20:58:04.77858604 +0000 UTC m=+13.006704372 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-qp2cb\" (UniqueName: \"kubernetes.io/secret/4c96b21d-cd0d-45d0-95c1-daf662e885de-coredns-token-qp2cb\") pod \"coredns-66bff467f8-zd2cp\" (UID: \"4c96b21d-cd0d-45d0-95c1-daf662e885de\") : failed to sync secret cache: timed out waiting for the condition" Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.278620 701 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy-token-xnqsn podName:938a71ee-c19d-4693-803c-4724ffa18bbb nodeName:}" failed. No retries permitted until 2020-10-01 20:58:04.77860952 +0000 UTC m=+13.006727855 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-xnqsn\" (UniqueName: \"kubernetes.io/secret/938a71ee-c19d-4693-803c-4724ffa18bbb-kube-proxy-token-xnqsn\") pod \"kube-proxy-vdfcz\" (UID: \"938a71ee-c19d-4693-803c-4724ffa18bbb\") : failed to sync secret cache: timed out waiting for the condition" Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.280482 701 configmap.go:200] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Oct 01 20:58:04 minikube kubelet[701]: E1001 20:58:04.280559 701 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/4c96b21d-cd0d-45d0-95c1-daf662e885de-config-volume podName:4c96b21d-cd0d-45d0-95c1-daf662e885de nodeName:}" failed. No retries permitted until 2020-10-01 20:58:04.780541578 +0000 UTC m=+13.008659911 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c96b21d-cd0d-45d0-95c1-daf662e885de-config-volume\") pod \"coredns-66bff467f8-zd2cp\" (UID: \"4c96b21d-cd0d-45d0-95c1-daf662e885de\") : failed to sync configmap cache: timed out waiting for the condition" Oct 01 20:58:04 minikube kubelet[701]: W1001 20:58:04.874853 701 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-rc0286fd65a534f87abf57b37d39d0a81.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-rc0286fd65a534f87abf57b37d39d0a81.scope: no such file or directory Oct 01 20:58:04 minikube kubelet[701]: W1001 20:58:04.876146 701 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-rc0286fd65a534f87abf57b37d39d0a81.scope": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory Oct 01 20:58:04 minikube kubelet[701]: W1001 20:58:04.876217 701 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-rc0286fd65a534f87abf57b37d39d0a81.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-rc0286fd65a534f87abf57b37d39d0a81.scope: no such file or directory Oct 01 20:58:05 minikube kubelet[701]: I1001 20:58:05.171433 701 request.go:621] Throttling request took 1.005122602s, request: GET:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&limit=500&resourceVersion=0 Oct 01 20:58:06 minikube kubelet[701]: W1001 20:58:06.164830 701 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zd2cp through plugin: invalid network status for Oct 01 20:58:06 minikube kubelet[701]: I1001 20:58:06.262793 701 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0e2984a06c67cb9fe825d36aa59c5e87a97936b3d8dd12a2670f00261b95ad3d Oct 01 20:58:06 minikube kubelet[701]: I1001 20:58:06.262988 701 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ecfa591e4c922db7771f4269a47c3005b3ad0c24e2bc56e99a638b531d51ed49 Oct 01 20:58:06 minikube kubelet[701]: E1001 20:58:06.263560 701 pod_workers.go:191] Error syncing pod 84c9e408-6f1f-4018-8818-8c347b123f33 ("storage-provisioner_kube-system(84c9e408-6f1f-4018-8818-8c347b123f33)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(84c9e408-6f1f-4018-8818-8c347b123f33)" Oct 01 20:58:06 minikube 
kubelet[701]: W1001 20:58:06.412261 701 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-6df46599c7-w87sn through plugin: invalid network status for Oct 01 20:58:06 minikube kubelet[701]: W1001 20:58:06.413330 701 pod_container_deletor.go:77] Container "9c8e9b59ee3f696bbd552e27abcc8c6f8bbc5fdb115e9074fd80fda728ce45c5" not found in pod's containers Oct 01 20:58:06 minikube kubelet[701]: W1001 20:58:06.417737 701 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zd2cp through plugin: invalid network status for Oct 01 20:58:07 minikube kubelet[701]: W1001 20:58:07.441437 701 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-6df46599c7-w87sn through plugin: invalid network status for Oct 01 20:58:07 minikube kubelet[701]: W1001 20:58:07.450048 701 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zd2cp through plugin: invalid network status for Oct 01 20:58:07 minikube kubelet[701]: I1001 20:58:07.461406 701 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ecfa591e4c922db7771f4269a47c3005b3ad0c24e2bc56e99a638b531d51ed49 Oct 01 20:58:07 minikube kubelet[701]: E1001 20:58:07.461956 701 pod_workers.go:191] Error syncing pod 84c9e408-6f1f-4018-8818-8c347b123f33 ("storage-provisioner_kube-system(84c9e408-6f1f-4018-8818-8c347b123f33)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(84c9e408-6f1f-4018-8818-8c347b123f33)" Oct 01 20:58:07 minikube kubelet[701]: E1001 20:58:07.600276 701 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 01 20:58:07 minikube kubelet[701]: E1001 20:58:07.600439 701 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 01 20:58:17 minikube kubelet[701]: E1001 20:58:17.614599 701 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 01 20:58:17 minikube kubelet[701]: E1001 20:58:17.614646 701 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 01 20:58:22 minikube kubelet[701]: I1001 20:58:22.392230 701 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ecfa591e4c922db7771f4269a47c3005b3ad0c24e2bc56e99a638b531d51ed49 Oct 01 20:58:27 minikube kubelet[701]: E1001 20:58:27.628127 701 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 01 20:58:27 minikube kubelet[701]: E1001 20:58:27.628178 701 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 01 20:58:37 minikube kubelet[701]: E1001 20:58:37.643070 701 summary_sys_containers.go:47] Failed to get system container 
stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 01 20:58:37 minikube kubelet[701]: E1001 20:58:37.643510 701 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 01 20:58:47 minikube kubelet[701]: E1001 20:58:47.656046 701 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 01 20:58:47 minikube kubelet[701]: E1001 20:58:47.656066 701 helpers.go:680] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 01 20:58:57 minikube kubelet[701]: I1001 20:58:57.632195 701 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 0c38f013a09fb39f689fed3c0c1cb80647698b404e5f1163e7f63cfac9611e58 ==> storage-provisioner [0f3366b353ae] <== I1001 20:58:22.543688 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I1001 20:58:39.946789 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1001 20:58:39.947026 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_ef826feb-3814-4d63-9150-4c6790321ae1! I1001 20:58:39.947002 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5a5262fa-7b8b-4465-b8c5-6f3ad8807feb", APIVersion:"v1", ResourceVersion:"76106", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_ef826feb-3814-4d63-9150-4c6790321ae1 became leader I1001 20:58:40.047349 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_ef826feb-3814-4d63-9150-4c6790321ae1! ==> storage-provisioner [ecfa591e4c92] <== F1001 20:58:05.108855 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": x509: certificate signed by unknown authority ```
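The relevant failure in the dump is the second `storage-provisioner` container, which exits with the x509 error and then sits in CrashLoopBackOff while `kubectl apply` is refused the immutable-field update shown at the top of this issue. For anyone triaging the same thing, a couple of standard kubectl invocations should surface the same diff (the paths are the ones from the error message; this is just a sketch of how to inspect it, not something from the original logs):

```
# Show the live pod spec that kubectl apply refuses to mutate; the injected
# gcp-creds volume and GOOGLE_* env vars should be visible here.
kubectl -n kube-system get pod storage-provisioner -o yaml

# Server-side diff of the shipped addon manifest against the live object,
# run inside the minikube node with the same kubectl/kubeconfig the error
# message references.
minikube ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.18.3/kubectl \
  diff -f /etc/kubernetes/addons/storage-provisioner.yaml
```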
matthewmichihara commented 4 years ago

I'm also seeing this on a minikube binary built from the latest HEAD, downloaded from https://storage.googleapis.com/minikube-builds/master/minikube-darwin-amd64:

```
$ ./minikube version
minikube version: v1.13.1
commit: 0fc0c82337876cff676790bec417f7d674e63eb9
```
sharifelgamal commented 4 years ago

For some reason, the gcp-auth addon is trying to inject credentials into the storage-provisioner pod on cluster restart even though I've explicitly excluded the kube-system namespace entirely. I'll look into why that's happening.
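In the meantime, you can check what the webhook is actually matching. The exclusion should show up as a namespaceSelector on the addon's MutatingWebhookConfiguration; something like the following should reveal whether kube-system is really being skipped (illustrative only, and the exact object name may differ, so list them all):

```
# Dump the mutating webhook configs and look for the gcp-auth entry; if its
# namespaceSelector doesn't exclude kube-system, pods there will keep being
# mutated on restart.
kubectl get mutatingwebhookconfigurations -o yaml | grep -B3 -A8 namespaceSelector
```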