kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Minikube gcp-auth addon causes 'storage-provisioner' errors #9392

Closed · matthewmichihara closed this issue 3 years ago

matthewmichihara commented 3 years ago

I'm running a build with the fix for https://github.com/kubernetes/minikube/issues/9371, but still encountering this error:

$ ./minikube version
minikube version: v1.13.1
commit: bc3db0d76816d4a8068b9a7796def3c7572cb595
$ ./minikube delete --all --purge
🔥  Deleting "minikube" in docker ...
🔥  Removing /Users/michihara/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/Users/michihara/.minikube]
$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.19.2 preload ...
    > preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥  Creating docker container (CPUs=2, Memory=3892MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ minikube addons enable gcp-auth
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟  The 'gcp-auth' addon is enabled
$ ./minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
❗  Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
endpoints/k8s.io-minikube-hostpath unchanged

stderr:
The Pod "storage-provisioner" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
  core.PodSpec{
    Volumes: []core.Volume{
        {Name: "tmp", VolumeSource: core.VolumeSource{HostPath: &core.HostPathVolumeSource{Path: "/tmp", Type: &"Directory"}}},
        {Name: "storage-provisioner-token-6jk6m", VolumeSource: core.VolumeSource{Secret: &core.SecretVolumeSource{SecretName: "storage-provisioner-token-6jk6m", DefaultMode: &420}}},
-       {
-           Name: "gcp-creds",
-           VolumeSource: core.VolumeSource{
-               HostPath: &core.HostPathVolumeSource{Path: "/var/lib/minikube/google_application_credentials.json", Type: &"File"},
-           },
-       },
    },
    InitContainers: nil,
    Containers: []core.Container{
        {
            ... // 5 identical fields
            Ports:   nil,
            EnvFrom: nil,
-           Env: []core.EnvVar{
-               {Name: "GOOGLE_APPLICATION_CREDENTIALS", Value: "/google-app-creds.json"},
-               {Name: "PROJECT_ID", Value: "chelseamarket"},
-               {Name: "GCP_PROJECT", Value: "chelseamarket"},
-               {Name: "GCLOUD_PROJECT", Value: "chelseamarket"},
-               {Name: "GOOGLE_CLOUD_PROJECT", Value: "chelseamarket"},
-               {Name: "CLOUDSDK_CORE_PROJECT", Value: "chelseamarket"},
-           },
+           Env:       nil,
            Resources: core.ResourceRequirements{},
            VolumeMounts: []core.VolumeMount{
                {Name: "tmp", MountPath: "/tmp"},
                {Name: "storage-provisioner-token-6jk6m", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
-               {Name: "gcp-creds", ReadOnly: true, MountPath: "/google-app-creds.json"},
            },
            VolumeDevices: nil,
            LivenessProbe: nil,
            ... // 10 identical fields
        },
    },
    EphemeralContainers: nil,
    RestartPolicy:       "Always",
    ... // 25 identical fields
  }

]
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟  Enabled addons: default-storageclass, gcp-auth
🏄  Done! kubectl is now configured to use "minikube" by default

Optional: Full output of `minikube logs` command:

``` ==> Docker <== -- Logs begin at Mon 2020-10-05 16:49:59 UTC, end at Mon 2020-10-05 16:58:00 UTC. -- Oct 05 16:49:59 minikube systemd[1]: Starting Docker Application Container Engine... Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.446989463Z" level=info msg="Starting up" Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.449484263Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.449545275Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.449563748Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.449570879Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.452003303Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.452056688Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.452092819Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.452101612Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.466150137Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.482619200Z" level=info msg="Loading containers: start." Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.643618282Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.708049322Z" level=info msg="Loading containers: done." Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.733585013Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.733807775Z" level=info msg="Daemon has completed initialization" Oct 05 16:49:59 minikube systemd[1]: Started Docker Application Container Engine. 
Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.762546796Z" level=info msg="API listen on /var/run/docker.sock" Oct 05 16:49:59 minikube dockerd[162]: time="2020-10-05T16:49:59.762689678Z" level=info msg="API listen on [::]:2376" Oct 05 16:50:14 minikube dockerd[162]: time="2020-10-05T16:50:14.204374463Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Oct 05 16:52:44 minikube dockerd[162]: time="2020-10-05T16:52:44.100212296Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Oct 05 16:52:44 minikube dockerd[162]: time="2020-10-05T16:52:44.201960238Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID c9af8a135eba8 gcr.io/k8s-minikube/gcp-auth-webhook@sha256:af4ba05354a42a4e93ad27209c64eba0e004e3265345fc5267d78f89a7bffda8 5 minutes ago Running gcp-auth 0 776a49667eb3e 3e5da98dae217 bad58561c4be7 7 minutes ago Running storage-provisioner 2 de39b705eccae 47f5ca35bed8f bad58561c4be7 7 minutes ago Exited storage-provisioner 1 de39b705eccae fd2e4ebc4509c d373dd5a8593a 7 minutes ago Running kube-proxy 1 fa7880d70d9e3 a9e79d41f1c8c bfe3a36ebd252 7 minutes ago Running coredns 1 cdfc57638bc94 5aade0dfa8ee1 2f32d66b884f8 7 minutes ago Running kube-scheduler 1 a6b79f529b38f 5aed2b99bf381 8603821e1a7a5 7 minutes ago Running kube-controller-manager 1 cb3614761c7b9 27e6884fa4288 607331163122e 7 minutes ago Running kube-apiserver 1 5022175263bb0 9e188953975b0 0369cf4303ffd 7 minutes ago Running etcd 1 de543950eb617 feb4ab880c442 jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 8 minutes ago Exited patch 0 58196f9be98cc 0d8db80378ac7 jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 8 minutes ago Exited create 0 b53085deb4c49 46a5f3e3be273 bfe3a36ebd252 8 minutes ago Exited coredns 0 5d348573edddd 7765a9fbc9d08 d373dd5a8593a 8 minutes ago Exited kube-proxy 0 01cb7258f2010 29e1bbb9b371f 0369cf4303ffd 9 minutes ago Exited etcd 0 7a0b3bd5caea1 5d65180d88fd8 607331163122e 9 minutes ago Exited kube-apiserver 0 68d5323ee8674 65ad5a93949b5 2f32d66b884f8 9 minutes ago Exited kube-scheduler 0 ca64be62cff0b e1624656021b1 8603821e1a7a5 9 minutes ago Exited kube-controller-manager 0 6f86d912cf62d ==> coredns [46a5f3e3be27] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s ==> coredns [a9e79d41f1c8] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d E1005 16:50:12.167428 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1005 16:50:12.169681 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1005 16:50:12.170212 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to 
list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1005 16:50:13.345683 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1005 16:50:13.552396 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1005 16:50:13.756457 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=bc3db0d76816d4a8068b9a7796def3c7572cb595 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_10_05T09_49_08_0700 minikube.k8s.io/version=v1.13.1 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 05 Oct 2020 16:49:05 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Mon, 05 Oct 2020 16:57:51 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Mon, 05 Oct 2020 16:53:11 +0000 Mon, 05 Oct 2020 16:49:01 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Mon, 05 Oct 2020 16:53:11 +0000 Mon, 05 Oct 2020 16:49:01 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Mon, 05 Oct 2020 16:53:11 +0000 Mon, 05 Oct 2020 16:49:01 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Mon, 05 Oct 2020 16:53:11 +0000 Mon, 05 Oct 2020 16:49:19 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4035056Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4035056Ki pods: 110 System Info: Machine ID: 6b7d8d475cca4094a65aff8f895a866c System UUID: fb5c4459-a765-4e32-b611-425a0b691bb2 Boot ID: e087e526-77d9-49e4-b7a5-b1e02e486261 Kernel Version: 4.19.76-linuxkit OS Image: Ubuntu 20.04 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.19.2 Kube-Proxy Version: v1.19.2 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- gcp-auth gcp-auth-74f9689fd7-lfln7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m21s kube-system coredns-f9fd979d6-5bhwq 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 8m47s kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m52s kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 8m52s kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 8m52s kube-system kube-proxy-pr669 0 (0%) 0 
(0%) 0 (0%) 0 (0%) 8m47s kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 8m52s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m52s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (16%) 0 (0%) memory 70Mi (1%) 170Mi (4%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientMemory 9m2s (x4 over 9m2s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m2s (x3 over 9m2s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m2s (x4 over 9m2s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 9m2s kubelet Updated Node Allocatable limit across pods Normal Starting 9m2s kubelet Starting kubelet. Normal NodeHasSufficientMemory 8m53s kubelet Node minikube status is now: NodeHasSufficientMemory Normal Starting 8m53s kubelet Starting kubelet. Normal NodeHasNoDiskPressure 8m53s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 8m53s kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 8m52s kubelet Updated Node Allocatable limit across pods Normal Starting 8m46s kube-proxy Starting kube-proxy. Warning readOnlySysFS 8m46s kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) Normal NodeReady 8m42s kubelet Node minikube status is now: NodeReady Normal Starting 7m56s kubelet Starting kubelet. Normal NodeHasSufficientMemory 7m56s (x8 over 7m56s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 7m56s (x8 over 7m56s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 7m56s (x7 over 7m56s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 7m56s kubelet Updated Node Allocatable limit across pods Warning readOnlySysFS 7m47s kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) Normal Starting 7m47s kube-proxy Starting kube-proxy. ==> dmesg <== [Oct 5 16:32] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A [ +0.000757] virtio-pci 0000:00:01.0: PCI INT A: no GSI [ +0.001514] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A [ +0.000795] virtio-pci 0000:00:02.0: PCI INT A: no GSI [ +0.002535] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A [ +0.000889] virtio-pci 0000:00:07.0: PCI INT A: no GSI [ +0.055061] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds). [ +0.614402] i8042: Can't read CTR while initializing i8042 [ +0.000670] i8042: probe of i8042 failed with error -5 [ +0.003384] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.000958] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.240611] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ +0.021464] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ +3.341252] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! 
[ +0.076143] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! ==> etcd [29e1bbb9b371] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-10-05 16:49:00.671093 I | etcdmain: etcd Version: 3.4.13 2020-10-05 16:49:00.671135 I | etcdmain: Git SHA: ae9734ed2 2020-10-05 16:49:00.671138 I | etcdmain: Go Version: go1.12.17 2020-10-05 16:49:00.671140 I | etcdmain: Go OS/Arch: linux/amd64 2020-10-05 16:49:00.671153 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-10-05 16:49:00.671247 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-05 16:49:00.671863 I | embed: name = minikube 2020-10-05 16:49:00.671890 I | embed: data dir = /var/lib/minikube/etcd 2020-10-05 16:49:00.671894 I | embed: member dir = /var/lib/minikube/etcd/member 2020-10-05 16:49:00.671896 I | embed: heartbeat = 100ms 2020-10-05 16:49:00.671898 I | embed: election = 1000ms 2020-10-05 16:49:00.671900 I | embed: snapshot count = 10000 2020-10-05 16:49:00.671909 I | embed: advertise client URLs = https://192.168.49.2:2379 2020-10-05 16:49:00.679970 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be raft2020/10/05 16:49:00 INFO: aec36adc501070cc switched to configuration voters=() raft2020/10/05 16:49:00 INFO: aec36adc501070cc became follower at term 0 raft2020/10/05 16:49:00 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/10/05 16:49:00 INFO: aec36adc501070cc became follower at term 1 raft2020/10/05 16:49:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-10-05 16:49:00.693212 W | auth: simple token is not cryptographically signed 2020-10-05 16:49:00.768774 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided] 2020-10-05 16:49:00.769851 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10) raft2020/10/05 16:49:00 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-10-05 16:49:00.770507 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be 2020-10-05 16:49:00.773125 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-05 16:49:00.773264 I | embed: listening for peers on 192.168.49.2:2380 2020-10-05 16:49:00.787568 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/10/05 16:49:00 INFO: aec36adc501070cc is starting a new election at term 1 raft2020/10/05 16:49:00 INFO: aec36adc501070cc became candidate at term 2 raft2020/10/05 16:49:00 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2 raft2020/10/05 16:49:00 INFO: aec36adc501070cc became leader at term 2 raft2020/10/05 16:49:00 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2 2020-10-05 16:49:00.893306 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2020-10-05 16:49:00.893636 I | embed: ready to serve client requests 2020-10-05 16:49:00.894722 I | etcdserver: setting up the initial cluster version to 3.4 2020-10-05 16:49:00.962218 I | embed: serving client requests on 127.0.0.1:2379 2020-10-05 16:49:00.962506 I | embed: ready to serve client requests 2020-10-05 16:49:00.966040 I | embed: serving client requests on 192.168.49.2:2379 2020-10-05 16:49:00.972399 N | etcdserver/membership: set the initial cluster version to 3.4 2020-10-05 16:49:00.979426 I | etcdserver/api: enabled capabilities for version 3.4 2020-10-05 16:49:15.556333 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:49:18.394552 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:49:28.395087 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:49:38.395245 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:49:43.349727 N | pkg/osutil: received terminated signal, shutting down... WARNING: 2020/10/05 16:49:43 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... WARNING: 2020/10/05 16:49:43 grpc: addrConn.createTransport failed to connect to {192.168.49.2:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.49.2:2379: connect: connection refused". Reconnecting... 
2020-10-05 16:49:43.370044 I | etcdserver: skipped leadership transfer for single voting member cluster ==> etcd [9e188953975b] <== 2020-10-05 16:50:06.798130 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-05 16:50:06.798277 I | embed: listening for metrics on http://127.0.0.1:2381 2020-10-05 16:50:06.798341 I | embed: listening for peers on 192.168.49.2:2380 raft2020/10/05 16:50:08 INFO: aec36adc501070cc is starting a new election at term 2 raft2020/10/05 16:50:08 INFO: aec36adc501070cc became candidate at term 3 raft2020/10/05 16:50:08 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3 raft2020/10/05 16:50:08 INFO: aec36adc501070cc became leader at term 3 raft2020/10/05 16:50:08 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3 2020-10-05 16:50:08.288188 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2020-10-05 16:50:08.288267 I | embed: ready to serve client requests 2020-10-05 16:50:08.288288 I | embed: ready to serve client requests 2020-10-05 16:50:08.290018 I | embed: serving client requests on 192.168.49.2:2379 2020-10-05 16:50:08.290122 I | embed: serving client requests on 127.0.0.1:2379 2020-10-05 16:50:23.291736 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:50:31.761964 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:50:41.762478 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:50:51.762775 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:01.762335 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:11.763038 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:21.763787 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:31.762898 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:41.761432 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:51:51.762271 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:01.762229 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:11.763250 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:21.762466 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:31.764366 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:41.763866 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:52:51.763506 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:01.764893 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:11.763338 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:21.765729 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:31.763316 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:41.763541 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:53:51.764587 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:01.764210 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:11.764902 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:21.764005 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:31.765305 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:41.764663 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:54:51.764269 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:01.765518 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:11.764574 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:21.765993 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:31.765120 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:41.764880 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:55:51.765593 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:01.765837 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:11.765698 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:21.765518 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:31.766421 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:41.766529 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:56:51.766335 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:01.767213 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:11.768134 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:21.767317 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:31.768458 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:41.767832 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:57:51.768199 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-05 16:58:01.767735 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> kernel <== 16:58:04 up 25 min, 0 users, load average: 0.54, 0.73, 0.58 Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04 LTS" ==> kube-apiserver [27e6884fa428] <== E1005 16:50:11.157228 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service I1005 16:50:11.168275 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I1005 16:50:11.240599 1 cache.go:39] Caches are synced for autoregister controller I1005 16:50:11.241142 1 cache.go:39] Caches are synced for AvailableConditionController controller I1005 16:50:12.031845 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1005 16:50:12.031913 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1005 16:50:12.039976 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. 
I1005 16:50:12.578469 1 controller.go:606] quota admission added evaluator for: serviceaccounts I1005 16:50:12.591305 1 controller.go:606] quota admission added evaluator for: deployments.apps I1005 16:50:12.626765 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I1005 16:50:12.640007 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1005 16:50:12.645622 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I1005 16:50:13.991136 1 controller.go:606] quota admission added evaluator for: endpoints I1005 16:50:17.975393 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I1005 16:50:24.051117 1 trace.go:205] Trace[157413034]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:UPDATE,UID:26c258cf-f7ab-4b04-9a59-3f412e1f9761 (05-Oct-2020 16:50:14.050) (total time: 10000ms): Trace[157413034]: [10.00038986s] [10.00038986s] END W1005 16:50:24.051266 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.172.84:443: i/o timeout I1005 16:50:24.072840 1 trace.go:205] Trace[1347558046]: "GuaranteedUpdate etcd3" type:*core.Pod (05-Oct-2020 16:50:14.045) (total time: 10027ms): Trace[1347558046]: [10.027057583s] [10.027057583s] END I1005 16:50:24.073074 1 trace.go:205] Trace[189572777]: "Patch" url:/api/v1/namespaces/kube-system/pods/storage-provisioner,user-agent:kubectl/v1.19.2 (linux/amd64) kubernetes/f574309,client:127.0.0.1 (05-Oct-2020 16:50:14.045) (total time: 10027ms): Trace[189572777]: ---"About to apply patch" 10003ms (16:50:00.052) Trace[189572777]: [10.027462493s] [10.027462493s] END I1005 16:50:46.424403 1 client.go:360] parsed scheme: "passthrough" I1005 16:50:46.424464 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:50:46.424476 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:51:30.733046 1 client.go:360] parsed scheme: "passthrough" I1005 16:51:30.733175 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:51:30.733219 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:52:02.528299 1 client.go:360] parsed scheme: "passthrough" I1005 16:52:02.528438 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:52:02.528453 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:52:33.120315 1 client.go:360] parsed scheme: "passthrough" I1005 16:52:33.120420 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:52:33.120435 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:52:40.162224 1 controller.go:606] quota admission added evaluator for: replicasets.apps I1005 16:52:40.182983 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io I1005 16:53:10.351323 1 client.go:360] parsed scheme: "passthrough" I1005 16:53:10.351479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:53:10.351529 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:53:45.523490 1 client.go:360] parsed scheme: "passthrough" I1005 
16:53:45.523602 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:53:45.523617 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:54:23.290269 1 client.go:360] parsed scheme: "passthrough" I1005 16:54:23.290359 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:54:23.290373 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:54:56.821237 1 client.go:360] parsed scheme: "passthrough" I1005 16:54:56.821809 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:54:56.821954 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:55:34.733891 1 client.go:360] parsed scheme: "passthrough" I1005 16:55:34.734116 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:55:34.734217 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:56:11.334568 1 client.go:360] parsed scheme: "passthrough" I1005 16:56:11.334678 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:56:11.334694 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:56:46.863134 1 client.go:360] parsed scheme: "passthrough" I1005 16:56:46.863312 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:56:46.863366 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1005 16:57:27.647292 1 client.go:360] parsed scheme: "passthrough" I1005 16:57:27.647362 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1005 16:57:27.647375 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-apiserver [5d65180d88fd] <== W1005 16:49:51.796740 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:51.962135 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:51.965151 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:51.982675 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:51.983981 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.007588 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.014998 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.049526 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.068424 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.070641 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.092808 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.130745 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.136461 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.137738 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.188407 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.206947 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.210875 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.344846 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.361537 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.385796 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W1005 16:49:52.398138 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.417931 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.464553 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.479726 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.492909 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.506957 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.516186 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.559960 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.614131 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.630698 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.634687 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.671933 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.703086 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.723726 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.729441 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.764108 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.764659 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.788290 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.796091 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.808196 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.821936 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.824636 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.876938 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.884664 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.911094 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:52.944930 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.019180 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W1005 16:49:53.042021 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.051594 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.064740 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.079145 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.087629 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.097721 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.117025 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.135583 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.166981 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.305143 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.338677 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.364042 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1005 16:49:53.381542 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
==> kube-controller-manager [5aed2b99bf38] <== I1005 16:50:16.950329 1 shared_informer.go:240] Waiting for caches to sync for deployment I1005 16:50:17.249474 1 controllermanager.go:549] Started "disruption" I1005 16:50:17.249556 1 disruption.go:331] Starting disruption controller I1005 16:50:17.249564 1 shared_informer.go:240] Waiting for caches to sync for disruption I1005 16:50:17.404147 1 controllermanager.go:549] Started "namespace" I1005 16:50:17.404213 1 namespace_controller.go:200] Starting namespace controller I1005 16:50:17.404219 1 shared_informer.go:240] Waiting for caches to sync for namespace I1005 16:50:17.552355 1 controllermanager.go:549] Started "job" I1005 16:50:17.552535 1 job_controller.go:148] Starting job controller I1005 16:50:17.552541 1 shared_informer.go:240] Waiting for caches to sync for job I1005 16:50:17.698804 1 controllermanager.go:549] Started "bootstrapsigner" I1005 16:50:17.698876 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer E1005 16:50:17.852424 1 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W1005 16:50:17.852483 1 controllermanager.go:541] Skipping "service" I1005 16:50:17.852782 1 shared_informer.go:240] Waiting for caches to sync for resource quota W1005 16:50:17.863526 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1005 16:50:17.878686 1 shared_informer.go:247] Caches are synced for HPA I1005 16:50:17.897793 1 shared_informer.go:247] Caches are synced for taint I1005 16:50:17.897865 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1005 16:50:17.898024 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I1005 16:50:17.898110 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. 
I1005 16:50:17.898031 1 taint_manager.go:187] Starting NoExecuteTaintManager I1005 16:50:17.898511 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1005 16:50:17.898574 1 shared_informer.go:247] Caches are synced for GC I1005 16:50:17.898868 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1005 16:50:17.898904 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1005 16:50:17.899110 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1005 16:50:17.899124 1 shared_informer.go:247] Caches are synced for endpoint I1005 16:50:17.900788 1 shared_informer.go:247] Caches are synced for PV protection I1005 16:50:17.900989 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1005 16:50:17.901086 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1005 16:50:17.901834 1 shared_informer.go:247] Caches are synced for service account I1005 16:50:17.904295 1 shared_informer.go:247] Caches are synced for namespace I1005 16:50:17.949250 1 shared_informer.go:247] Caches are synced for ReplicaSet I1005 16:50:17.950137 1 shared_informer.go:247] Caches are synced for TTL I1005 16:50:17.951143 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1005 16:50:17.951855 1 shared_informer.go:247] Caches are synced for ReplicationController I1005 16:50:17.951930 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1005 16:50:17.951950 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1005 16:50:17.952598 1 shared_informer.go:247] Caches are synced for job I1005 16:50:17.971771 1 shared_informer.go:247] Caches are synced for endpoint_slice I1005 16:50:17.990298 1 shared_informer.go:247] Caches are synced for attach detach I1005 16:50:17.990996 1 shared_informer.go:247] Caches are synced for daemon sets I1005 16:50:17.999124 1 shared_informer.go:247] Caches are synced for stateful set I1005 16:50:18.007601 1 shared_informer.go:247] Caches are synced for expand I1005 16:50:18.048972 1 shared_informer.go:247] Caches are synced for persistent volume I1005 16:50:18.049709 1 shared_informer.go:247] Caches are synced for disruption I1005 16:50:18.049735 1 disruption.go:339] Sending events to api server. I1005 16:50:18.050482 1 shared_informer.go:247] Caches are synced for deployment I1005 16:50:18.053950 1 shared_informer.go:247] Caches are synced for PVC protection I1005 16:50:18.148126 1 shared_informer.go:247] Caches are synced for resource quota I1005 16:50:18.153311 1 shared_informer.go:247] Caches are synced for resource quota I1005 16:50:18.216606 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1005 16:50:18.474824 1 shared_informer.go:247] Caches are synced for garbage collector I1005 16:50:18.475202 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I1005 16:50:18.516966 1 shared_informer.go:247] Caches are synced for garbage collector I1005 16:52:40.168603 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-74f9689fd7 to 1" I1005 16:52:40.178613 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-74f9689fd7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-74f9689fd7-lfln7" I1005 16:52:43.994698 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set gcp-auth-5c58dc7db8 to 0" I1005 16:52:44.004350 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5c58dc7db8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: gcp-auth-5c58dc7db8-bl5hm" ==> kube-controller-manager [e1624656021b] <== I1005 16:49:14.444625 1 shared_informer.go:240] Waiting for caches to sync for deployment I1005 16:49:14.570240 1 controllermanager.go:549] Started "bootstrapsigner" I1005 16:49:14.570605 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer W1005 16:49:14.583799 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1005 16:49:14.618962 1 shared_informer.go:247] Caches are synced for HPA I1005 16:49:14.620699 1 shared_informer.go:247] Caches are synced for taint I1005 16:49:14.620770 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1005 16:49:14.620803 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I1005 16:49:14.620827 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
I1005 16:49:14.620939 1 taint_manager.go:187] Starting NoExecuteTaintManager I1005 16:49:14.621067 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1005 16:49:14.621084 1 shared_informer.go:247] Caches are synced for endpoint_slice I1005 16:49:14.621349 1 shared_informer.go:247] Caches are synced for persistent volume I1005 16:49:14.621805 1 shared_informer.go:247] Caches are synced for daemon sets I1005 16:49:14.623813 1 shared_informer.go:247] Caches are synced for TTL I1005 16:49:14.628564 1 shared_informer.go:247] Caches are synced for expand I1005 16:49:14.640401 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1005 16:49:14.648020 1 shared_informer.go:247] Caches are synced for deployment I1005 16:49:14.648768 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pr669" I1005 16:49:14.653226 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1" I1005 16:49:14.654854 1 shared_informer.go:247] Caches are synced for ReplicaSet I1005 16:49:14.668589 1 shared_informer.go:247] Caches are synced for PV protection I1005 16:49:14.669061 1 shared_informer.go:247] Caches are synced for disruption I1005 16:49:14.669074 1 disruption.go:339] Sending events to api server. I1005 16:49:14.669557 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1005 16:49:14.669764 1 shared_informer.go:247] Caches are synced for job I1005 16:49:14.670776 1 shared_informer.go:247] Caches are synced for PVC protection I1005 16:49:14.671953 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1005 16:49:14.672230 1 shared_informer.go:247] Caches are synced for stateful set I1005 16:49:14.672570 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1005 16:49:14.672774 1 shared_informer.go:247] Caches are synced for GC I1005 16:49:14.672848 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1005 16:49:14.673476 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1005 16:49:14.673501 1 shared_informer.go:247] Caches are synced for ReplicationController E1005 16:49:14.675171 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again E1005 16:49:14.676229 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again I1005 16:49:14.693570 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-5bhwq" E1005 16:49:14.695868 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I1005 
16:49:14.719294 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1005 16:49:14.719622 1 shared_informer.go:247] Caches are synced for endpoint I1005 16:49:14.771124 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1005 16:49:14.804177 1 shared_informer.go:247] Caches are synced for namespace I1005 16:49:14.821833 1 shared_informer.go:247] Caches are synced for service account I1005 16:49:14.825340 1 shared_informer.go:247] Caches are synced for resource quota I1005 16:49:14.870276 1 shared_informer.go:247] Caches are synced for attach detach I1005 16:49:14.925165 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1005 16:49:15.225552 1 shared_informer.go:247] Caches are synced for garbage collector I1005 16:49:15.270590 1 shared_informer.go:247] Caches are synced for garbage collector I1005 16:49:15.270622 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1005 16:49:15.571014 1 request.go:645] Throttling request took 1.049187541s, request: GET:https://192.168.49.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s I1005 16:49:16.271548 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1005 16:49:16.271592 1 shared_informer.go:247] Caches are synced for resource quota I1005 16:49:19.656304 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. I1005 16:49:19.656769 1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner" I1005 16:49:23.018088 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-tbxpd" I1005 16:49:23.064012 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5c58dc7db8 to 1" I1005 16:49:23.078949 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5c58dc7db8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5c58dc7db8-bl5hm" I1005 16:49:23.094728 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-crn8z" I1005 16:49:29.286681 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I1005 16:49:31.331786 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" ==> kube-proxy [7765a9fbc9d0] <== I1005 16:49:15.424787 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1005 16:49:15.424869 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1005 16:49:15.443730 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1005 16:49:15.444083 1 server_others.go:186] Using iptables Proxier. 
W1005 16:49:15.444110 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1005 16:49:15.444114 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1005 16:49:15.444855 1 server.go:650] Version: v1.19.2 I1005 16:49:15.446083 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1005 16:49:15.447116 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1005 16:49:15.447432 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1005 16:49:15.447767 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1005 16:49:15.450305 1 config.go:315] Starting service config controller I1005 16:49:15.450333 1 shared_informer.go:240] Waiting for caches to sync for service config I1005 16:49:15.451032 1 config.go:224] Starting endpoint slice config controller I1005 16:49:15.451058 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1005 16:49:15.551036 1 shared_informer.go:247] Caches are synced for service config I1005 16:49:15.551218 1 shared_informer.go:247] Caches are synced for endpoint slice config ==> kube-proxy [fd2e4ebc4509] <== I1005 16:50:13.953285 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1005 16:50:13.953716 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1005 16:50:14.142401 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1005 16:50:14.144189 1 server_others.go:186] Using iptables Proxier. W1005 16:50:14.144771 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1005 16:50:14.145106 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1005 16:50:14.148714 1 server.go:650] Version: v1.19.2 I1005 16:50:14.168042 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1005 16:50:14.168945 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1005 16:50:14.169506 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1005 16:50:14.170217 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1005 16:50:14.171245 1 config.go:315] Starting service config controller I1005 16:50:14.171262 1 shared_informer.go:240] Waiting for caches to sync for service config I1005 16:50:14.171278 1 config.go:224] Starting endpoint slice config controller I1005 16:50:14.171280 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1005 16:50:14.271489 1 shared_informer.go:247] Caches are synced for service config I1005 16:50:14.272700 1 shared_informer.go:247] Caches are synced for endpoint slice config ==> kube-scheduler [5aade0dfa8ee] <== I1005 16:50:07.074302 1 registry.go:173] Registering SelectorSpread plugin I1005 16:50:07.074359 1 registry.go:173] Registering SelectorSpread plugin I1005 16:50:07.707743 1 serving.go:331] Generated self-signed cert in-memory I1005 16:50:11.175418 1 registry.go:173] Registering SelectorSpread plugin I1005 16:50:11.175448 1 registry.go:173] Registering SelectorSpread plugin I1005 16:50:11.180178 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController 
I1005 16:50:11.180307 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController I1005 16:50:11.180438 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1005 16:50:11.180478 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1005 16:50:11.182110 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1005 16:50:11.182156 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1005 16:50:11.184342 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1005 16:50:11.184526 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1005 16:50:11.280746 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController I1005 16:50:11.281583 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I1005 16:50:11.282313 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kube-scheduler [65ad5a93949b] <== I1005 16:49:00.774459 1 registry.go:173] Registering SelectorSpread plugin I1005 16:49:00.774506 1 registry.go:173] Registering SelectorSpread plugin I1005 16:49:01.393279 1 serving.go:331] Generated self-signed cert in-memory W1005 16:49:05.240237 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1005 16:49:05.240278 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1005 16:49:05.240292 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1005 16:49:05.240296 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1005 16:49:05.260564 1 registry.go:173] Registering SelectorSpread plugin I1005 16:49:05.260598 1 registry.go:173] Registering SelectorSpread plugin I1005 16:49:05.263392 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1005 16:49:05.263662 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1005 16:49:05.276922 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1005 16:49:05.276971 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file E1005 16:49:05.277340 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1005 16:49:05.277664 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1005 16:49:05.277786 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1005 16:49:05.277904 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1005 16:49:05.277351 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1005 16:49:05.277394 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1005 16:49:05.277461 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1005 16:49:05.278030 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1005 16:49:05.278057 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1005 16:49:05.278224 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" 
cannot list resource "nodes" in API group "" at the cluster scope E1005 16:49:05.278340 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1005 16:49:05.278593 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1005 16:49:05.278960 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1005 16:49:06.151283 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1005 16:49:06.242968 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1005 16:49:06.274123 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1005 16:49:06.304703 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1005 16:49:06.346704 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1005 16:49:06.358289 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1005 16:49:06.378590 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1005 16:49:06.485327 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1005 16:49:06.486150 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: 
User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1005 16:49:06.517084 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" I1005 16:49:09.377176 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Mon 2020-10-05 16:49:59 UTC, end at Mon 2020-10-05 16:58:10 UTC. -- Oct 05 16:50:11 minikube kubelet[737]: I1005 16:50:11.158757 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-52lzx" (UniqueName: "kubernetes.io/secret/ca248a56-b86f-48d5-a922-ba034acc23ae-coredns-token-52lzx") pod "coredns-f9fd979d6-5bhwq" (UID: "ca248a56-b86f-48d5-a922-ba034acc23ae") Oct 05 16:50:11 minikube kubelet[737]: I1005 16:50:11.158768 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/10affcb7-5ba8-4781-bf42-b6d085d58017-lib-modules") pod "kube-proxy-pr669" (UID: "10affcb7-5ba8-4781-bf42-b6d085d58017") Oct 05 16:50:11 minikube kubelet[737]: I1005 16:50:11.158777 737 reconciler.go:157] Reconciler: start to sync state Oct 05 16:50:11 minikube kubelet[737]: I1005 16:50:11.178478 737 kubelet_node_status.go:108] Node minikube was previously registered Oct 05 16:50:11 minikube kubelet[737]: I1005 16:50:11.178701 737 kubelet_node_status.go:73] Successfully registered node minikube Oct 05 16:50:11 minikube kubelet[737]: W1005 16:50:11.904374 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5c58dc7db8-bl5hm through plugin: invalid network status for Oct 05 16:50:11 minikube kubelet[737]: W1005 16:50:11.921913 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-5bhwq through plugin: invalid network status for Oct 05 16:50:12 minikube kubelet[737]: W1005 16:50:12.085025 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-5bhwq through plugin: invalid network status for Oct 05 16:50:12 minikube kubelet[737]: W1005 16:50:12.120704 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5c58dc7db8-bl5hm through plugin: invalid network status for Oct 05 16:50:12 minikube kubelet[737]: E1005 16:50:12.269027 737 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 05 16:50:12 minikube kubelet[737]: E1005 16:50:12.269574 737 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/10affcb7-5ba8-4781-bf42-b6d085d58017-kube-proxy podName:10affcb7-5ba8-4781-bf42-b6d085d58017 nodeName:}" failed. No retries permitted until 2020-10-05 16:50:12.76954889 +0000 UTC m=+7.819001360 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/10affcb7-5ba8-4781-bf42-b6d085d58017-kube-proxy\") pod \"kube-proxy-pr669\" (UID: \"10affcb7-5ba8-4781-bf42-b6d085d58017\") : failed to sync configmap cache: timed out waiting for the condition" Oct 05 16:50:12 minikube kubelet[737]: E1005 16:50:12.269077 737 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-6jk6m: failed to sync secret cache: timed out waiting for the condition Oct 05 16:50:12 minikube kubelet[737]: E1005 16:50:12.269890 737 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/9e200136-8297-45a4-83e4-f99873d02c1e-storage-provisioner-token-6jk6m podName:9e200136-8297-45a4-83e4-f99873d02c1e nodeName:}" failed. No retries permitted until 2020-10-05 16:50:12.769874035 +0000 UTC m=+7.819326492 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-6jk6m\" (UniqueName: \"kubernetes.io/secret/9e200136-8297-45a4-83e4-f99873d02c1e-storage-provisioner-token-6jk6m\") pod \"storage-provisioner\" (UID: \"9e200136-8297-45a4-83e4-f99873d02c1e\") : failed to sync secret cache: timed out waiting for the condition" Oct 05 16:50:13 minikube kubelet[737]: W1005 16:50:13.165096 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-5c58dc7db8-bl5hm through plugin: invalid network status for Oct 05 16:50:13 minikube kubelet[737]: W1005 16:50:13.195960 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-5bhwq through plugin: invalid network status for Oct 05 16:50:13 minikube kubelet[737]: W1005 16:50:13.287680 737 pod_container_deletor.go:79] Container "fa7880d70d9e346ee5304d8e653e3176f4d2c8094920c3527d2ef1e1ff095a2a" not found in pod's containers Oct 05 16:50:14 minikube kubelet[737]: I1005 16:50:14.302842 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 85a8da64b8336e23c7eb1dce471a3b7e5129a66df33cf0ca88c1bbfb5ff6605d Oct 05 16:50:14 minikube kubelet[737]: I1005 16:50:14.303146 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 47f5ca35bed8f1b5241edd4fecbfcb7f9656080df306bb701b57c1154fe756e1 Oct 05 16:50:14 minikube kubelet[737]: E1005 16:50:14.303383 737 pod_workers.go:191] Error syncing pod 9e200136-8297-45a4-83e4-f99873d02c1e ("storage-provisioner_kube-system(9e200136-8297-45a4-83e4-f99873d02c1e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e200136-8297-45a4-83e4-f99873d02c1e)" Oct 05 16:50:15 minikube kubelet[737]: I1005 16:50:15.331863 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 47f5ca35bed8f1b5241edd4fecbfcb7f9656080df306bb701b57c1154fe756e1 Oct 05 16:50:15 minikube kubelet[737]: E1005 16:50:15.332477 737 pod_workers.go:191] Error syncing pod 9e200136-8297-45a4-83e4-f99873d02c1e ("storage-provisioner_kube-system(9e200136-8297-45a4-83e4-f99873d02c1e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e200136-8297-45a4-83e4-f99873d02c1e)" Oct 05 16:50:15 minikube kubelet[737]: E1005 16:50:15.730452 737 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": 
failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 05 16:50:15 minikube kubelet[737]: E1005 16:50:15.730693 737 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 05 16:50:25 minikube kubelet[737]: E1005 16:50:25.741326 737 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 05 16:50:25 minikube kubelet[737]: E1005 16:50:25.741682 737 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 05 16:50:27 minikube kubelet[737]: I1005 16:50:27.618670 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 47f5ca35bed8f1b5241edd4fecbfcb7f9656080df306bb701b57c1154fe756e1 Oct 05 16:50:35 minikube kubelet[737]: E1005 16:50:35.755131 737 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 05 16:50:35 minikube kubelet[737]: E1005 16:50:35.755192 737 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 05 16:50:45 minikube kubelet[737]: E1005 16:50:45.778692 737 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 05 16:50:45 minikube kubelet[737]: E1005 16:50:45.779102 737 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 05 16:50:55 minikube kubelet[737]: E1005 16:50:55.797947 737 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 05 16:50:55 minikube kubelet[737]: E1005 16:50:55.798363 737 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 05 16:52:40 minikube kubelet[737]: I1005 16:52:40.182750 737 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 05 16:52:40 minikube kubelet[737]: I1005 16:52:40.318295 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-creds" (UniqueName: "kubernetes.io/host-path/dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb-gcp-creds") pod "gcp-auth-74f9689fd7-lfln7" (UID: "dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb") Oct 05 16:52:40 minikube kubelet[737]: I1005 16:52:40.318471 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb-gcp-project") pod "gcp-auth-74f9689fd7-lfln7" (UID: "dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb") Oct 05 16:52:40 minikube kubelet[737]: I1005 16:52:40.318578 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-t66wm" (UniqueName: "kubernetes.io/secret/dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb-default-token-t66wm") pod "gcp-auth-74f9689fd7-lfln7" (UID: 
"dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb") Oct 05 16:52:40 minikube kubelet[737]: I1005 16:52:40.318598 737 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb-webhook-certs") pod "gcp-auth-74f9689fd7-lfln7" (UID: "dac75025-0ddd-4fb8-aaac-7fc4c6a94fcb") Oct 05 16:52:40 minikube kubelet[737]: W1005 16:52:40.810799 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-74f9689fd7-lfln7 through plugin: invalid network status for Oct 05 16:52:40 minikube kubelet[737]: W1005 16:52:40.939393 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-74f9689fd7-lfln7 through plugin: invalid network status for Oct 05 16:52:43 minikube kubelet[737]: W1005 16:52:43.971199 737 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for gcp-auth/gcp-auth-74f9689fd7-lfln7 through plugin: invalid network status for Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.000656 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0 Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.018073 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.027135 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0 Oct 05 16:52:45 minikube kubelet[737]: E1005 16:52:45.027754 737 remote_runtime.go:329] ContainerStatus "40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0 Oct 05 16:52:45 minikube kubelet[737]: W1005 16:52:45.027838 737 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker 40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0}): failed to get container status "40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0": rpc error: code = Unknown desc = Error: No such container: 40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0 Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.027886 737 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f Oct 05 16:52:45 minikube kubelet[737]: E1005 16:52:45.028363 737 remote_runtime.go:329] ContainerStatus "ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f Oct 05 16:52:45 minikube kubelet[737]: W1005 16:52:45.028398 737 pod_container_deletor.go:52] [pod_container_deletor] DeleteContainer returned error for (id={docker ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f}): failed to get container status "ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f": rpc error: code = Unknown desc = Error: No such container: ea02ccacccbca561167cddee883b4397995fa4b758144d61566453f0b802530f Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.138136 737 
reconciler.go:196] operationExecutor.UnmountVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/37804e91-41a6-4ba1-b104-33a6a227fb53-gcp-project") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53") Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.138302 737 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-webhook-certs") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53") Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.138330 737 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-t66wm" (UniqueName: "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-default-token-t66wm") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53") Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.138332 737 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37804e91-41a6-4ba1-b104-33a6a227fb53-gcp-project" (OuterVolumeSpecName: "gcp-project") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53"). InnerVolumeSpecName "gcp-project". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.148514 737 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-default-token-t66wm" (OuterVolumeSpecName: "default-token-t66wm") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53"). InnerVolumeSpecName "default-token-t66wm". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.152725 737 operation_generator.go:788] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "37804e91-41a6-4ba1-b104-33a6a227fb53" (UID: "37804e91-41a6-4ba1-b104-33a6a227fb53"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.238974 737 reconciler.go:319] Volume detached for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-webhook-certs") on node "minikube" DevicePath "" Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.239016 737 reconciler.go:319] Volume detached for volume "default-token-t66wm" (UniqueName: "kubernetes.io/secret/37804e91-41a6-4ba1-b104-33a6a227fb53-default-token-t66wm") on node "minikube" DevicePath "" Oct 05 16:52:45 minikube kubelet[737]: I1005 16:52:45.239029 737 reconciler.go:319] Volume detached for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/37804e91-41a6-4ba1-b104-33a6a227fb53-gcp-project") on node "minikube" DevicePath "" Oct 05 16:53:05 minikube kubelet[737]: E1005 16:53:05.659011 737 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/bd02d6b56fc2ec89aecdc631c9cfd5d0e80daab07fde32ddd8e85045ba8c9218/diff" to get inode usage: stat /var/lib/docker/overlay2/bd02d6b56fc2ec89aecdc631c9cfd5d0e80daab07fde32ddd8e85045ba8c9218/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0" to get inode usage: stat /var/lib/docker/containers/40debb564c721b383ba542764c224139284753e7b020cfcb415e6d8ef2ce49e0: no such file or directory Oct 05 16:55:05 minikube kubelet[737]: W1005 16:55:05.633495 737 sysinfo.go:203] Nodes topology is not available, providing CPU topology Oct 05 16:55:05 minikube kubelet[737]: W1005 16:55:05.633694 737 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory ==> storage-provisioner [3e5da98dae21] <== I1005 16:50:27.802834 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I1005 16:50:45.216472 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1005 16:50:45.217499 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_279c6a22-9e22-48c9-9460-983dfd43d52f! I1005 16:50:45.217561 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b786f64-8199-4c42-9e78-0ebd525dfd3d", APIVersion:"v1", ResourceVersion:"614", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_279c6a22-9e22-48c9-9460-983dfd43d52f became leader I1005 16:50:45.317975 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_279c6a22-9e22-48c9-9460-983dfd43d52f! ==> storage-provisioner [47f5ca35bed8] <== F1005 16:50:14.048624 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": x509: certificate signed by unknown authority ```
matthewmichihara commented 3 years ago

@sharifelgamal were you able to reproduce the original issue?

sharifelgamal commented 3 years ago

Yeah, the original issue was reliably reproducible. The newest version of the webhook image should never apply anything to any pod in the kube-system namespace, so I'm wondering whether you're getting that image or not.

Could you make sure you have the newest gcp-auth-webhook image deployed?

`kubectl get pods -n gcp-auth` should give you three pods; could you run `kubectl describe pod <pod-name> -n gcp-auth` on the first of the three pods listed?
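
If it's easier, you can also pull the image straight off the deployment (a quick sketch; it assumes the addon's Deployment is named gcp-auth in the gcp-auth namespace, which matches the ReplicaSet names in your logs):

# assumes the addon Deployment is named gcp-auth (as suggested by the ReplicaSet names in the logs)
$ kubectl -n gcp-auth get deployment gcp-auth -o jsonpath='{.spec.template.spec.containers[0].image}'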

matthewmichihara commented 3 years ago

Yes, here's that output:

$ kubectl get pods -n gcp-auth
NAME                          READY   STATUS      RESTARTS   AGE
gcp-auth-74f9689fd7-lfln7     1/1     Running     0          31m
gcp-auth-certs-create-tbxpd   0/1     Completed   0          34m
gcp-auth-certs-patch-crn8z    0/1     Completed   0          34m
$ kubectl describe pod gcp-auth-74f9689fd7-lfln7 -n gcp-auth
Name:         gcp-auth-74f9689fd7-lfln7
Namespace:    gcp-auth
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Mon, 05 Oct 2020 09:52:40 -0700
Labels:       app=gcp-auth
              kubernetes.io/minikube-addons=gcp-auth
              pod-template-hash=74f9689fd7
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  ReplicaSet/gcp-auth-74f9689fd7
Containers:
  gcp-auth:
    Container ID:   docker://c9af8a135eba8891872322258f5d7aa6f2290f4d3a09b4984d5ef1b261eec0d0
    Image:          gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3
    Image ID:       docker-pullable://gcr.io/k8s-minikube/gcp-auth-webhook@sha256:af4ba05354a42a4e93ad27209c64eba0e004e3265345fc5267d78f89a7bffda8
    Port:           8443/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 05 Oct 2020 09:52:43 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      chelseamarket
      GCP_PROJECT:                     chelseamarket
      GCLOUD_PROJECT:                  chelseamarket
      GOOGLE_CLOUD_PROJECT:            chelseamarket
      CLOUDSDK_CORE_PROJECT:           chelseamarket
    Mounts:
      /etc/webhook/certs from webhook-certs (ro)
      /google-app-creds.json from gcp-creds (ro)
      /var/lib/minikube/google_cloud_project from gcp-project (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-t66wm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcp-auth-certs
    Optional:    false
  gcp-project:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_cloud_project
    HostPathType:  File
  default-token-t66wm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-t66wm
    Optional:    false
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  31m   default-scheduler  Successfully assigned gcp-auth/gcp-auth-74f9689fd7-lfln7 to minikube
  Normal  Pulling    31m   kubelet            Pulling image "gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3"
  Normal  Pulled     31m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3" in 2.323861663s
  Normal  Created    31m   kubelet            Created container gcp-auth
  Normal  Started    31m   kubelet            Started container gcp-auth
$ kubectl describe pod gcp-auth-certs-create-tbxpd -n gcp-auth
Name:         gcp-auth-certs-create-tbxpd
Namespace:    gcp-auth
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Mon, 05 Oct 2020 09:49:23 -0700
Labels:       controller-uid=d09d9c87-b723-443c-b22b-d613dd144d03
              job-name=gcp-auth-certs-create
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.3
IPs:
  IP:           172.17.0.3
Controlled By:  Job/gcp-auth-certs-create
Containers:
  create:
    Container ID:  docker://0d8db80378ac7a2634cf289be715a03201d92da9d49a7255afae328621c5cb19
    Image:         jettech/kube-webhook-certgen:v1.3.0
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=gcp-auth,gcp-auth.gcp-auth,gcp-auth.gcp-auth.svc
      --namespace=gcp-auth
      --secret-name=gcp-auth-certs
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 05 Oct 2020 09:49:28 -0700
      Finished:     Mon, 05 Oct 2020 09:49:28 -0700
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from minikube-gcp-auth-certs-token-8rz9b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  minikube-gcp-auth-certs-token-8rz9b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  minikube-gcp-auth-certs-token-8rz9b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       35m   default-scheduler  Successfully assigned gcp-auth/gcp-auth-certs-create-tbxpd to minikube
  Normal  Pulling         35m   kubelet            Pulling image "jettech/kube-webhook-certgen:v1.3.0"
  Normal  Pulled          35m   kubelet            Successfully pulled image "jettech/kube-webhook-certgen:v1.3.0" in 5.05384679s
  Normal  Created         35m   kubelet            Created container create
  Normal  Started         35m   kubelet            Started container create
  Normal  SandboxChanged  35m   kubelet            Pod sandbox changed, it will be killed and re-created.
$ kubectl describe pod gcp-auth-certs-patch-crn8z -n gcp-auth
Name:         gcp-auth-certs-patch-crn8z
Namespace:    gcp-auth
Priority:     0
Node:         minikube/192.168.49.2
Start Time:   Mon, 05 Oct 2020 09:49:23 -0700
Labels:       controller-uid=badb6525-ff4a-4fcc-ba42-407b5085d794
              job-name=gcp-auth-certs-patch
Annotations:  <none>
Status:       Succeeded
IP:           172.17.0.4
IPs:
  IP:           172.17.0.4
Controlled By:  Job/gcp-auth-certs-patch
Containers:
  patch:
    Container ID:  docker://feb4ab880c44298bc5733408dc34b4da960150ef397f8ad76a21b76719e12743
    Image:         jettech/kube-webhook-certgen:v1.3.0
    Image ID:      docker-pullable://jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --secret-name=gcp-auth-certs
      --namespace=gcp-auth
      --patch-validating=false
      --webhook-name=gcp-auth-webhook-cfg
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 05 Oct 2020 09:49:30 -0700
      Finished:     Mon, 05 Oct 2020 09:49:30 -0700
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from minikube-gcp-auth-certs-token-8rz9b (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  minikube-gcp-auth-certs-token-8rz9b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  minikube-gcp-auth-certs-token-8rz9b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       35m   default-scheduler  Successfully assigned gcp-auth/gcp-auth-certs-patch-crn8z to minikube
  Normal  Pulling         35m   kubelet            Pulling image "jettech/kube-webhook-certgen:v1.3.0"
  Normal  Pulled          35m   kubelet            Successfully pulled image "jettech/kube-webhook-certgen:v1.3.0" in 6.839498595s
  Normal  Created         35m   kubelet            Created container patch
  Normal  Started         35m   kubelet            Started container patch
  Normal  SandboxChanged  35m   kubelet            Pod sandbox changed, it will be killed and re-created.
matthewmichihara commented 3 years ago

I tried again and... it worked?

$ ./minikube version
minikube version: v1.13.1
commit: bc3db0d76816d4a8068b9a7796def3c7572cb595
$ ./minikube delete --all --purge
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/Users/michihara/.minikube]
$ docker system prune --all
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B

// restarted docker desktop here

$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.19.2 preload ...
    > preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥  Creating docker container (CPUs=2, Memory=3892MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ ./minikube addons enable gcp-auth
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟  The 'gcp-auth' addon is enabled
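
As an aside, for anyone following along, here's what that `gcp-auth-skip-secret` label looks like on a pod. This is just a sketch: the pod name and image are placeholders, the label key comes from the addon's message above, and the value "true" is what minikube's own storage-provisioner manifest uses later in this thread.

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds-demo   # placeholder name for illustration
  labels:
    gcp-auth-skip-secret: "true"   # tells the gcp-auth webhook not to mount credentials
spec:
  containers:
  - name: app
    image: busybox           # placeholder image for illustration
    command: ["sleep", "3600"]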
matthewmichihara commented 3 years ago

I did an additional stop/start and things look like they're working. Not sure what was causing the original issue; perhaps restarting Docker Desktop changed something?

$ ./minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟  Enabled addons: storage-provisioner, default-storageclass, gcp-auth
🏄  Done! kubectl is now configured to use "minikube" by default
matthewmichihara commented 3 years ago

I downloaded a fresh minikube build from master and gave it another try:

$ curl -Lo minikube https://storage.googleapis.com/minikube-builds/master/minikube-darwin-amd64 && chmod +x minikube
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 55.6M  100 55.6M    0     0  23.1M      0  0:00:02  0:00:02 --:--:-- 23.1M
$ ./minikube version
minikube version: v1.13.1
commit: aae778430915035086fa26a69ee74d29babebbb4
$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🏃  Updating the running docker "minikube" container ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
❗  Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
serviceaccount/storage-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
endpoints/k8s.io-minikube-hostpath unchanged

stderr:
Error from server (InternalError): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"gcp-auth-skip-secret\":\"true\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v3\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n"},"labels":{"gcp-auth-skip-secret":"true"}},"spec":{"$setElementOrder/containers":[{"name":"storage-provisioner"}],"$setElementOrder/volumes":[{"name":"tmp"}],"containers":[{"$setElementOrder/volumeMounts":[{"mountPath":"/tmp"}],"name":"storage-provisioner"}]}}
to:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
for: "/etc/kubernetes/addons/storage-provisioner.yaml": Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.114.51:443: connect: connection refused
]
🔎  Verifying gcp-auth addon...
❗  Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟  Enabled addons: default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
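
For what it's worth, that connection-refused error looks like the apiserver trying to call the gcp-auth mutating webhook (gcp-auth-mutate.k8s.io) while its pod is still coming back up after the restart. One way to see whether admission is configured to block applies while the webhook is down is to check its failure policy (a sketch; the configuration name gcp-auth-webhook-cfg is taken from the cert-patch job args above):

# assumes the webhook configuration created by the addon is named gcp-auth-webhook-cfg
$ kubectl get mutatingwebhookconfiguration gcp-auth-webhook-cfg -o jsonpath='{.webhooks[0].failurePolicy}'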

I then stopped minikube, did some cleaning, and tried again:

$ ./minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
./minikube delete --all --purge
🛑  1 nodes stopped.
$ ./minikube delete --all --purge
🔥  Deleting "minikube" in docker ...
🔥  Removing /Users/michihara/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/Users/michihara/.minikube]
$ docker system prune --all
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all images without at least one container associated to them
  - all build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
a2b3e0965d9a4d29a7a295df472872604e4eb6d9867b92e4da1e5bd835dfe6f0
0be7db88bfbe0d076763c03597188cc6fa8b6a61941b0fadddbc169947b1538c
08201d6cc3a1d07c36083cb5789256e6bef19bc1c6c0cf464893178150784623

Deleted Images:
untagged: docker/desktop-kubernetes:kubernetes-v1.16.5-cni-v0.7.5-critools-v1.15.0
untagged: docker/desktop-kubernetes@sha256:023b5fbc1f50ef1ba0c6f1c4c994d7242ccaab7f6f3ddf7934ce0517049b9708
deleted: sha256:a86647f0b376be9a76eafa9d3bec4e30e3b3aeadce9d50326e07e748918537ca
deleted: sha256:20ba2a46c5a5e9f01eed477c5da2e7021fdfee85cde6bb220aea6728a60ca00a
untagged: docker/kube-compose-installer:v0.4.25-alpha1
untagged: docker/kube-compose-installer@sha256:b82322c40b240a417fa1ec1cd8030b5a65a3693aafc37a970db07424333d8cce
deleted: sha256:2a71ac5a1359656a5a1f2ac4a3be95238bdba9ac52c6ad062a245d5fde1eae52
deleted: sha256:59cba1c3adf39b06967b5de99c33aa17a67228df7ab39ee6d037d9555ef6e68e
untagged: gcr.io/k8s-minikube/kicbase:v0.0.13-snapshot1
untagged: gcr.io/k8s-minikube/kicbase@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
deleted: sha256:90f1294ff9acdd24b50195f33fc48f1f4cc50786328bf6cfc095270df92f1c36
deleted: sha256:f1b5bf1489776f0164d979016b523b648d12c1257bfb1500563e052e9a286951
deleted: sha256:130d60ac7a2abcf00058e0d198d7ae3100f64c82770156ea373b120bd57bfdc0
deleted: sha256:ae9fa1c9c0b084125a2855454956bed0985e40ab2ed4a7508e1202ffdc2d4e0b
deleted: sha256:0674401420c89b94a6bb585936d62262cb4138c4c89d0a9026f5e61923985fd8
deleted: sha256:43c3f7071a445154468a5e51f86f9764bf6913f2e94d3f5f8f83dd053f7cee9e
deleted: sha256:0ee6a8dd60f8b4d44c94ebbf57d9ab5c809bbdb924ea83e76a72ad76ea7e0882
deleted: sha256:6624cf78734b1e94e2312bf56322dd6d2b841da1da61009799b1020f1fd93ff5
deleted: sha256:2421d68d542258cccbc66e5d3712535aff9a3e2b0c82a2597839a6ceebb30bf2
deleted: sha256:9a06a306246ba43dde0766d80dca0df27c2d34e78fef1cec9a2c983f3115e190
deleted: sha256:a340aa55b5b1ea1bdd9849477e850cd01b224d80cf0a834bcf86f0ef12610f60
deleted: sha256:250600bbfb9a7de4d79a3ec67776b51d40a1d196ef34965bec21f466be27d766
deleted: sha256:26a7d2f77f7027fec1ec07a98c8cb92818d1e61688de8d0436d780011f7b7b9e
deleted: sha256:ffaece20c2bef05335f9d58980cffb6d3d72469b84b06097bc0097c571886628
deleted: sha256:be4515c372095d88494b48c16a4d735e15cffd7b7bb31912147a87a04b206729
deleted: sha256:e76ef2de99464431aa65b3b5371ad74c85f301ecde5fe3dfbb704dc350fc767e
deleted: sha256:2d9ba679dcf587e9b5b520177f084d6cb6a4d3e95bd550df5a0c8bdf7512d615
deleted: sha256:6ad41196a22310cd591281ca6025869e1ba639e3c3d57133c2571c88ccf8a746
deleted: sha256:d61bbd529b5beab1623ab4703aa0baf06a7a012e8d6ed78b5c809aedcb60a2e4
deleted: sha256:58dfea8eb02315c64b359fed660832f277db5692492ca5a696ae15296fd08b99
deleted: sha256:7aee877aa7d1763ef764b43cb864ee522c5c4ac34ff5f9750ee466d399420982
deleted: sha256:f814e44948c4dbfbf95b480cecbcc948f7fb0e5eb37fa8dcfc9441c669bc8a5b
deleted: sha256:214f32351cbb29bbb99944130af05fbfebf7915cef600e2ed930e96d19f23643
deleted: sha256:c876a46df158e8207a282784ff347a9ee9c0551add9e532c3131da849b989059
deleted: sha256:b666e4b2b2d98d0a1b8ffcc06b9498ec53959b3fa29212d21fe2d1a85e032a05
deleted: sha256:422d4b7c46b6ffc30c36d6e8c23e2a88e03fb7248ac8125e42c9e4225d4279bf
deleted: sha256:279e836b58d9996b5715e82a97b024563f2b175e86a53176846684f0717661c3
deleted: sha256:39865913f677c50ea236b68d81560d8fefe491661ce6e668fd331b4b680b1d47
deleted: sha256:cac81188485e011e56459f1d9fc9936625a1b62cacdb4fcd3526e5f32e280387
deleted: sha256:7789f1a3d4e9258fbe5469a8d657deb6aba168d86967063e9b80ac3e1154333f

Total reclaimed space: 1.115GB

// Restarted Docker Desktop here

$ ./minikube status
🤷  There is no local cluster named "minikube"
👉  To fix this, run: "minikube start"
$ ./minikube start
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Automatically selected the docker driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.19.2 preload ...
    > preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4: 486.36 MiB
🔥  Creating docker container (CPUs=2, Memory=3892MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default
$ ./minikube addons enable gcp-auth
🔎  Verifying gcp-auth addon...
📌  Your GCP credentials will now be mounted into every pod created in the minikube cluster.
📌  If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
🌟  The 'gcp-auth' addon is enabled
$ ./minikube addons disable gcp-auth
🌑  "The 'gcp-auth' addon is disabled
$ ./minikube stop
✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
🛑  1 nodes stopped.
$ ./minikube start --addons gcp-auth
😄  minikube v1.13.1 on Darwin 10.15.7
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🔄  Restarting existing docker container for "minikube" ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🔎  Verifying gcp-auth addon...
❗  Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" by default

Minikube seemed to get stuck again and errored out at:

🔎  Verifying gcp-auth addon...
❗  Enabling 'gcp-auth' returned an error: running callbacks: [verifying gcp-auth addon pods : timed out waiting for the condition: timed out waiting for the condition]

Some kubectl output:

$ kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS              RESTARTS   AGE
gcp-auth      gcp-auth-5ff8987f65-qh2l5          0/1     ContainerCreating   0          8m42s
kube-system   coredns-f9fd979d6-p5n27            1/1     Running             1          10m
kube-system   etcd-minikube                      1/1     Running             1          11m
kube-system   kube-apiserver-minikube            1/1     Running             1          11m
kube-system   kube-controller-manager-minikube   1/1     Running             1          11m
kube-system   kube-proxy-9frg4                   1/1     Running             1          10m
kube-system   kube-scheduler-minikube            1/1     Running             1          11m
kube-system   storage-provisioner                1/1     Running             2          11m

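The gcp-auth pod has been sitting in ContainerCreating for over eight minutes while everything else is Running. A rough way to surface whatever the kubelet is still waiting on (the `app=gcp-auth` label comes from the deployment shown just below):

```
# Show the events attached to the stuck pod (volume mounts, image pulls, ...)
kubectl -n gcp-auth describe pod -l app=gcp-auth

# Or look at the namespace events directly, newest last
kubectl -n gcp-auth get events --sort-by=.lastTimestamp
```
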
$ kubectl describe deploy gcp-auth -n gcp-auth
Name:                   gcp-auth
Namespace:              gcp-auth
CreationTimestamp:      Tue, 06 Oct 2020 11:02:14 -0700
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=gcp-auth
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=gcp-auth
           gcp-auth-skip-secret=true
           kubernetes.io/minikube-addons=gcp-auth
  Containers:
   gcp-auth:
    Image:        gcr.io/k8s-minikube/gcp-auth-webhook:v0.0.3
    Port:         8443/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /etc/webhook/certs from webhook-certs (ro)
      /var/lib/minikube/google_cloud_project from gcp-project (ro)
  Volumes:
   webhook-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  gcp-auth-certs
    Optional:    false
   gcp-project:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_cloud_project
    HostPathType:  File
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   gcp-auth-5ff8987f65 (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  9m42s  deployment-controller  Scaled up replica set gcp-auth-5ff8987f65 to 1

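The deployment mounts its serving certificate from the `gcp-auth-certs` secret (see the Volumes section above), so if that secret did not survive the restart, the pod can never finish creating and the webhook never comes up. A quick check, plus one possible (untested) workaround:

```
# Verify the webhook's serving-cert secret still exists after the restart
kubectl -n gcp-auth get secret gcp-auth-certs

# Possible workaround: toggle the addon so its certificate job runs again
./minikube addons disable gcp-auth && ./minikube addons enable gcp-auth
```
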
`./minikube logs` output:

``` ==> Docker <== -- Logs begin at Tue 2020-10-06 18:01:53 UTC, end at Tue 2020-10-06 18:09:15 UTC. -- Oct 06 18:01:53 minikube systemd[1]: Starting Docker Application Container Engine... Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.247301528Z" level=info msg="Starting up" Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.250777426Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.250813636Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.250833572Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.250857311Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.253460518Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.253682253Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.253815880Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.253912541Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.267560649Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.284357670Z" level=info msg="Loading containers: start." Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.448803670Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.513601622Z" level=info msg="Loading containers: done." Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.543550721Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.543669187Z" level=info msg="Daemon has completed initialization" Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.572389389Z" level=info msg="API listen on /var/run/docker.sock" Oct 06 18:01:53 minikube systemd[1]: Started Docker Application Container Engine. 
Oct 06 18:01:53 minikube dockerd[162]: time="2020-10-06T18:01:53.572393421Z" level=info msg="API listen on [::]:2376" Oct 06 18:02:12 minikube dockerd[162]: time="2020-10-06T18:02:12.431785132Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 4799383a11f21 bad58561c4be7 6 minutes ago Running storage-provisioner 2 ef0a9f5a6af21 6e848e5b7285b bfe3a36ebd252 7 minutes ago Running coredns 1 1012505b72ef4 7aebd6535d847 d373dd5a8593a 7 minutes ago Running kube-proxy 1 78cb183a3352b b3b1dd662e238 bad58561c4be7 7 minutes ago Exited storage-provisioner 1 ef0a9f5a6af21 a6a3b9a242bd1 8603821e1a7a5 7 minutes ago Running kube-controller-manager 1 ae4b4362dabb3 5649bdd088e27 0369cf4303ffd 7 minutes ago Running etcd 1 d7d56b7268328 280ff2f3a128f 607331163122e 7 minutes ago Running kube-apiserver 1 029134aff52f9 f774b5f85c959 2f32d66b884f8 7 minutes ago Running kube-scheduler 1 6d4c90de86718 a713371285ecc bfe3a36ebd252 9 minutes ago Exited coredns 0 1698c894e4fb9 d5cedb2219ee4 d373dd5a8593a 9 minutes ago Exited kube-proxy 0 e37422c06d5d1 bfaf6018126e7 0369cf4303ffd 9 minutes ago Exited etcd 0 7ef8e7eec4da3 e81e94e22147d 607331163122e 9 minutes ago Exited kube-apiserver 0 963ea4aafe3c9 6c6d1921ad64a 2f32d66b884f8 9 minutes ago Exited kube-scheduler 0 5e4054f691302 9b63b41e708c3 8603821e1a7a5 9 minutes ago Exited kube-controller-manager 0 102cd73b0b88c ==> coredns [6e848e5b7285] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d E1006 18:02:13.047573 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1006 18:02:13.047627 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": x509: certificate signed by unknown authority E1006 18:02:13.047879 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": x509: certificate signed by unknown authority ==> coredns [a713371285ec] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=aae778430915035086fa26a69ee74d29babebbb4 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_10_06T10_59_53_0700 minikube.k8s.io/version=v1.13.1 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Tue, 06 Oct 2020 17:59:50 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Tue, 06 Oct 2020 18:09:10 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- 
------------------ ------ ------- MemoryPressure False Tue, 06 Oct 2020 18:07:11 +0000 Tue, 06 Oct 2020 17:59:46 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 06 Oct 2020 18:07:11 +0000 Tue, 06 Oct 2020 17:59:46 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Tue, 06 Oct 2020 18:07:11 +0000 Tue, 06 Oct 2020 17:59:46 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Tue, 06 Oct 2020 18:07:11 +0000 Tue, 06 Oct 2020 18:00:04 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.49.2 Hostname: minikube Capacity: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4035072Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 4035072Ki pods: 110 System Info: Machine ID: d7e921419f074a51814b43122b8e004e System UUID: f4305598-68cd-4769-8851-9971e1c7554c Boot ID: d8fb31c1-1ce8-4619-80c3-4f110e2ad1b4 Kernel Version: 4.19.76-linuxkit OS Image: Ubuntu 20.04 LTS Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.19.2 Kube-Proxy Version: v1.19.2 Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- gcp-auth gcp-auth-5ff8987f65-qh2l5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m kube-system coredns-f9fd979d6-p5n27 100m (2%) 0 (0%) 70Mi (1%) 170Mi (4%) 9m17s kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m22s kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 9m22s kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 9m22s kube-system kube-proxy-9frg4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 9m22s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m22s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 650m (16%) 0 (0%) memory 70Mi (1%) 170Mi (4%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasNoDiskPressure 9m31s (x5 over 9m31s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 9m31s (x4 over 9m31s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 9m31s (x5 over 9m31s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasSufficientPID 9m22s kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeHasSufficientMemory 9m22s kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 9m22s kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeNotReady 9m22s kubelet Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 9m22s kubelet Updated Node Allocatable limit across pods Normal Starting 9m22s kubelet Starting kubelet. Normal Starting 9m14s kube-proxy Starting kube-proxy. Warning readOnlySysFS 9m14s kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) Normal NodeReady 9m12s kubelet Node minikube status is now: NodeReady Normal Starting 7m12s kubelet Starting kubelet. 
Normal NodeHasSufficientMemory 7m12s (x8 over 7m12s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 7m12s (x8 over 7m12s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 7m12s (x7 over 7m12s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 7m12s kubelet Updated Node Allocatable limit across pods Warning readOnlySysFS 7m3s kube-proxy CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000) Normal Starting 7m3s kube-proxy Starting kube-proxy. ==> dmesg <== [Oct 6 17:57] #3 [ +0.317103] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A [ +0.000854] virtio-pci 0000:00:01.0: PCI INT A: no GSI [ +0.001895] virtio-pci 0000:00:02.0: can't derive routing for PCI INT A [ +0.000920] virtio-pci 0000:00:02.0: PCI INT A: no GSI [ +0.003175] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A [ +0.000913] virtio-pci 0000:00:07.0: PCI INT A: no GSI [ +0.051360] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds). [ +0.620521] i8042: Can't read CTR while initializing i8042 [ +0.000871] i8042: probe of i8042 failed with error -5 [ +0.007534] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.001857] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.220348] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ +0.020151] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ +2.869917] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! [ +0.083055] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive! ==> etcd [5649bdd088e2] <== 2020-10-06 18:02:05.605562 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided] 2020-10-06 18:02:05.607840 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-06 18:02:05.608045 I | embed: listening for metrics on http://127.0.0.1:2381 2020-10-06 18:02:05.608168 I | embed: listening for peers on 192.168.49.2:2380 raft2020/10/06 18:02:05 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-10-06 18:02:05.609952 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be 2020-10-06 18:02:05.610096 N | etcdserver/membership: set the initial cluster version to 3.4 2020-10-06 18:02:05.610138 I | etcdserver/api: enabled capabilities for version 3.4 raft2020/10/06 18:02:06 INFO: aec36adc501070cc is starting a new election at term 2 raft2020/10/06 18:02:06 INFO: aec36adc501070cc became candidate at term 3 raft2020/10/06 18:02:06 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3 raft2020/10/06 18:02:06 INFO: aec36adc501070cc became leader at term 3 raft2020/10/06 18:02:06 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3 2020-10-06 18:02:06.607185 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2020-10-06 18:02:06.607618 I | embed: ready to serve client requests 2020-10-06 18:02:06.608939 I | embed: serving client requests on 127.0.0.1:2379 2020-10-06 18:02:06.622077 I | embed: ready to serve client requests 2020-10-06 18:02:06.633407 I | embed: serving client requests on 192.168.49.2:2379 2020-10-06 18:02:24.694572 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:02:33.098094 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:02:43.097695 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:02:53.099470 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:03.098999 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:13.098349 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:23.098467 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:33.099988 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:43.099220 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:03:53.099459 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:03.100016 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:13.100085 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:23.100065 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:33.101132 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:43.100737 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:04:53.101896 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:03.101426 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:13.101054 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:23.101156 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:33.104124 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:43.101931 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:05:53.101990 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:03.102952 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:13.102813 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:23.102567 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:33.103463 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:43.103122 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:06:53.104137 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:03.104448 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:13.103925 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:23.103744 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:33.104543 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:43.104560 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:07:53.104936 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:03.105096 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:13.105564 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:23.105285 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:33.106846 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:43.105858 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:08:53.107008 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:09:03.107384 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:09:13.106738 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> etcd [bfaf6018126e] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-10-06 17:59:46.400779 I | etcdmain: etcd Version: 3.4.13 2020-10-06 17:59:46.400816 I | etcdmain: Git SHA: ae9734ed2 2020-10-06 17:59:46.400819 I | etcdmain: Go Version: go1.12.17 2020-10-06 17:59:46.400821 I | etcdmain: Go OS/Arch: linux/amd64 2020-10-06 17:59:46.400823 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-10-06 17:59:46.400917 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-06 17:59:46.404765 I | embed: name = minikube 2020-10-06 17:59:46.404791 I | embed: data dir = /var/lib/minikube/etcd 2020-10-06 17:59:46.404795 I | embed: member dir = /var/lib/minikube/etcd/member 2020-10-06 17:59:46.404797 I | embed: heartbeat = 100ms 2020-10-06 17:59:46.404799 I | embed: election = 1000ms 2020-10-06 17:59:46.404801 I | embed: snapshot count = 10000 2020-10-06 17:59:46.404812 I | embed: advertise client URLs = https://192.168.49.2:2379 2020-10-06 17:59:46.412731 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be raft2020/10/06 17:59:46 INFO: aec36adc501070cc switched to configuration voters=() raft2020/10/06 17:59:46 INFO: aec36adc501070cc became follower at term 0 raft2020/10/06 17:59:46 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/10/06 17:59:46 INFO: aec36adc501070cc became follower at term 1 raft2020/10/06 17:59:46 INFO: aec36adc501070cc switched to configuration 
voters=(12593026477526642892) 2020-10-06 17:59:46.421843 W | auth: simple token is not cryptographically signed 2020-10-06 17:59:46.426742 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided] 2020-10-06 17:59:46.426974 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10) raft2020/10/06 17:59:46 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-10-06 17:59:46.428263 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be 2020-10-06 17:59:46.430741 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-10-06 17:59:46.430949 I | embed: listening for peers on 192.168.49.2:2380 2020-10-06 17:59:46.431105 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/10/06 17:59:47 INFO: aec36adc501070cc is starting a new election at term 1 raft2020/10/06 17:59:47 INFO: aec36adc501070cc became candidate at term 2 raft2020/10/06 17:59:47 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2 raft2020/10/06 17:59:47 INFO: aec36adc501070cc became leader at term 2 raft2020/10/06 17:59:47 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2 2020-10-06 17:59:47.020491 I | etcdserver: setting up the initial cluster version to 3.4 2020-10-06 17:59:47.020601 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2020-10-06 17:59:47.020627 I | embed: ready to serve client requests 2020-10-06 17:59:47.020749 I | embed: ready to serve client requests 2020-10-06 17:59:47.022055 N | etcdserver/membership: set the initial cluster version to 3.4 2020-10-06 17:59:47.031679 I | etcdserver/api: enabled capabilities for version 3.4 2020-10-06 17:59:47.033777 I | embed: serving client requests on 127.0.0.1:2379 2020-10-06 17:59:47.034945 I | embed: serving client requests on 192.168.49.2:2379 2020-10-06 17:59:56.781799 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:04.087216 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:14.087666 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:24.088201 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:34.090024 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:44.089248 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:00:54.089680 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:01:04.088787 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:01:14.088794 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:01:24.088941 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-10-06 18:01:28.917599 N | pkg/osutil: received terminated signal, shutting down... WARNING: 2020/10/06 18:01:28 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
2020-10-06 18:01:28.933820 I | etcdserver: skipped leadership transfer for single voting member cluster ==> kernel <== 18:09:20 up 12 min, 0 users, load average: 0.42, 0.38, 0.27 Linux minikube 4.19.76-linuxkit #1 SMP Tue May 26 11:42:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04 LTS" ==> kube-apiserver [280ff2f3a128] <== I1006 18:03:30.528380 1 trace.go:205] Trace[2026800583]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:19cc711f-ab7c-46f0-9d06-a57e642fe3a8 (06-Oct-2020 18:03:29.512) (total time: 1016ms): Trace[2026800583]: [1.016168073s] [1.016168073s] END W1006 18:03:30.528429 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:03:30.528380 1 trace.go:205] Trace[2032452857]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:7e925ae6-a665-486f-9e78-e4a04801998b (06-Oct-2020 18:03:29.511) (total time: 1017ms): Trace[2032452857]: [1.017088629s] [1.017088629s] END W1006 18:03:30.528590 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:03:30.528842 1 trace.go:205] Trace[1784488907]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:03:29.510) (total time: 1018ms): Trace[1784488907]: [1.018528526s] [1.018528526s] END I1006 18:03:30.528878 1 trace.go:205] Trace[781666559]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:03:29.508) (total time: 1020ms): Trace[781666559]: [1.020243008s] [1.020243008s] END I1006 18:03:32.135330 1 client.go:360] parsed scheme: "passthrough" I1006 18:03:32.135599 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:03:32.135792 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:04:09.984287 1 client.go:360] parsed scheme: "passthrough" I1006 18:04:09.984369 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:04:09.984378 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:04:44.951958 1 client.go:360] parsed scheme: "passthrough" I1006 18:04:44.952097 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:04:44.952112 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:04:51.554234 1 trace.go:205] Trace[1849480240]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:ecf56684-49a2-442d-8898-81ed82fd36d2 (06-Oct-2020 18:04:50.538) (total time: 1015ms): Trace[1849480240]: [1.015851613s] [1.015851613s] END W1006 18:04:51.554331 1 dispatcher.go:182] Failed calling webhook, failing closed 
gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:04:51.554257 1 trace.go:205] Trace[2049980168]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:e1b04ad6-b7de-42b9-ac0c-ac602eae0845 (06-Oct-2020 18:04:50.537) (total time: 1016ms): Trace[2049980168]: [1.016479173s] [1.016479173s] END W1006 18:04:51.554491 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:04:51.554966 1 trace.go:205] Trace[709559473]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:04:50.535) (total time: 1019ms): Trace[709559473]: [1.019662212s] [1.019662212s] END I1006 18:04:51.555165 1 trace.go:205] Trace[618082985]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:04:50.535) (total time: 1019ms): Trace[618082985]: [1.019846032s] [1.019846032s] END I1006 18:05:27.335840 1 client.go:360] parsed scheme: "passthrough" I1006 18:05:27.335894 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:05:27.335901 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:06:06.704038 1 client.go:360] parsed scheme: "passthrough" I1006 18:06:06.704086 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:06:06.704097 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:06:46.502353 1 client.go:360] parsed scheme: "passthrough" I1006 18:06:46.502402 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:06:46.502409 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:07:25.425871 1 client.go:360] parsed scheme: "passthrough" I1006 18:07:25.425908 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:07:25.425915 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:07:32.581683 1 trace.go:205] Trace[942243159]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:9b1979a6-9e9c-4a86-ae91-56d5b097454e (06-Oct-2020 18:07:31.567) (total time: 1014ms): Trace[942243159]: [1.014116662s] [1.014116662s] END W1006 18:07:32.581767 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:07:32.581683 1 trace.go:205] Trace[2124337833]: "Call mutating webhook" configuration:gcp-auth-webhook-cfg,webhook:gcp-auth-mutate.k8s.io,resource:/v1, Resource=pods,subresource:,operation:CREATE,UID:739b699a-18d0-4a05-9a96-794db04e2c38 (06-Oct-2020 18:07:31.567) (total time: 1014ms): Trace[2124337833]: [1.014442321s] 
[1.014442321s] END W1006 18:07:32.581925 1 dispatcher.go:182] Failed calling webhook, failing closed gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:07:32.581979 1 trace.go:205] Trace[554381669]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:07:31.564) (total time: 1017ms): Trace[554381669]: [1.017008403s] [1.017008403s] END I1006 18:07:32.582070 1 trace.go:205] Trace[2104602391]: "Create" url:/api/v1/namespaces/gcp-auth/pods,user-agent:kube-controller-manager/v1.19.2 (linux/amd64) kubernetes/f574309/system:serviceaccount:kube-system:job-controller,client:192.168.49.2 (06-Oct-2020 18:07:31.563) (total time: 1018ms): Trace[2104602391]: [1.01855568s] [1.01855568s] END I1006 18:07:58.129522 1 client.go:360] parsed scheme: "passthrough" I1006 18:07:58.129723 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:07:58.129826 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:08:36.263645 1 client.go:360] parsed scheme: "passthrough" I1006 18:08:36.263707 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:08:36.263717 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1006 18:09:18.843626 1 client.go:360] parsed scheme: "passthrough" I1006 18:09:18.843679 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1006 18:09:18.843687 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-apiserver [e81e94e22147] <== W1006 18:01:37.478744 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.502331 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.517386 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.538065 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.557926 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.613545 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.653703 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.692096 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.694546 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.707794 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.740096 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.775103 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.777181 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.813712 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.820818 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.822188 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.846888 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.850906 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.907233 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:37.909539 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W1006 18:01:37.941900 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.006784 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.036956 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.043298 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.051620 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.066719 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.105841 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.138130 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.169410 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.197876 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.244645 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.249243 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.275928 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.294711 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.307293 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.307387 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.316849 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.334733 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.352017 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.414208 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.415926 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.422584 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.423800 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.459187 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.479837 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.485283 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.506993 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W1006 18:01:38.541449 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.576605 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.583142 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.584566 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.629976 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.642186 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.723554 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.828409 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.859247 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.910632 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.934990 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.944820 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1006 18:01:38.967359 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
==> kube-controller-manager [9b63b41e708c] <== I1006 17:59:59.690704 1 endpointslice_controller.go:237] Starting endpoint slice controller I1006 17:59:59.690713 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice I1006 17:59:59.695153 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1006 17:59:59.738520 1 shared_informer.go:247] Caches are synced for PV protection I1006 17:59:59.739112 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1006 17:59:59.739449 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1006 17:59:59.739344 1 shared_informer.go:247] Caches are synced for service account I1006 17:59:59.740052 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring W1006 17:59:59.744201 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist E1006 17:59:59.753210 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again E1006 17:59:59.754595 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again E1006 17:59:59.764948 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again I1006 17:59:59.788971 1 shared_informer.go:247] Caches are synced for expand I1006 17:59:59.793752 1 shared_informer.go:247] Caches are synced for namespace I1006 17:59:59.833304 1 shared_informer.go:247] Caches are synced for disruption I1006 17:59:59.833511 1 disruption.go:339] Sending events to api server. I1006 17:59:59.838635 1 shared_informer.go:247] Caches are synced for TTL I1006 17:59:59.838662 1 shared_informer.go:247] Caches are synced for GC I1006 17:59:59.838717 1 shared_informer.go:247] Caches are synced for PVC protection I1006 17:59:59.839273 1 shared_informer.go:247] Caches are synced for daemon sets I1006 17:59:59.839678 1 shared_informer.go:247] Caches are synced for deployment I1006 17:59:59.840555 1 shared_informer.go:247] Caches are synced for taint I1006 17:59:59.840684 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1006 17:59:59.840780 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I1006 17:59:59.840818 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode. 
I1006 17:59:59.840987 1 taint_manager.go:187] Starting NoExecuteTaintManager I1006 17:59:59.843592 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1006 17:59:59.848438 1 shared_informer.go:247] Caches are synced for attach detach I1006 17:59:59.853321 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-f9fd979d6 to 1" I1006 17:59:59.868317 1 shared_informer.go:247] Caches are synced for persistent volume I1006 17:59:59.873597 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9frg4" I1006 17:59:59.888467 1 shared_informer.go:247] Caches are synced for job I1006 17:59:59.888576 1 shared_informer.go:247] Caches are synced for ReplicationController I1006 17:59:59.889180 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1006 17:59:59.889305 1 shared_informer.go:247] Caches are synced for ReplicaSet I1006 17:59:59.889585 1 shared_informer.go:247] Caches are synced for HPA I1006 17:59:59.890316 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1006 17:59:59.890642 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1006 17:59:59.890660 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1006 17:59:59.890914 1 shared_informer.go:247] Caches are synced for stateful set I1006 17:59:59.891167 1 shared_informer.go:247] Caches are synced for endpoint_slice I1006 17:59:59.892968 1 shared_informer.go:247] Caches are synced for endpoint I1006 17:59:59.893870 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1006 17:59:59.902471 1 event.go:291] "Event occurred" object="kube-system/coredns-f9fd979d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-f9fd979d6-p5n27" E1006 17:59:59.913080 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"f00019e9-bde6-43b0-aabe-520928f043da", ResourceVersion:"221", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63737603993, loc:(*time.Location)(0x6a59c80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00169ba80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00169baa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00169bac0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0010f69c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00169bae0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00169bb00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.19.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00169bb40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0016ccf00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000dcfe98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000435b20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00030bab0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000dcfee8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again I1006 17:59:59.944433 1 shared_informer.go:247] Caches are synced for resource quota I1006 17:59:59.995249 1 shared_informer.go:247] Caches are synced for resource quota I1006 18:00:00.046285 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1006 18:00:00.313828 1 shared_informer.go:247] Caches are synced for garbage collector I1006 18:00:00.313931 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I1006 18:00:00.346575 1 shared_informer.go:247] Caches are synced for garbage collector I1006 18:00:04.841528 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode. I1006 18:00:45.181760 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-create-qwb6z" I1006 18:00:45.206035 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5ff8987f65 to 1" I1006 18:00:45.221184 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5ff8987f65" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5ff8987f65-qkw77" I1006 18:00:45.281158 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-certs-patch-sztft" I1006 18:00:50.965591 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" I1006 18:00:52.981703 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed" E1006 18:01:12.094637 1 tokens_controller.go:261] error synchronizing serviceaccount gcp-auth/default: secrets "default-token-dzclg" is forbidden: unable to create new content in namespace gcp-auth because it is being terminated I1006 18:01:22.376633 1 namespace_controller.go:185] Namespace has been deleted gcp-auth ==> kube-controller-manager [a6a3b9a242bd] <== I1006 18:02:16.336446 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1006 18:02:16.336498 1 shared_informer.go:247] Caches are synced for daemon sets I1006 18:02:16.336544 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1006 18:02:16.336664 1 shared_informer.go:247] Caches are synced for ReplicaSet I1006 18:02:16.336817 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1006 18:02:16.336958 1 shared_informer.go:247] Caches 
are synced for deployment I1006 18:02:16.337564 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1006 18:02:16.338080 1 shared_informer.go:247] Caches are synced for endpoint I1006 18:02:16.338484 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1006 18:02:16.344436 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set gcp-auth-5ff8987f65 to 1" I1006 18:02:16.352940 1 shared_informer.go:247] Caches are synced for PVC protection I1006 18:02:16.356982 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-5ff8987f65" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: gcp-auth-5ff8987f65-qh2l5" I1006 18:02:16.386898 1 shared_informer.go:247] Caches are synced for job I1006 18:02:16.488872 1 shared_informer.go:247] Caches are synced for persistent volume I1006 18:02:16.491712 1 shared_informer.go:247] Caches are synced for resource quota I1006 18:02:16.527547 1 shared_informer.go:247] Caches are synced for expand I1006 18:02:16.536843 1 shared_informer.go:247] Caches are synced for PV protection I1006 18:02:16.537332 1 shared_informer.go:247] Caches are synced for HPA I1006 18:02:16.539563 1 shared_informer.go:247] Caches are synced for attach detach I1006 18:02:16.540623 1 shared_informer.go:247] Caches are synced for resource quota I1006 18:02:16.593588 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1006 18:02:16.886294 1 shared_informer.go:247] Caches are synced for garbage collector I1006 18:02:16.886314 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I1006 18:02:16.893806 1 shared_informer.go:247] Caches are synced for garbage collector E1006 18:02:17.440581 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:17.440629 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:02:17.440890 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:02:17.441408 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:02:17.441700 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:02:17.442868 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:28.448943 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:28.448977 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:28.449015 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:02:28.449203 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" I1006 18:02:28.449249 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:02:28.451123 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post 
"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:49.504559 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:02:49.504615 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:02:49.504924 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:02:49.505077 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:02:49.505097 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:02:49.506935 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:03:30.529562 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:03:30.529592 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" I1006 18:03:30.529619 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:03:30.529569 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:03:30.529661 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:03:30.530731 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post 
"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:04:51.556232 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:04:51.556287 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:04:51.556247 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:04:51.556369 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" I1006 18:04:51.556393 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:04:51.558834 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:07:32.583002 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:07:32.583523 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused E1006 18:07:32.583003 1 job_controller.go:800] Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused I1006 18:07:32.583439 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" I1006 18:07:32.583951 1 event.go:291] "Event occurred" object="gcp-auth/gcp-auth-certs-create" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Internal error occurred: failed calling webhook \"gcp-auth-mutate.k8s.io\": Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.131.230:443: connect: connection refused" E1006 18:07:32.585873 1 job_controller.go:402] Error syncing job: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post 
"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused ==> kube-proxy [7aebd6535d84] <== I1006 18:02:13.040936 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1006 18:02:13.041006 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1006 18:02:13.061229 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1006 18:02:13.061371 1 server_others.go:186] Using iptables Proxier. W1006 18:02:13.061384 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1006 18:02:13.061402 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1006 18:02:13.061832 1 server.go:650] Version: v1.19.2 I1006 18:02:13.062348 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1006 18:02:13.062677 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1006 18:02:13.062807 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1006 18:02:13.062854 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1006 18:02:13.063219 1 config.go:315] Starting service config controller I1006 18:02:13.063246 1 shared_informer.go:240] Waiting for caches to sync for service config I1006 18:02:13.063292 1 config.go:224] Starting endpoint slice config controller I1006 18:02:13.063298 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1006 18:02:13.163428 1 shared_informer.go:247] Caches are synced for service config I1006 18:02:13.163429 1 shared_informer.go:247] Caches are synced for endpoint slice config ==> kube-proxy [d5cedb2219ee] <== I1006 18:00:02.169956 1 node.go:136] Successfully retrieved node IP: 192.168.49.2 I1006 18:00:02.170075 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation W1006 18:00:02.200229 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy I1006 18:00:02.200466 1 server_others.go:186] Using iptables Proxier. 
W1006 18:00:02.200486 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined I1006 18:00:02.200516 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local I1006 18:00:02.201620 1 server.go:650] Version: v1.19.2 I1006 18:00:02.202113 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 I1006 18:00:02.202152 1 conntrack.go:52] Setting nf_conntrack_max to 131072 E1006 18:00:02.202596 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime]) I1006 18:00:02.202706 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 I1006 18:00:02.202889 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 I1006 18:00:02.204118 1 config.go:315] Starting service config controller I1006 18:00:02.204143 1 shared_informer.go:240] Waiting for caches to sync for service config I1006 18:00:02.205388 1 config.go:224] Starting endpoint slice config controller I1006 18:00:02.205413 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config I1006 18:00:02.304773 1 shared_informer.go:247] Caches are synced for service config I1006 18:00:02.305952 1 shared_informer.go:247] Caches are synced for endpoint slice config ==> kube-scheduler [6c6d1921ad64] <== I1006 17:59:46.199203 1 registry.go:173] Registering SelectorSpread plugin I1006 17:59:46.199242 1 registry.go:173] Registering SelectorSpread plugin I1006 17:59:46.963858 1 serving.go:331] Generated self-signed cert in-memory W1006 17:59:50.339813 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1006 17:59:50.339866 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1006 17:59:50.339883 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1006 17:59:50.339890 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1006 17:59:50.351930 1 registry.go:173] Registering SelectorSpread plugin I1006 17:59:50.351964 1 registry.go:173] Registering SelectorSpread plugin I1006 17:59:50.379784 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1006 17:59:50.379879 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1006 17:59:50.380147 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1006 17:59:50.380205 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1006 17:59:50.383163 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1006 17:59:50.383298 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1006 17:59:50.383387 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1006 17:59:50.383477 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1006 17:59:50.383568 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1006 17:59:50.383646 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1006 17:59:50.383746 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1006 17:59:50.383803 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1006 17:59:50.383887 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1006 17:59:50.383954 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1006 17:59:50.384676 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1006 17:59:50.383177 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1006 17:59:50.385115 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1006 17:59:51.234186 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1006 17:59:51.307121 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1006 17:59:51.351401 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1006 17:59:51.405035 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1006 17:59:51.488468 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1006 17:59:51.490156 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1006 17:59:51.502783 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope I1006 17:59:51.880156 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kube-scheduler [f774b5f85c95] <== I1006 18:02:05.415894 1 registry.go:173] Registering SelectorSpread plugin I1006 18:02:05.415968 1 registry.go:173] Registering SelectorSpread plugin I1006 18:02:06.038153 1 serving.go:331] Generated self-signed cert in-memory W1006 
18:02:09.732111 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1006 18:02:09.732149 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1006 18:02:09.732164 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. W1006 18:02:09.732168 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1006 18:02:09.808873 1 registry.go:173] Registering SelectorSpread plugin I1006 18:02:09.808904 1 registry.go:173] Registering SelectorSpread plugin I1006 18:02:09.810713 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1006 18:02:09.816705 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1006 18:02:09.816738 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1006 18:02:09.816770 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1006 18:02:09.916941 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Tue 2020-10-06 18:01:53 UTC, end at Tue 2020-10-06 18:09:26 UTC. -- Oct 06 18:02:10 minikube kubelet[743]: E1006 18:02:10.896688 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/4599f629-9008-448b-8311-df7ed13e7902-coredns-token-d977p podName:4599f629-9008-448b-8311-df7ed13e7902 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:11.39667352 +0000 UTC m=+12.870407525 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-d977p\" (UniqueName: \"kubernetes.io/secret/4599f629-9008-448b-8311-df7ed13e7902-coredns-token-d977p\") pod \"coredns-f9fd979d6-p5n27\" (UID: \"4599f629-9008-448b-8311-df7ed13e7902\") : failed to sync secret cache: timed out waiting for the condition" Oct 06 18:02:10 minikube kubelet[743]: E1006 18:02:10.896706 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/15e95217-2cae-473f-9cb3-9708e51d3c9f-kube-proxy-token-xcx6w podName:15e95217-2cae-473f-9cb3-9708e51d3c9f nodeName:}" failed. No retries permitted until 2020-10-06 18:02:11.396694908 +0000 UTC m=+12.870428908 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-xcx6w\" (UniqueName: \"kubernetes.io/secret/15e95217-2cae-473f-9cb3-9708e51d3c9f-kube-proxy-token-xcx6w\") pod \"kube-proxy-9frg4\" (UID: \"15e95217-2cae-473f-9cb3-9708e51d3c9f\") : failed to sync secret cache: timed out waiting for the condition" Oct 06 18:02:10 minikube kubelet[743]: E1006 18:02:10.896731 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/4599f629-9008-448b-8311-df7ed13e7902-config-volume podName:4599f629-9008-448b-8311-df7ed13e7902 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:11.39671921 +0000 UTC m=+12.870453218 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4599f629-9008-448b-8311-df7ed13e7902-config-volume\") pod \"coredns-f9fd979d6-p5n27\" (UID: \"4599f629-9008-448b-8311-df7ed13e7902\") : failed to sync configmap cache: timed out waiting for the condition" Oct 06 18:02:10 minikube kubelet[743]: E1006 18:02:10.896747 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/8fcd108f-d0c6-4def-91dd-6616e0d8fe98-storage-provisioner-token-7f9wj podName:8fcd108f-d0c6-4def-91dd-6616e0d8fe98 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:11.39673774 +0000 UTC m=+12.870471740 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-7f9wj\" (UniqueName: \"kubernetes.io/secret/8fcd108f-d0c6-4def-91dd-6616e0d8fe98-storage-provisioner-token-7f9wj\") pod \"storage-provisioner\" (UID: \"8fcd108f-d0c6-4def-91dd-6616e0d8fe98\") : failed to sync secret cache: timed out waiting for the condition" Oct 06 18:02:12 minikube kubelet[743]: W1006 18:02:12.745368 743 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-p5n27 through plugin: invalid network status for Oct 06 18:02:12 minikube kubelet[743]: W1006 18:02:12.781300 743 pod_container_deletor.go:79] Container "1012505b72ef47c7cea8ddf622ce001edb3df63f2f0ec4d8bf641a8ad8358368" not found in pod's containers Oct 06 18:02:12 minikube kubelet[743]: I1006 18:02:12.792681 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3d16bfbdca1019512296aa7988572371a207a7e4df059e0a515809ac3ff6014d Oct 06 18:02:12 minikube kubelet[743]: I1006 18:02:12.793033 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b3b1dd662e2383455aa5ebcd96932b308c7926b1bc003f4c57e0b6e0d7af6794 Oct 06 18:02:12 minikube kubelet[743]: E1006 18:02:12.793278 743 pod_workers.go:191] Error syncing pod 8fcd108f-d0c6-4def-91dd-6616e0d8fe98 ("storage-provisioner_kube-system(8fcd108f-d0c6-4def-91dd-6616e0d8fe98)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8fcd108f-d0c6-4def-91dd-6616e0d8fe98)" Oct 06 18:02:12 minikube kubelet[743]: W1006 18:02:12.801216 743 pod_container_deletor.go:79] Container "78cb183a3352b68514831ea09f44a1d439c586306c515543b5d5a8756957580f" not found in pod's containers Oct 06 18:02:13 minikube kubelet[743]: W1006 18:02:13.814861 743 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-f9fd979d6-p5n27 through plugin: invalid network status for Oct 06 18:02:13 minikube kubelet[743]: I1006 18:02:13.858988 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b3b1dd662e2383455aa5ebcd96932b308c7926b1bc003f4c57e0b6e0d7af6794 Oct 06 18:02:13 minikube kubelet[743]: E1006 18:02:13.859462 743 pod_workers.go:191] Error syncing pod 8fcd108f-d0c6-4def-91dd-6616e0d8fe98 ("storage-provisioner_kube-system(8fcd108f-d0c6-4def-91dd-6616e0d8fe98)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8fcd108f-d0c6-4def-91dd-6616e0d8fe98)" Oct 06 18:02:14 minikube kubelet[743]: E1006 18:02:14.270041 743 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": 
failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 06 18:02:14 minikube kubelet[743]: E1006 18:02:14.270096 743 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 06 18:02:16 minikube kubelet[743]: I1006 18:02:16.366359 743 topology_manager.go:233] [topologymanager] Topology Admit Handler Oct 06 18:02:16 minikube kubelet[743]: I1006 18:02:16.423477 743 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "gcp-project" (UniqueName: "kubernetes.io/host-path/82bce830-6b44-40b0-8a42-7beb392c6b84-gcp-project") pod "gcp-auth-5ff8987f65-qh2l5" (UID: "82bce830-6b44-40b0-8a42-7beb392c6b84") Oct 06 18:02:16 minikube kubelet[743]: I1006 18:02:16.423595 743 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs") pod "gcp-auth-5ff8987f65-qh2l5" (UID: "82bce830-6b44-40b0-8a42-7beb392c6b84") Oct 06 18:02:16 minikube kubelet[743]: I1006 18:02:16.423623 743 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-hcxk7" (UniqueName: "kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-default-token-hcxk7") pod "gcp-auth-5ff8987f65-qh2l5" (UID: "82bce830-6b44-40b0-8a42-7beb392c6b84") Oct 06 18:02:16 minikube kubelet[743]: E1006 18:02:16.524214 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:16 minikube kubelet[743]: E1006 18:02:16.524352 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:17.024325715 +0000 UTC m=+18.498059734 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:17 minikube kubelet[743]: E1006 18:02:17.026982 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:17 minikube kubelet[743]: E1006 18:02:17.027213 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:18.027189919 +0000 UTC m=+19.500923924 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:18 minikube kubelet[743]: E1006 18:02:18.032268 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:18 minikube kubelet[743]: E1006 18:02:18.032359 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. 
No retries permitted until 2020-10-06 18:02:20.032341899 +0000 UTC m=+21.506075900 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:20 minikube kubelet[743]: E1006 18:02:20.041023 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:20 minikube kubelet[743]: E1006 18:02:20.041270 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:24.041242067 +0000 UTC m=+25.514976081 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:24 minikube kubelet[743]: E1006 18:02:24.061526 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:24 minikube kubelet[743]: E1006 18:02:24.061646 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:32.061620223 +0000 UTC m=+33.535354226 (durationBeforeRetry 8s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:24 minikube kubelet[743]: E1006 18:02:24.280141 743 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 06 18:02:24 minikube kubelet[743]: E1006 18:02:24.280223 743 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 06 18:02:25 minikube kubelet[743]: I1006 18:02:25.145552 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b3b1dd662e2383455aa5ebcd96932b308c7926b1bc003f4c57e0b6e0d7af6794 Oct 06 18:02:32 minikube kubelet[743]: E1006 18:02:32.097038 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:32 minikube kubelet[743]: E1006 18:02:32.097194 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:02:48.09716927 +0000 UTC m=+49.570301353 (durationBeforeRetry 16s). 
Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:34 minikube kubelet[743]: E1006 18:02:34.295141 743 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 06 18:02:34 minikube kubelet[743]: E1006 18:02:34.295182 743 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 06 18:02:44 minikube kubelet[743]: E1006 18:02:44.311939 743 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 06 18:02:44 minikube kubelet[743]: E1006 18:02:44.312539 743 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 06 18:02:48 minikube kubelet[743]: E1006 18:02:48.167127 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:02:48 minikube kubelet[743]: E1006 18:02:48.167820 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:03:20.167797176 +0000 UTC m=+81.640929255 (durationBeforeRetry 32s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:02:54 minikube kubelet[743]: E1006 18:02:54.323980 743 summary_sys_containers.go:47] Failed to get system container stats for "/kubepods": failed to get cgroup stats for "/kubepods": failed to get container info for "/kubepods": unknown container "/kubepods" Oct 06 18:02:54 minikube kubelet[743]: E1006 18:02:54.324029 743 helpers.go:713] eviction manager: failed to construct signal: "allocatableMemory.available" error: system container "pods" not found in metrics Oct 06 18:03:04 minikube kubelet[743]: I1006 18:03:04.117029 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 802bb1477852957d81f4237f73ea277e58449acede610fbb1211581ba02c5fdc Oct 06 18:03:04 minikube kubelet[743]: I1006 18:03:04.127720 743 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 01aee89bccce91df0420cd400eba3e884cac45a2b1ba6946c227688f2ff849a0 Oct 06 18:03:20 minikube kubelet[743]: E1006 18:03:20.200669 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:03:20 minikube kubelet[743]: E1006 18:03:20.200845 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:04:24.200801729 +0000 UTC m=+145.673300554 (durationBeforeRetry 1m4s). 
Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:04:19 minikube kubelet[743]: E1006 18:04:19.385552 743 kubelet.go:1594] Unable to attach or mount volumes for pod "gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)": unmounted volumes=[webhook-certs], unattached volumes=[default-token-hcxk7 webhook-certs gcp-project]: timed out waiting for the condition; skipping pod Oct 06 18:04:19 minikube kubelet[743]: E1006 18:04:19.385620 743 pod_workers.go:191] Error syncing pod 82bce830-6b44-40b0-8a42-7beb392c6b84 ("gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)"), skipping: unmounted volumes=[webhook-certs], unattached volumes=[default-token-hcxk7 webhook-certs gcp-project]: timed out waiting for the condition Oct 06 18:04:24 minikube kubelet[743]: E1006 18:04:24.273447 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:04:24 minikube kubelet[743]: E1006 18:04:24.273610 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:06:26.273583262 +0000 UTC m=+267.744679101 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:06:26 minikube kubelet[743]: E1006 18:06:26.281740 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:06:26 minikube kubelet[743]: E1006 18:06:26.281867 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:08:28.281845479 +0000 UTC m=+389.749685839 (durationBeforeRetry 2m2s). 
Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:06:35 minikube kubelet[743]: E1006 18:06:35.151792 743 kubelet.go:1594] Unable to attach or mount volumes for pod "gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)": unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition; skipping pod Oct 06 18:06:35 minikube kubelet[743]: E1006 18:06:35.151834 743 pod_workers.go:191] Error syncing pod 82bce830-6b44-40b0-8a42-7beb392c6b84 ("gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)"), skipping: unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition Oct 06 18:07:04 minikube kubelet[743]: W1006 18:07:04.206412 743 sysinfo.go:203] Nodes topology is not available, providing CPU topology Oct 06 18:07:04 minikube kubelet[743]: W1006 18:07:04.206627 743 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory Oct 06 18:08:28 minikube kubelet[743]: E1006 18:08:28.300739 743 secret.go:195] Couldn't get secret gcp-auth/gcp-auth-certs: secret "gcp-auth-certs" not found Oct 06 18:08:28 minikube kubelet[743]: E1006 18:08:28.300901 743 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs podName:82bce830-6b44-40b0-8a42-7beb392c6b84 nodeName:}" failed. No retries permitted until 2020-10-06 18:10:30.300875272 +0000 UTC m=+511.766048518 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/82bce830-6b44-40b0-8a42-7beb392c6b84-webhook-certs\") pod \"gcp-auth-5ff8987f65-qh2l5\" (UID: \"82bce830-6b44-40b0-8a42-7beb392c6b84\") : secret \"gcp-auth-certs\" not found" Oct 06 18:08:49 minikube kubelet[743]: E1006 18:08:49.154177 743 kubelet.go:1594] Unable to attach or mount volumes for pod "gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)": unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition; skipping pod Oct 06 18:08:49 minikube kubelet[743]: E1006 18:08:49.154765 743 pod_workers.go:191] Error syncing pod 82bce830-6b44-40b0-8a42-7beb392c6b84 ("gcp-auth-5ff8987f65-qh2l5_gcp-auth(82bce830-6b44-40b0-8a42-7beb392c6b84)"), skipping: unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition ==> storage-provisioner [4799383a11f2] <== I1006 18:02:25.301993 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I1006 18:02:42.702725 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath I1006 18:02:42.716581 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e085e49-48ac-4cc0-8c30-a1381dbdee0e", APIVersion:"v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_0ece73c1-55a8-4290-844d-35d85e53ff2e became leader I1006 18:02:42.717182 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_0ece73c1-55a8-4290-844d-35d85e53ff2e! I1006 18:02:42.818273 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_0ece73c1-55a8-4290-844d-35d85e53ff2e! ==> storage-provisioner [b3b1dd662e23] <== F1006 18:02:12.334485 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": x509: certificate signed by unknown authority ```
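In short, after the restart the controller manager can no longer reach the gcp-auth mutating webhook (every call to https://gcp-auth.gcp-auth.svc:443/mutate is refused), and the kubelet cannot mount the webhook-certs volume because the gcp-auth-certs secret no longer exists, so the new gcp-auth pod never becomes ready and the certs-create/certs-patch jobs cannot be admitted. A quick way to confirm both symptoms (a minimal sketch, assuming kubectl is pointed at this minikube cluster):

```
# Was the webhook cert secret recreated after the restart? (the kubelet logs above say it was not)
$ kubectl -n gcp-auth get secret gcp-auth-certs

# Are the cert jobs and the gcp-auth pod stuck?
$ kubectl -n gcp-auth get jobs,pods

# Which mutating webhooks are still registered?
$ kubectl get mutatingwebhookconfigurations
```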
matthewmichihara commented 3 years ago
```
$ kubectl get event -A | grep gcp-auth
gcp-auth      21m         Normal    Scheduled                 pod/gcp-auth-5ff8987f65-qh2l5          Successfully assigned gcp-auth/gcp-auth-5ff8987f65-qh2l5 to minikube
gcp-auth      39s         Warning   FailedMount               pod/gcp-auth-5ff8987f65-qh2l5          MountVolume.SetUp failed for volume "webhook-certs" : secret "gcp-auth-certs" not found
gcp-auth      19m         Warning   FailedMount               pod/gcp-auth-5ff8987f65-qh2l5          Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[default-token-hcxk7 webhook-certs gcp-project]: timed out waiting for the condition
gcp-auth      62s         Warning   FailedMount               pod/gcp-auth-5ff8987f65-qh2l5          Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[webhook-certs gcp-project default-token-hcxk7]: timed out waiting for the condition
gcp-auth      3m16s       Warning   FailedMount               pod/gcp-auth-5ff8987f65-qh2l5          Unable to attach or mount volumes: unmounted volumes=[webhook-certs], unattached volumes=[gcp-project default-token-hcxk7 webhook-certs]: timed out waiting for the condition
gcp-auth      21m         Normal    SuccessfulCreate          replicaset/gcp-auth-5ff8987f65         Created pod: gcp-auth-5ff8987f65-qh2l5
gcp-auth      4m27s       Warning   FailedCreate              job/gcp-auth-certs-create              Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused
gcp-auth      4m27s       Warning   FailedCreate              job/gcp-auth-certs-patch               Error creating: Internal error occurred: failed calling webhook "gcp-auth-mutate.k8s.io": Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.131.230:443: connect: connection refused
gcp-auth      21m         Normal    ScalingReplicaSet         deployment/gcp-auth                    Scaled up replica set gcp-auth-5ff8987f65 to 1
```
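Until a proper fix lands, one possible workaround (just a sketch based on how the addon sets up its certs, not something I can confirm is the intended fix) is to disable and re-enable the addon so the certs-create/certs-patch jobs run again and recreate the gcp-auth-certs secret:

```
$ minikube addons disable gcp-auth
$ minikube addons enable gcp-auth

# then confirm the secret and the gcp-auth pod come back
$ kubectl -n gcp-auth get secret gcp-auth-certs
$ kubectl -n gcp-auth get pods
```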
sharifelgamal commented 3 years ago

I'm hoping #9406 will fix this issue.

matthewmichihara commented 3 years ago

> I'm hoping #9406 will fix this issue.

@sharifelgamal I ran a bunch of different combinations of stopping and starting minikube with gcp-auth enabled and haven't seen the issue anymore. I think it fixes it for me!