kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Enabling 'ingress' returned an error #9858

Closed: jasperf closed this issue 3 years ago

jasperf commented 3 years ago

Steps to reproduce the issue:

On starting the latest Minikube I get the output below, which includes an error when enabling the ingress addon. I would like to know whether this can be remedied. I doubt it is related to the mount issues I have been having, but for ingress to work properly it would be good to resolve this.
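
For context, a minimal sequence that reproduces the situation (a sketch, assuming the ingress addon had already been enabled on the existing hyperkit profile, which is what the restart output below suggests):

minikube addons enable ingress        # assumed earlier step on this profile
minikube stop
minikube start --driver=hyperkit      # addon manifests are re-applied on start
minikube addons list                  # check whether ingress is reported as enabled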

Full output of minikube start command used, if not already included:

πŸ˜„  minikube v1.15.1 on Darwin 10.15.7
✨  Using the hyperkit driver based on existing profile
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ”„  Restarting existing hyperkit VM for "minikube" ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.8 ...
πŸ”Ž  Verifying Kubernetes components...
πŸ”Ž  Verifying ingress addon...
❗  Enabling 'ingress' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.4/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1
stdout:
configmap/nginx-load-balancer-conf unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/system::ingress-nginx unchanged
role.rbac.authorization.k8s.io/system::ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/system::ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/system::ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/system::ingress-nginx-admission unchanged
deployment.apps/ingress-nginx-controller unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
service/ingress-nginx-controller-admission unchanged

stderr:
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc\",\"--namespace=kube-system\",\"--secret-name=ingress-nginx-admission\"],\"image\":\"jettech/kube-webhook-certgen:v1.2.2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"jettech/kube-webhook-certgen:v1.2.2","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-create", Namespace: "kube-system"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"64158f1f-9672-451b-bd19-e1fc4f5b85ca", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"jettech/kube-webhook-certgen:v1.2.2", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc", "--namespace=kube-system", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc007a54520), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc0030cb900), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=kube-system\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"image\":\"jettech/kube-webhook-certgen:v1.3.0\",\"imagePullPolicy\":null,\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"jettech/kube-webhook-certgen:v1.3.0","imagePullPolicy":null,"name":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-patch", Namespace: "kube-system"
for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"ff92fd68-0147-494e-9387-7303d956fe77", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"jettech/kube-webhook-certgen:v1.3.0", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=kube-system", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc0075b1fe0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc0030f2680), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable
]
🌟  Enabled addons: storage-provisioner, default-storageclass, dashboard
πŸ„  Done! kubectl is now configured to use "minikube" cluster and "" namespace by default

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Sat 2020-12-05 03:12:29 UTC, end at Sat 2020-12-05 03:33:31 UTC. -- Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180244416Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180256948Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180317737Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180363995Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180379951Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180392167Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180526740Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock" Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180604691Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock" Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.180622174Z" level=info msg="containerd successfully booted in 0.009204s" Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.196676333Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.196732991Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.196762203Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.196782525Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.198404212Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.198448922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.198471246Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Dec 05 03:12:58 minikube dockerd[1876]: time="2020-12-05T03:12:58.198484130Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.642352127Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.643187090Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.643270655Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.643412694Z" level=warning msg="Your kernel does not support cgroup blkio 
throttle.write_bps_device" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.643551298Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.643701636Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.644294906Z" level=info msg="Loading containers: start." Dec 05 03:12:59 minikube dockerd[1876]: time="2020-12-05T03:12:59.932716908Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 05 03:13:00 minikube dockerd[1876]: time="2020-12-05T03:13:00.458663099Z" level=info msg="Loading containers: done." Dec 05 03:13:00 minikube dockerd[1876]: time="2020-12-05T03:13:00.491635149Z" level=info msg="Docker daemon" commit=afacb8b7f0 graphdriver(s)=overlay2 version=19.03.8 Dec 05 03:13:00 minikube dockerd[1876]: time="2020-12-05T03:13:00.492902423Z" level=info msg="Daemon has completed initialization" Dec 05 03:13:00 minikube dockerd[1876]: time="2020-12-05T03:13:00.521631647Z" level=info msg="API listen on [::]:2376" Dec 05 03:13:00 minikube systemd[1]: Started Docker Application Container Engine. Dec 05 03:13:00 minikube dockerd[1876]: time="2020-12-05T03:13:00.522962441Z" level=info msg="API listen on /var/run/docker.sock" Dec 05 03:13:14 minikube dockerd[1876]: time="2020-12-05T03:13:14.208035459Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/38655bb53e06a2926984d58383ae7ba53e607e15b5832688bae2a06f0291e0d4/shim.sock" debug=false pid=2873 Dec 05 03:13:14 minikube dockerd[1876]: time="2020-12-05T03:13:14.568402764Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/02e23eb17eed0c4fae21c823d96ce70728de0b11f5221fd3ac609986ed988de2/shim.sock" debug=false pid=2983 Dec 05 03:13:14 minikube dockerd[1876]: time="2020-12-05T03:13:14.617617685Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1d4e03dbc28a60776518b1a5cfb259c55d11a778bbdb3016debb27ab3d9f8aad/shim.sock" debug=false pid=2984 Dec 05 03:13:14 minikube dockerd[1876]: time="2020-12-05T03:13:14.738488508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5ccfad4f188bfd79d0b055ba90da62f72deb6ff546740ad59c135d13dee8875d/shim.sock" debug=false pid=3033 Dec 05 03:13:15 minikube dockerd[1876]: time="2020-12-05T03:13:15.221694962Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bf546b76fd6c3728bb1e57dddac6602d38e8b17a2528620d4b5e92ac62270085/shim.sock" debug=false pid=3173 Dec 05 03:13:16 minikube dockerd[1876]: time="2020-12-05T03:13:16.487487541Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5499ec0e52cf7c911f95a963b6c016a76160eb4b7139724dbed1529430dccec2/shim.sock" debug=false pid=3391 Dec 05 03:13:17 minikube dockerd[1876]: time="2020-12-05T03:13:17.184713380Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/af23062ba6e05494b8de300f6613553d8e7e765f51c1ce3722f66fac2e3b747c/shim.sock" debug=false pid=3419 Dec 05 03:13:18 minikube dockerd[1876]: time="2020-12-05T03:13:18.436685252Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a78a0fd3074352d30c21354ca98af3aad8d8e975f3e66fd43d5d43f7af7a5629/shim.sock" debug=false pid=3468 Dec 05 03:14:14 minikube dockerd[1876]: time="2020-12-05T03:14:14.102398043Z" 
level=info msg="shim containerd-shim started" address="/containerd-shim/moby/99f2505320f6e1d72c29ef4bb259bfde598099648459899252d5d35fe1da0bae/shim.sock" debug=false pid=4077 Dec 05 03:14:14 minikube dockerd[1876]: time="2020-12-05T03:14:14.103176729Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e4407c7039da52d02e43759f4e04a6ce1709d8c87382479ea73a720f8707b580/shim.sock" debug=false pid=4078 Dec 05 03:14:19 minikube dockerd[1876]: time="2020-12-05T03:14:19.111228365Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c340e26f8205331b352ef6e2e5f9675a9e8b75732e09ff5c915edf148998bb14/shim.sock" debug=false pid=4226 Dec 05 03:14:19 minikube dockerd[1876]: time="2020-12-05T03:14:19.245223677Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/814b624173b71e3f83df963c9f9509a4263c9d89310462677e14b9b2fe94816a/shim.sock" debug=false pid=4269 Dec 05 03:14:19 minikube dockerd[1876]: time="2020-12-05T03:14:19.271400248Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c66ee4319ba5f69a7ab1b6bf97394a1c231a19598bf031532204b87619076f42/shim.sock" debug=false pid=4285 Dec 05 03:14:19 minikube dockerd[1876]: time="2020-12-05T03:14:19.401891026Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20516897788c93d677e8aff3a599d88a8836df8a918a3abbce6269312423f698/shim.sock" debug=false pid=4367 Dec 05 03:14:20 minikube dockerd[1876]: time="2020-12-05T03:14:20.093582271Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b39fbb1b77970c0376c188f6238a5093b7a88ec0c8e03df0e997861a1f5e7df1/shim.sock" debug=false pid=4466 Dec 05 03:14:20 minikube dockerd[1876]: time="2020-12-05T03:14:20.132438733Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d00bd7ee0aaf908b77c45b4aa8636958c4fa3c8de318d4df999e8f13ae80cc44/shim.sock" debug=false pid=4476 Dec 05 03:14:23 minikube dockerd[1876]: time="2020-12-05T03:14:23.234001405Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d88b2861e78d678886b91d608234bd67600d8a52066d900837e52cc7cfa280c3/shim.sock" debug=false pid=4551 Dec 05 03:14:23 minikube dockerd[1876]: time="2020-12-05T03:14:23.914912922Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a4f4120035a5f627efa246779b912886627bd0feca3dce1328e672927cd85d73/shim.sock" debug=false pid=4584 Dec 05 03:14:32 minikube dockerd[1876]: time="2020-12-05T03:14:32.273588815Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35/shim.sock" debug=false pid=4661 Dec 05 03:14:32 minikube dockerd[1876]: time="2020-12-05T03:14:32.504841741Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/941f7adaacb9b20b5c3e9670fd1b39c27b925c957db61ab884cf63f91fbb7f4b/shim.sock" debug=false pid=4694 Dec 05 03:14:57 minikube dockerd[1876]: time="2020-12-05T03:14:57.285842083Z" level=info msg="shim reaped" id=a4f4120035a5f627efa246779b912886627bd0feca3dce1328e672927cd85d73 Dec 05 03:14:57 minikube dockerd[1876]: time="2020-12-05T03:14:57.296194869Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 05 03:14:57 minikube dockerd[1876]: time="2020-12-05T03:14:57.384152809Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e85a7fc89758876ab5cb06f0e73f6c17a5d1b13ecc9e34ec2ad80dd6bd4c4eb4/shim.sock" 
debug=false pid=4940 Dec 05 03:15:02 minikube dockerd[1876]: time="2020-12-05T03:15:02.182784431Z" level=info msg="shim reaped" id=d00bd7ee0aaf908b77c45b4aa8636958c4fa3c8de318d4df999e8f13ae80cc44 Dec 05 03:15:02 minikube dockerd[1876]: time="2020-12-05T03:15:02.193756629Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 05 03:15:14 minikube dockerd[1876]: time="2020-12-05T03:15:14.594103341Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0d02d92d9e51300a2b2412fb30b71cbdf02d9a77159cf3393c2841c12d1a3a98/shim.sock" debug=false pid=5214 Dec 05 03:15:15 minikube dockerd[1876]: time="2020-12-05T03:15:15.419366780Z" level=info msg="shim reaped" id=b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:15:15 minikube dockerd[1876]: time="2020-12-05T03:15:15.429756611Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Dec 05 03:18:02 minikube dockerd[1876]: time="2020-12-05T03:18:02.569300438Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/31e7e0f33da832ea515f4dcaf8040adec03b03fcd3405c483a93402472533e28/shim.sock" debug=false pid=5901 ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID 31e7e0f33da83 503bc4b7440b9 15 minutes ago Running kubernetes-dashboard 10 99f2505320f6e 0d02d92d9e513 bad58561c4be7 18 minutes ago Running storage-provisioner 14 c340e26f82053 e85a7fc897588 4b26fa2d90ae3 18 minutes ago Running controller 9 20516897788c9 b1505a439b244 503bc4b7440b9 19 minutes ago Exited kubernetes-dashboard 9 99f2505320f6e 941f7adaacb9b 635b36f4d89f0 19 minutes ago Running kube-proxy 4 e4407c7039da5 d88b2861e78d6 86262685d9abb 19 minutes ago Running dashboard-metrics-scraper 5 814b624173b71 a4f4120035a5f 4b26fa2d90ae3 19 minutes ago Exited controller 8 20516897788c9 d00bd7ee0aaf9 bad58561c4be7 19 minutes ago Exited storage-provisioner 13 c340e26f82053 b39fbb1b77970 bfe3a36ebd252 19 minutes ago Running coredns 4 c66ee4319ba5f a78a0fd307435 0369cf4303ffd 20 minutes ago Running etcd 4 02e23eb17eed0 af23062ba6e05 4830ab6185860 20 minutes ago Running kube-controller-manager 6 5ccfad4f188bf 5499ec0e52cf7 b15c6247777d7 20 minutes ago Running kube-apiserver 6 1d4e03dbc28a6 bf546b76fd6c3 14cd22f7abe78 20 minutes ago Running kube-scheduler 4 38655bb53e06a 3accdb3e332fd bfe3a36ebd252 About an hour ago Exited coredns 3 e3be0c1db8d25 ae3c5c574dddb 86262685d9abb About an hour ago Exited dashboard-metrics-scraper 4 889a60d6073d8 ddc5c4dc8d1f4 635b36f4d89f0 About an hour ago Exited kube-proxy 3 6ce254e0001df 366c0897a1e89 0369cf4303ffd About an hour ago Exited etcd 3 1d2aa82a25877 f3ee10c11d9e7 b15c6247777d7 About an hour ago Exited kube-apiserver 5 a319643c1cdf4 3745203f95a6b 4830ab6185860 About an hour ago Exited kube-controller-manager 5 a30129773d91d 6016cd133098c 14cd22f7abe78 About an hour ago Exited kube-scheduler 3 1a62cc6652ccc 8e8f0012c3f4f jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 4 days ago Exited patch 0 cb05b8b3c1dfc 9f5bef8352374 jettech/kube-webhook-certgen@sha256:c6f018afe5dfce02110b332ea75bb846144e65d4993c7534886d8505a6960357 4 days ago Exited create 0 c92651846f394 ==> coredns [3accdb3e332f] <== .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: 
Going into lameduck mode for 5s ==> coredns [b39fbb1b7797] <== I1205 03:15:09.288802 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-05 03:14:39.183064278 +0000 UTC m=+15.871723840) (total time: 30.105582334s): Trace[2019727887]: [30.105582334s] [30.105582334s] END I1205 03:15:09.289923 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-05 03:14:39.182970183 +0000 UTC m=+15.871629815) (total time: 30.106929356s): Trace[1427131847]: [30.106929356s] [30.106929356s] END E1205 03:15:09.297602 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout E1205 03:15:09.297636 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout I1205 03:15:09.297858 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2020-12-05 03:14:39.182950348 +0000 UTC m=+15.871609986) (total time: 30.114873442s): Trace[911902081]: [30.114873442s] [30.114873442s] END E1205 03:15:09.297911 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout [INFO] plugin/ready: Still waiting on: "kubernetes" .:53 [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 CoreDNS-1.7.0 linux/amd64, go1.14.4, f59c03d [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/ready: Still waiting on: "kubernetes" ==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=57e2f55f47effe9ce396cea42a1e0eb4f611ebbd minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_11_30T18_24_31_0700 minikube.k8s.io/version=v1.11.0 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Mon, 30 Nov 2020 11:24:27 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Sat, 05 Dec 2020 03:33:27 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Sat, 05 Dec 2020 03:29:00 +0000 Mon, 30 Nov 2020 11:24:22 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 05 Dec 2020 03:29:00 +0000 Mon, 30 Nov 2020 11:24:22 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 05 Dec 2020 03:29:00 +0000 Mon, 30 Nov 2020 11:24:22 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 05 Dec 2020 03:29:00 +0000 Sat, 05 Dec 2020 01:54:05 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.64.5 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 
3936924Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 16954224Ki hugepages-2Mi: 0 memory: 3936924Ki pods: 110 System Info: Machine ID: 38bdace98f6349c4b9e88a10db7704cd System UUID: 698e11eb-0000-0000-bfd0-a886ddab0165 Boot ID: 1b50f999-061b-4bd1-a32c-43693081ccdf Kernel Version: 4.19.107 OS Image: Buildroot 2019.02.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.8 Kubelet Version: v1.19.4 Kube-Proxy Version: v1.19.4 Non-terminated Pods: (13 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- kube-system coredns-f9fd979d6-wmtvr 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 45h kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h kube-system ingress-nginx-controller-6f5f4f5cfc-lqtf6 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 46h kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 45h kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 45h kube-system kube-proxy-xhjmq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 45h kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 45h kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d16h kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-vkg4m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h kubernetes-dashboard kubernetes-dashboard-58b79879c5-fzrvg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 46h smt-local app-87494ff59-b4m45 250m (12%) 500m (25%) 0 (0%) 0 (0%) 32m smt-local app-87494ff59-b6p2h 250m (12%) 500m (25%) 0 (0%) 0 (0%) 32m smt-local app-87494ff59-brwzl 250m (12%) 500m (25%) 0 (0%) 0 (0%) 32m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1500m (75%) 1500m (75%) memory 160Mi (4%) 170Mi (4%) ephemeral-storage 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 20m kubelet Starting kubelet. Normal NodeAllocatableEnforced 20m kubelet Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 20m (x8 over 20m) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 20m (x8 over 20m) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 20m (x7 over 20m) kubelet Node minikube status is now: NodeHasSufficientPID Normal Starting 18m kube-proxy Starting kube-proxy. ==> dmesg <== [Dec 5 03:12] ERROR: earlyprintk= earlyser already used [ +0.000000] You have booted with nomodeset. 
This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-177) [ +0.000000] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184) [ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620) [ +0.010386] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +2.305680] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.006359] systemd-fstab-generator[1103]: Ignoring "noauto" for root device [ +0.006084] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +1.057963] vboxguest: loading out-of-tree module taints kernel. [ +0.005862] vboxguest: PCI device not found, probably running on physical hardware. [ +0.004984] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +27.214289] systemd-fstab-generator[1856]: Ignoring "noauto" for root device [ +0.117259] systemd-fstab-generator[1866]: Ignoring "noauto" for root device [Dec 5 03:13] systemd-fstab-generator[2100]: Ignoring "noauto" for root device [ +2.787927] systemd-fstab-generator[2296]: Ignoring "noauto" for root device [ +8.575922] kauditd_printk_skb: 107 callbacks suppressed [Dec 5 03:14] kauditd_printk_skb: 41 callbacks suppressed [ +18.980464] NFSD: Unable to end grace period: -110 [ +14.532957] kauditd_printk_skb: 17 callbacks suppressed [Dec 5 03:15] kauditd_printk_skb: 38 callbacks suppressed [Dec 5 03:18] kauditd_printk_skb: 8 callbacks suppressed ==> etcd [366c0897a1e8] <== 2020-12-05 03:03:50.920389 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:00.920614 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:10.920192 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:20.920162 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:30.919831 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:40.919714 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:04:50.920286 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:00.919920 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:10.919869 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:20.919721 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:30.919966 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:38.995452 I | mvcc: store.index: compact 690279 2020-12-05 03:05:38.996659 I | mvcc: finished scheduled compaction at 690279 (took 523.657Β΅s) 2020-12-05 03:05:40.920152 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:05:50.920117 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:00.919695 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:02.012809 W | etcdserver: read-only range request 
"key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:611" took too long (149.828163ms) to execute 2020-12-05 03:06:02.013467 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:6" took too long (149.673239ms) to execute 2020-12-05 03:06:10.919566 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:20.919928 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:30.920178 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:40.920223 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:06:50.920059 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:00.920602 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:10.920404 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:20.920454 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:30.919730 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:40.920619 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:07:50.919802 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:00.920308 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:10.920348 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:20.919834 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:30.921081 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:40.921660 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:08:50.920567 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:00.920168 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:10.919789 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:20.920109 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:30.920339 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:40.161355 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1111" took too long (191.530546ms) to execute 2020-12-05 03:09:40.919810 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:09:50.919410 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:00.920049 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:10.368651 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (128.479311ms) to execute 2020-12-05 03:10:10.920156 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:20.919934 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:30.919760 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:39.004508 I | mvcc: store.index: compact 690578 2020-12-05 03:10:39.006508 I | mvcc: finished scheduled compaction at 690578 (took 1.298756ms) 2020-12-05 03:10:40.919424 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:10:50.920054 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:00.920023 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:10.920441 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:20.919765 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:30.919459 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:40.451505 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (214.523207ms) to execute 2020-12-05 03:11:40.920278 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:11:50.919593 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:12:00.920605 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:12:06.731356 N | pkg/osutil: received terminated signal, shutting down... ==> etcd [a78a0fd30743] <== 2020-12-05 03:24:00.302210 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:24:10.302134 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:24:20.301892 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:24:30.303118 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:24:40.301535 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:24:50.302706 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:00.303844 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:10.301981 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:20.302814 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:30.302552 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:40.301775 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:25:50.302129 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:00.302123 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:10.302017 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:20.302848 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:30.303819 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:40.302828 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:26:50.302737 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:00.302405 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:10.303628 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:20.305085 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:30.303867 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:40.302244 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:27:50.301627 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:00.302132 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:10.304679 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:20.302141 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:30.302377 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:40.301595 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:28:50.179489 I | mvcc: store.index: compact 691593 2020-12-05 03:28:50.180396 I | mvcc: finished scheduled compaction at 691593 (took 456.616Β΅s) 2020-12-05 03:28:50.301663 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:29:00.302869 I | etcdserver/api/etcdhttp: /health OK 
(status code 200) 2020-12-05 03:29:10.301768 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:29:20.306674 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:29:30.302933 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:29:40.305914 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:29:50.301341 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:00.304986 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:10.309215 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:20.302689 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:30.302522 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:40.301402 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:30:50.302961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:00.301685 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:10.302040 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:20.301397 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:30.302068 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:40.303600 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:31:50.301823 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:00.304173 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:10.303585 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:20.304957 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:30.301811 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:40.302004 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:32:50.302365 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:33:00.303885 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:33:10.301642 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:33:20.302158 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-05 03:33:30.301610 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> kernel <== 03:33:36 up 21 min, 0 users, load average: 0.32, 0.35, 0.43 Linux minikube 4.19.107 #1 SMP Thu May 28 15:07:17 PDT 2020 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2019.02.10" ==> kube-apiserver [5499ec0e52cf] <== I1205 03:21:12.125510 1 client.go:360] parsed scheme: "passthrough" I1205 03:21:12.125676 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:21:12.125705 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:21:44.871842 1 client.go:360] parsed scheme: "passthrough" I1205 03:21:44.871962 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:21:44.871982 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:22:27.872111 1 client.go:360] parsed scheme: "passthrough" I1205 03:22:27.872205 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:22:27.872221 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:23:05.800570 1 client.go:360] parsed scheme: "passthrough" I1205 03:23:05.800688 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 
03:23:05.800740 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:23:43.111287 1 client.go:360] parsed scheme: "passthrough" I1205 03:23:43.111634 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:23:43.111670 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:24:22.460755 1 client.go:360] parsed scheme: "passthrough" I1205 03:24:22.460883 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:24:22.460901 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:24:58.130649 1 client.go:360] parsed scheme: "passthrough" I1205 03:24:58.130731 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:24:58.130744 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:25:36.732691 1 client.go:360] parsed scheme: "passthrough" I1205 03:25:36.732743 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:25:36.732754 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:26:07.873876 1 client.go:360] parsed scheme: "passthrough" I1205 03:26:07.873977 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:26:07.873999 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:26:51.361441 1 client.go:360] parsed scheme: "passthrough" I1205 03:26:51.361590 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:26:51.361649 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:27:28.814720 1 client.go:360] parsed scheme: "passthrough" I1205 03:27:28.814778 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:27:28.814791 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:28:01.978608 1 client.go:360] parsed scheme: "passthrough" I1205 03:28:01.978763 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:28:01.978798 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:28:37.405052 1 client.go:360] parsed scheme: "passthrough" I1205 03:28:37.405183 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:28:37.405200 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:29:12.526625 1 client.go:360] parsed scheme: "passthrough" I1205 03:29:12.526723 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:29:12.526735 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:29:47.116397 1 client.go:360] parsed scheme: "passthrough" I1205 03:29:47.116560 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:29:47.116793 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:30:32.103791 1 client.go:360] parsed scheme: "passthrough" I1205 03:30:32.104019 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:30:32.104135 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:31:13.621780 1 client.go:360] parsed scheme: "passthrough" I1205 03:31:13.621873 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] 
} I1205 03:31:13.621885 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:31:53.113237 1 client.go:360] parsed scheme: "passthrough" I1205 03:31:53.114010 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:31:53.114191 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:32:35.860760 1 client.go:360] parsed scheme: "passthrough" I1205 03:32:35.860845 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:32:35.860860 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I1205 03:33:09.859408 1 client.go:360] parsed scheme: "passthrough" I1205 03:33:09.859488 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1205 03:33:09.859502 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-apiserver [f3ee10c11d9e] <== W1205 03:12:06.739448 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.739492 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... I1205 03:12:06.739565 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.739698 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick W1205 03:12:06.740995 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... I1205 03:12:06.783607 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.783902 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick W1205 03:12:06.784033 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
I1205 03:12:06.784167 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784274 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784368 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784465 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784590 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784682 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784776 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784873 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.784984 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785439 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785559 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785654 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785748 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785842 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.785990 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.786091 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.786183 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.786275 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick I1205 03:12:06.786468 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick W1205 03:12:06.788524 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788580 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788622 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788664 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788708 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... 
W1205 03:12:06.788761 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788806 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788851 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788895 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788939 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.788985 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789136 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789187 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789271 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789320 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789367 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789416 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789466 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting... W1205 03:12:06.789517 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. 
Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 03:12:06.789599 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 03:12:06.789650 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 03:12:06.789699 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W1205 03:12:06.789750 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 }. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I1205 03:12:06.790189 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.790416 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.790551 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.790669 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.790763 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.790856 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.791562 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.791709 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.792501 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I1205 03:12:06.792980 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick

==> kube-controller-manager [3745203f95a6] <==
I1205 02:05:50.015262 1 shared_informer.go:247] Caches are synced for expand
I1205 02:05:50.015759 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1205 02:05:50.017086 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1205 02:05:50.017270 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1205 02:05:50.045587 1 shared_informer.go:247] Caches are synced for TTL
I1205 02:05:50.062572 1 shared_informer.go:247] Caches are synced for GC
I1205 02:05:50.064067 1 shared_informer.go:247] Caches are synced for endpoint
I1205 02:05:50.065748 1 shared_informer.go:247] Caches are synced for ReplicationController
I1205 02:05:50.065823 1 shared_informer.go:247] Caches are synced for HPA
I1205 02:05:50.067218 1 shared_informer.go:247] Caches are synced for disruption
I1205 02:05:50.067258 1 disruption.go:339] Sending events to api server.
I1205 02:05:50.067461 1 shared_informer.go:247] Caches are synced for PV protection I1205 02:05:50.076393 1 shared_informer.go:247] Caches are synced for stateful set I1205 02:05:50.078277 1 shared_informer.go:247] Caches are synced for persistent volume I1205 02:05:50.084716 1 shared_informer.go:247] Caches are synced for PVC protection I1205 02:05:50.095228 1 shared_informer.go:247] Caches are synced for taint I1205 02:05:50.096099 1 taint_manager.go:187] Starting NoExecuteTaintManager I1205 02:05:50.096663 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1205 02:05:50.097284 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. I1205 02:05:50.097716 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. I1205 02:05:50.098087 1 shared_informer.go:247] Caches are synced for attach detach I1205 02:05:50.107368 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I1205 02:05:50.107475 1 shared_informer.go:247] Caches are synced for deployment I1205 02:05:50.142497 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1205 02:05:50.143105 1 shared_informer.go:247] Caches are synced for daemon sets I1205 02:05:50.143420 1 shared_informer.go:247] Caches are synced for ReplicaSet I1205 02:05:50.144679 1 shared_informer.go:247] Caches are synced for endpoint_slice I1205 02:05:50.176151 1 shared_informer.go:247] Caches are synced for job I1205 02:05:50.267723 1 shared_informer.go:247] Caches are synced for resource quota I1205 02:05:50.514010 1 request.go:645] Throttling request took 1.048729828s, request: GET:https://192.168.64.5:8443/apis/networking.k8s.io/v1beta1?timeout=32s I1205 02:05:50.519067 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1205 02:05:50.564106 1 shared_informer.go:247] Caches are synced for garbage collector I1205 02:05:50.565591 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage I1205 02:05:50.619461 1 shared_informer.go:247] Caches are synced for garbage collector I1205 02:05:51.215619 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1205 02:05:51.215652 1 shared_informer.go:247] Caches are synced for resource quota I1205 02:18:15.295242 1 event.go:291] "Event occurred" object="smt-local/app" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set app-87494ff59 to 3" I1205 02:18:15.318755 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-67vf7" I1205 02:18:15.349441 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-pwh7g" I1205 02:18:15.349515 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-kwxln" I1205 02:21:11.638190 1 event.go:291] "Event occurred" object="smt-local/code-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 02:21:17.115436 1 event.go:291] "Event occurred" object="smt-local/mysql-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 02:21:20.443491 1 event.go:291] "Event occurred" object="smt-local/redis-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 02:21:31.778034 1 event.go:291] "Event occurred" object="smt-local/app" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set app-87494ff59 to 3" I1205 02:21:31.804403 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-gbsmg" I1205 02:21:31.833898 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-bppss" I1205 02:21:31.834604 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-z6qwc" I1205 02:54:52.585417 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-bw52x" I1205 02:55:16.805512 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-vcfrr" I1205 02:55:29.733471 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" 
message="Created pod: app-87494ff59-2ghzv" I1205 02:55:39.015162 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-ltkqs" I1205 03:00:21.288293 1 event.go:291] "Event occurred" object="smt-local/code-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 03:00:55.452629 1 event.go:291] "Event occurred" object="smt-local/mysql-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 03:00:55.453237 1 event.go:291] "Event occurred" object="smt-local/mysql-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 03:01:00.967959 1 event.go:291] "Event occurred" object="smt-local/redis-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 03:01:00.968319 1 event.go:291] "Event occurred" object="smt-local/redis-pv-claim" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator" I1205 03:01:07.254043 1 event.go:291] "Event occurred" object="smt-local/app" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set app-87494ff59 to 3" I1205 03:01:07.285716 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-brwzl" I1205 03:01:07.339375 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-b6p2h" I1205 03:01:07.340643 1 event.go:291] "Event occurred" object="smt-local/app-87494ff59" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: app-87494ff59-b4m45" ==> kube-controller-manager [af23062ba6e0] <== I1205 03:14:01.612157 1 controllermanager.go:549] Started "endpointslicemirroring" I1205 03:14:01.630369 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller I1205 03:14:01.630415 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring I1205 03:14:01.761213 1 controllermanager.go:549] Started "deployment" I1205 03:14:01.761243 1 deployment_controller.go:153] Starting deployment controller I1205 03:14:01.761666 1 shared_informer.go:240] Waiting for caches to sync for deployment I1205 03:14:01.911842 1 controllermanager.go:549] Started "csrapproving" I1205 03:14:01.911981 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I1205 03:14:01.912384 1 shared_informer.go:240] Waiting for 
caches to sync for certificate-csrapproving I1205 03:14:02.060602 1 node_lifecycle_controller.go:77] Sending events to api server E1205 03:14:02.061063 1 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided W1205 03:14:02.061187 1 controllermanager.go:541] Skipping "cloud-node-lifecycle" I1205 03:14:02.212249 1 controllermanager.go:549] Started "persistentvolume-expander" I1205 03:14:02.212385 1 expand_controller.go:303] Starting expand controller I1205 03:14:02.212962 1 shared_informer.go:240] Waiting for caches to sync for expand I1205 03:14:02.360605 1 controllermanager.go:549] Started "pv-protection" I1205 03:14:02.360888 1 pv_protection_controller.go:83] Starting PV protection controller I1205 03:14:02.360901 1 shared_informer.go:240] Waiting for caches to sync for PV protection I1205 03:14:02.388597 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1205 03:14:02.451356 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1205 03:14:02.451447 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1205 03:14:02.451461 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1205 03:14:02.451501 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1205 03:14:02.474709 1 shared_informer.go:247] Caches are synced for TTL W1205 03:14:02.474913 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist I1205 03:14:02.475656 1 shared_informer.go:247] Caches are synced for PV protection I1205 03:14:02.476039 1 shared_informer.go:247] Caches are synced for namespace I1205 03:14:02.495640 1 shared_informer.go:247] Caches are synced for HPA I1205 03:14:02.511863 1 shared_informer.go:247] Caches are synced for service account I1205 03:14:02.511921 1 shared_informer.go:247] Caches are synced for attach detach I1205 03:14:02.512443 1 shared_informer.go:247] Caches are synced for job I1205 03:14:02.558972 1 shared_informer.go:247] Caches are synced for endpoint_slice I1205 03:14:02.559395 1 shared_informer.go:247] Caches are synced for expand I1205 03:14:02.559424 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1205 03:14:02.559551 1 shared_informer.go:247] Caches are synced for GC I1205 03:14:02.558972 1 shared_informer.go:247] Caches are synced for daemon sets I1205 03:14:02.558989 1 shared_informer.go:247] Caches are synced for PVC protection I1205 03:14:02.560844 1 shared_informer.go:247] Caches are synced for stateful set I1205 03:14:02.564604 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1205 03:14:02.564631 1 shared_informer.go:247] Caches are synced for taint I1205 03:14:02.564659 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1205 03:14:02.564773 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1205 03:14:02.565682 1 shared_informer.go:247] Caches are synced for persistent volume I1205 03:14:02.565866 1 shared_informer.go:247] Caches are synced for ReplicationController I1205 03:14:02.565933 1 taint_manager.go:187] Starting NoExecuteTaintManager I1205 03:14:02.566236 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: W1205 03:14:02.568355 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. 
I1205 03:14:02.568452 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I1205 03:14:02.571368 1 shared_informer.go:247] Caches are synced for deployment
I1205 03:14:02.596212 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1205 03:14:02.596245 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1205 03:14:02.606951 1 shared_informer.go:247] Caches are synced for endpoint
I1205 03:14:02.689626 1 shared_informer.go:247] Caches are synced for resource quota
I1205 03:14:02.691139 1 shared_informer.go:247] Caches are synced for resource quota
I1205 03:14:02.709505 1 shared_informer.go:247] Caches are synced for disruption
I1205 03:14:02.709595 1 disruption.go:339] Sending events to api server.
I1205 03:14:02.791593 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1205 03:14:02.991898 1 shared_informer.go:247] Caches are synced for garbage collector
I1205 03:14:03.011611 1 shared_informer.go:247] Caches are synced for garbage collector
I1205 03:14:03.011790 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-proxy [941f7adaacb9] <==
I1205 03:14:46.132221 1 node.go:136] Successfully retrieved node IP: 192.168.64.5
I1205 03:14:46.132924 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.64.5), assume IPv4 operation
W1205 03:14:47.397377 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1205 03:14:47.397869 1 server_others.go:186] Using iptables Proxier.
W1205 03:14:47.398120 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1205 03:14:47.398183 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1205 03:14:47.398887 1 server.go:650] Version: v1.19.4
I1205 03:14:47.443536 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1205 03:14:47.443849 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1205 03:14:47.444688 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1205 03:14:47.449701 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1205 03:14:47.449970 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1205 03:14:47.486975 1 config.go:315] Starting service config controller
I1205 03:14:47.487511 1 shared_informer.go:240] Waiting for caches to sync for service config
I1205 03:14:47.508782 1 config.go:224] Starting endpoint slice config controller
I1205 03:14:47.509166 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1205 03:14:47.687840 1 shared_informer.go:247] Caches are synced for service config
I1205 03:14:47.710126 1 shared_informer.go:247] Caches are synced for endpoint slice config

==> kube-proxy [ddc5c4dc8d1f] <==
I1205 02:06:28.554539 1 node.go:136] Successfully retrieved node IP: 192.168.64.5
I1205 02:06:28.554933 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.64.5), assume IPv4 operation
W1205 02:06:34.207899 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I1205 02:06:34.208061 1 server_others.go:186] Using iptables Proxier.
W1205 02:06:34.208156 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined
I1205 02:06:34.208166 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local
I1205 02:06:34.208553 1 server.go:650] Version: v1.19.4
I1205 02:06:34.209238 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1205 02:06:34.209321 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1205 02:06:34.209759 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1205 02:06:34.215090 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1205 02:06:34.215185 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1205 02:06:34.509273 1 config.go:315] Starting service config controller
I1205 02:06:34.509436 1 shared_informer.go:240] Waiting for caches to sync for service config
I1205 02:06:34.509527 1 config.go:224] Starting endpoint slice config controller
I1205 02:06:34.509536 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1205 02:06:34.709988 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1205 02:06:34.710079 1 shared_informer.go:247] Caches are synced for service config

==> kube-scheduler [6016cd133098] <==
W1205 02:05:26.903328 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1205 02:05:27.064547 1 registry.go:173] Registering SelectorSpread plugin
I1205 02:05:27.064995 1 registry.go:173] Registering SelectorSpread plugin
I1205 02:05:27.293160 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1205 02:05:27.293602 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1205 02:05:27.360546 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1205 02:05:27.360975 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1205 02:05:27.362259 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 02:05:27.417415 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 02:05:27.419409 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 02:05:27.446214 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 02:05:27.446677 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134:
Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.446984 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.447329 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.447738 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.447972 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.448276 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.448674 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.64.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.448904 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:27.449118 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.276367 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.295187 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.384049 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.387101 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
"https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.502383 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.520205 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.553565 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.832336 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.64.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.872894 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.910175 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:28.942693 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:29.011146 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:29.035130 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:30.252223 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:30.403736 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: 
connect: connection refused E1205 02:05:31.021242 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.142484 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.144470 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.64.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.342506 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.353305 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.383512 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.421557 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.567567 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:31.860718 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:32.085817 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:32.107642 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 02:05:44.374726 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch 
*v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1205 02:05:44.375101 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1205 02:05:44.375276 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1205 02:05:44.375113 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1205 02:05:44.375723 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1205 02:05:44.375986 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1205 02:05:44.376194 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1205 02:05:44.376341 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1205 02:05:44.376754 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1205 02:05:44.376757 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1205 02:05:44.377263 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1205 02:05:44.377397 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1205 02:05:44.377320 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed 
to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I1205 02:05:50.893939 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kube-scheduler [bf546b76fd6c] <==
E1205 03:13:36.596399 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:36.596889 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:36.597147 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:36.619215 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.330660 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.414372 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.438434 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.446590 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.465640 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.471228 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused
E1205 03:13:37.663925 1
reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:37.678107 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:37.717459 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:37.753177 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.64.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:37.829040 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:37.896229 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:38.148770 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:39.435802 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:39.467997 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:39.471967 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:39.920727 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.64.5:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:39.943504 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get 
"https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.087741 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.250626 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.625767 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.680681 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.64.5:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.803526 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.64.5:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.807030 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.892061 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused E1205 03:13:40.991816 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.64.5:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.64.5:8443: connect: connection refused I1205 03:13:53.384676 1 trace.go:205] Trace[2123559801]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:43.383) (total time: 10001ms): Trace[2123559801]: [10.001096147s] [10.001096147s] END E1205 03:13:53.384747 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.64.5:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:53.944790 1 trace.go:205] Trace[1821972470]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (05-Dec-2020 03:13:43.943) (total time: 10001ms): Trace[1821972470]: [10.001187585s] [10.001187585s] END E1205 03:13:53.944873 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
"https://192.168.64.5:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:54.296681 1 trace.go:205] Trace[278774705]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:44.295) (total time: 10001ms): Trace[278774705]: [10.001076578s] [10.001076578s] END E1205 03:13:54.296736 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.64.5:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:54.549613 1 trace.go:205] Trace[2017424352]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:44.548) (total time: 10001ms): Trace[2017424352]: [10.001255141s] [10.001255141s] END E1205 03:13:54.549668 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.64.5:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:54.961544 1 trace.go:205] Trace[513630654]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:44.960) (total time: 10001ms): Trace[513630654]: [10.001017014s] [10.001017014s] END E1205 03:13:54.961866 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.64.5:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:55.150756 1 trace.go:205] Trace[1184184674]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:45.149) (total time: 10001ms): Trace[1184184674]: [10.001541982s] [10.001541982s] END E1205 03:13:55.150906 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.64.5:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:55.525378 1 trace.go:205] Trace[940620469]: "Reflector ListAndWatch" name:k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188 (05-Dec-2020 03:13:45.523) (total time: 10001ms): Trace[940620469]: [10.001374516s] [10.001374516s] END E1205 03:13:55.525565 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": net/http: TLS handshake timeout I1205 03:13:55.633913 1 trace.go:205] Trace[1738256694]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (05-Dec-2020 03:13:45.632) (total time: 10001ms): Trace[1738256694]: [10.001040711s] [10.001040711s] END E1205 03:13:55.634224 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.64.5:8443/api/v1/pods?limit=500&resourceVersion=0": net/http: TLS handshake timeout E1205 03:13:55.828235 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource 
"replicationcontrollers" in API group "" at the cluster scope E1205 03:13:55.828727 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1205 03:13:55.828790 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1205 03:13:55.828864 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1205 03:13:55.829176 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope I1205 03:14:03.240473 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Sat 2020-12-05 03:12:29 UTC, end at Sat 2020-12-05 03:33:43 UTC. -- Dec 05 03:16:21 minikube kubelet[2304]: I1205 03:16:21.487149 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:16:21 minikube kubelet[2304]: E1205 03:16:21.488368 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:16:34 minikube kubelet[2304]: I1205 03:16:34.487415 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:16:34 minikube kubelet[2304]: E1205 03:16:34.487798 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:16:45 minikube kubelet[2304]: I1205 03:16:45.486867 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:16:45 minikube kubelet[2304]: E1205 03:16:45.487529 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard 
pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:17:00 minikube kubelet[2304]: I1205 03:17:00.488367 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:17:00 minikube kubelet[2304]: E1205 03:17:00.490121 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:17:14 minikube kubelet[2304]: I1205 03:17:14.486568 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:17:14 minikube kubelet[2304]: E1205 03:17:14.486983 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:17:27 minikube kubelet[2304]: I1205 03:17:27.486671 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:17:27 minikube kubelet[2304]: E1205 03:17:27.487665 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:17:39 minikube kubelet[2304]: I1205 03:17:39.486823 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:17:39 minikube kubelet[2304]: E1205 03:17:39.487842 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:17:50 minikube kubelet[2304]: I1205 03:17:50.486752 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:17:50 minikube kubelet[2304]: E1205 03:17:50.487255 2304 pod_workers.go:191] Error syncing pod 8e38ae9e-1c28-4a51-a8fa-197cbd44bcef ("kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 2m40s restarting failed container=kubernetes-dashboard 
pod=kubernetes-dashboard-58b79879c5-fzrvg_kubernetes-dashboard(8e38ae9e-1c28-4a51-a8fa-197cbd44bcef)" Dec 05 03:18:02 minikube kubelet[2304]: I1205 03:18:02.487475 2304 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b1505a439b244b1a056be94926454398a7db4a4ba5203947acaa7e2f24a67b35 Dec 05 03:18:03 minikube kubelet[2304]: W1205 03:18:03.261920 2304 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-58b79879c5-fzrvg through plugin: invalid network status for Dec 05 03:18:32 minikube kubelet[2304]: E1205 03:18:32.487700 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition; skipping pod Dec 05 03:18:32 minikube kubelet[2304]: E1205 03:18:32.487765 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition Dec 05 03:18:32 minikube kubelet[2304]: E1205 03:18:32.488442 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:18:32 minikube kubelet[2304]: E1205 03:18:32.488465 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:18:33 minikube kubelet[2304]: E1205 03:18:33.487663 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:18:33 minikube kubelet[2304]: E1205 03:18:33.488042 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:20:48 minikube kubelet[2304]: E1205 03:20:48.488626 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:20:48 minikube kubelet[2304]: E1205 03:20:48.488776 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage 
default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:20:48 minikube kubelet[2304]: E1205 03:20:48.489362 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:20:48 minikube kubelet[2304]: E1205 03:20:48.489404 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:20:50 minikube kubelet[2304]: E1205 03:20:50.487135 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:20:50 minikube kubelet[2304]: E1205 03:20:50.488242 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:23:04 minikube kubelet[2304]: E1205 03:23:04.488121 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:23:04 minikube kubelet[2304]: E1205 03:23:04.488240 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:23:05 minikube kubelet[2304]: E1205 03:23:05.487810 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:23:05 minikube kubelet[2304]: E1205 03:23:05.487925 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:23:06 minikube kubelet[2304]: E1205 03:23:06.488549 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:23:06 minikube kubelet[2304]: E1205 03:23:06.488646 
2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:25:19 minikube kubelet[2304]: E1205 03:25:19.487653 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition; skipping pod Dec 05 03:25:19 minikube kubelet[2304]: E1205 03:25:19.487791 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition Dec 05 03:25:21 minikube kubelet[2304]: E1205 03:25:21.487554 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition; skipping pod Dec 05 03:25:21 minikube kubelet[2304]: E1205 03:25:21.487653 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition Dec 05 03:25:23 minikube kubelet[2304]: E1205 03:25:23.487592 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[default-token-cp4nw nginx-config mysql-persistent-storage code-storage]: timed out waiting for the condition; skipping pod Dec 05 03:25:23 minikube kubelet[2304]: E1205 03:25:23.488250 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[default-token-cp4nw nginx-config mysql-persistent-storage code-storage]: timed out waiting for the condition Dec 05 03:27:33 minikube kubelet[2304]: E1205 03:27:33.488414 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:27:33 minikube kubelet[2304]: E1205 03:27:33.488542 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:27:37 minikube kubelet[2304]: E1205 03:27:37.488310 2304 kubelet.go:1594] Unable to attach or mount volumes for pod 
"app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:27:37 minikube kubelet[2304]: E1205 03:27:37.489273 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:27:39 minikube kubelet[2304]: E1205 03:27:39.488107 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:27:39 minikube kubelet[2304]: E1205 03:27:39.488204 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:29:51 minikube kubelet[2304]: E1205 03:29:51.488657 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition; skipping pod Dec 05 03:29:51 minikube kubelet[2304]: E1205 03:29:51.488736 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition Dec 05 03:29:51 minikube kubelet[2304]: E1205 03:29:51.488657 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:29:51 minikube kubelet[2304]: E1205 03:29:51.488921 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition Dec 05 03:29:56 minikube kubelet[2304]: E1205 03:29:56.488065 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition; skipping pod Dec 05 03:29:56 minikube kubelet[2304]: E1205 03:29:56.488957 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], 
unattached volumes=[nginx-config mysql-persistent-storage code-storage default-token-cp4nw]: timed out waiting for the condition Dec 05 03:32:05 minikube kubelet[2304]: E1205 03:32:05.488652 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)": unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition; skipping pod Dec 05 03:32:05 minikube kubelet[2304]: E1205 03:32:05.488757 2304 pod_workers.go:191] Error syncing pod 6fd0573c-72c6-4ad5-9c6a-1444c8f25687 ("app-87494ff59-b4m45_smt-local(6fd0573c-72c6-4ad5-9c6a-1444c8f25687)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[mysql-persistent-storage code-storage default-token-cp4nw nginx-config]: timed out waiting for the condition Dec 05 03:32:06 minikube kubelet[2304]: E1205 03:32:06.487701 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)": unmounted volumes=[nginx-config], unattached volumes=[default-token-cp4nw nginx-config mysql-persistent-storage code-storage]: timed out waiting for the condition; skipping pod Dec 05 03:32:06 minikube kubelet[2304]: E1205 03:32:06.487800 2304 pod_workers.go:191] Error syncing pod e7590034-aad7-4302-9cd2-78b5deea022c ("app-87494ff59-b6p2h_smt-local(e7590034-aad7-4302-9cd2-78b5deea022c)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[default-token-cp4nw nginx-config mysql-persistent-storage code-storage]: timed out waiting for the condition Dec 05 03:32:13 minikube kubelet[2304]: E1205 03:32:13.487797 2304 kubelet.go:1594] Unable to attach or mount volumes for pod "app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)": unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition; skipping pod Dec 05 03:32:13 minikube kubelet[2304]: E1205 03:32:13.487903 2304 pod_workers.go:191] Error syncing pod 0156a65d-3d92-434a-985b-e8f10011b243 ("app-87494ff59-brwzl_smt-local(0156a65d-3d92-434a-985b-e8f10011b243)"), skipping: unmounted volumes=[nginx-config], unattached volumes=[code-storage default-token-cp4nw nginx-config mysql-persistent-storage]: timed out waiting for the condition ==> kubernetes-dashboard [31e7e0f33da8] <== 2020/12/05 03:18:02 Using namespace: kubernetes-dashboard 2020/12/05 03:18:02 Using in-cluster config to connect to apiserver 2020/12/05 03:18:02 Using secret token for csrf signing 2020/12/05 03:18:02 Initializing csrf token from kubernetes-dashboard-csrf secret 2020/12/05 03:18:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf 2020/12/05 03:18:02 Successful initial request to the apiserver, version: v1.19.4 2020/12/05 03:18:02 Generating JWE encryption key 2020/12/05 03:18:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. 
Starting
2020/12/05 03:18:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/12/05 03:18:03 Initializing JWE encryption key from synchronized object
2020/12/05 03:18:03 Creating in-cluster Sidecar client
2020/12/05 03:18:03 Successful request to sidecar
2020/12/05 03:18:03 Serving insecurely on HTTP port: 9090
2020/12/05 03:18:02 Starting overwatch
==> kubernetes-dashboard [b1505a439b24] <==
2020/12/05 03:14:44 Using namespace: kubernetes-dashboard
2020/12/05 03:14:45 Using in-cluster config to connect to apiserver
2020/12/05 03:14:45 Using secret token for csrf signing
2020/12/05 03:14:45 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/12/05 03:14:44 Starting overwatch
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004c8540)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0001a5580)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0001a5580)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
        /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550
main.main()
        /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d
==> storage-provisioner [0d02d92d9e51] <==
I1205 03:15:14.884409 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1205 03:15:32.378947 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1205 03:15:32.380894 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_2a93b4d3-1e49-45e5-b7d8-77bd0e6fa20d!
I1205 03:15:32.412403 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"825f7446-d930-4085-a272-e9821d8795a6", APIVersion:"v1", ResourceVersion:"691150", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_2a93b4d3-1e49-45e5-b7d8-77bd0e6fa20d became leader
I1205 03:15:32.481626 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_2a93b4d3-1e49-45e5-b7d8-77bd0e6fa20d!
==> storage-provisioner [d00bd7ee0aaf] <==
F1205 03:15:02.078731 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
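The sectioned output above ("==> kubelet <==", "==> kubernetes-dashboard [...] <==", and so on) is the format produced by minikube logs. To re-collect the same bundle for a report, something like the following should work (the --file flag is assumed to be available in this minikube release):

# Dump the combined kube-scheduler / kubelet / addon container logs to a file
minikube logs --file=minikube-logs.txt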
jasperf commented 3 years ago

Well, after running minikube delete, minikube start, and minikube addons enable ingress, everything seemed to start up normally again (the full sequence is sketched after the output below):

minikube start --mount-string="$HOME/code/smt-data:/data"
😄  minikube v1.15.1 on Darwin 10.15.7
✨  Using the hyperkit driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
➜  smt-deploy git:(main) ✗ history|grep enable
 1736  minikube addons enable ingress
 6567  minikube addons enable ingress
 6642  minikube addons enable ingress
➜  smt-deploy git:(main) ✗ minikube addons enable ingress
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled
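So the recovery that worked here was simply recreating the cluster and re-enabling the addon. As a rough sketch (the --mount-string value is specific to this setup and can be dropped if you don't need the host mount):

# Recreate the cluster, then re-enable the ingress addon
minikube delete
minikube start --mount-string="$HOME/code/smt-data:/data"
minikube addons enable ingress

# Confirm the addon shows up as enabled
minikube addons list | grep ingress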
robrich commented 3 years ago

I get this warning on k8s 1.19 and newer, but not on 1.18 and older, because the minikube ingress addon still uses the old RBAC syntax. I'm not sure it's worth worrying about until the old syntax is removed from a production k8s release and/or the new syntax is supported on all supported k8s versions.

I solved the warning by turning it off and on again:

minikube addons disable ingress
minikube addons enable ingress
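For what it's worth, the warnings come from the addon manifests still using the v1beta1 RBAC and admission APIs, which are only deprecated until their removal in Kubernetes 1.22. A quick way to check what the cluster still serves and whether the controller came back healthy after re-enabling (the ingress controller's namespace may be kube-system or ingress-nginx depending on the minikube version, so this searches all namespaces):

# API groups the cluster still serves; the v1beta1 variants go away in k8s 1.22
kubectl api-versions | grep -E 'rbac.authorization.k8s.io|admissionregistration.k8s.io'

# Ingress controller pod status after re-enabling the addon
kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx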
thangavel-projects commented 3 years ago

I get this warning on k8s 1.19 and newer, but not on 1.18 and older, because the minikube ingress addon still uses the old RBAC syntax. I'm not sure it's worth worrying about until the old syntax is removed from a production k8s release and/or the new syntax is supported on all supported k8s versions.

I solved the warning by turning it off and on again:

minikube addons disable ingress
minikube addons enable ingress

It's strange, but it got resolved for me after disabling and re-enabling the addon. Thanks @robrich

cglacet commented 3 years ago

@robinpercy Disabling and re-enabling worked for me too.