Status: Closed. @springcomefromprogrammingtheworld closed this issue 3 years ago.
Hey @springcomefromprogrammingtheworld, thanks for opening this issue. Could you please provide the output of:
minikube addons enable ingress --alsologtostderr
Hi @springcomefromprogrammingtheworld, we haven't heard back from you. Do you still have this issue? There isn't enough information here to make it actionable, and enough time has passed that the issue is likely difficult to replicate.
I will close this issue for now, but feel free to reopen it when you are ready to provide more information.
Commands needed to reproduce the problem:
Full output of the failed command:
🔎 Verifying ingress addon...
❌ Exiting due to MK_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/ingress-configmap.yaml -f /etc/kubernetes/addons/ingress-rbac.yaml -f /etc/kubernetes/addons/ingress-dp.yaml: Process exited with status 1 stdout: configmap/nginx-load-balancer-conf unchanged configmap/tcp-services unchanged configmap/udp-services unchanged serviceaccount/ingress-nginx unchanged serviceaccount/ingress-nginx-admission unchanged clusterrole.rbac.authorization.k8s.io/system::ingress-nginx unchanged role.rbac.authorization.k8s.io/system::ingress-nginx unchanged rolebinding.rbac.authorization.k8s.io/system::ingress-nginx unchanged clusterrolebinding.rbac.authorization.k8s.io/system::ingress-nginx unchanged role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged rolebinding.rbac.authorization.k8s.io/system::ingress-nginx-admission unchanged deployment.apps/ingress-nginx-controller unchanged validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged service/ingress-nginx-controller-admission unchanged
stderr: Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding Warning: admissionregistration.k8s.io/v1beta1 ValidatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration Error from server (Invalid): error when applying patch: {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc\",\"--namespace=kube-system\",\"--secret-name=ingress-nginx-admission\"],\"image\":\"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2","name":"create"}]}}}} to: Resource: "batch/v1, Resource=jobs", GroupVersionKind: 
"batch/v1, Kind=Job" Name: "ingress-nginx-admission-create", Namespace: "kube-system" for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"a67a50bf-087c-4dda-b465-ec00a89effd7", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.kube-system.svc", "--namespace=kube-system", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(core.Probe)(nil), ReadinessProbe:(core.Probe)(nil), StartupProbe:(core.Probe)(nil), Lifecycle:(core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(core.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(int64)(0xc00d1c6ef0), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(bool)(nil), NodeName:"", SecurityContext:(core.PodSecurityContext)(0xc010a6a880), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(bool)(nil), Affinity:(core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), PreemptionPolicy:(core.PreemptionPolicy)(nil), DNSConfig:(core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable Error from server (Invalid): error when applying patch: 
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=kube-system\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"image\":\"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0\",\"imagePullPolicy\":null,\"name\":\"patch\"}],\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0","imagePullPolicy":null,"name":"patch"}]}}}} to: Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job" Name: "ingress-nginx-admission-patch", Namespace: "kube-system" for: "/etc/kubernetes/addons/ingress-dp.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "controller-uid":"9e69d0b3-d2a7-4da0-89ab-2d3623fce02a", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=kube-system", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(core.Probe)(nil), ReadinessProbe:(core.Probe)(nil), StartupProbe:(core.Probe)(nil), Lifecycle:(core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(int64)(0xc00d0e2c20), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(bool)(nil), NodeName:"", SecurityContext:(core.PodSecurityContext)(0xc00f5bb880), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(bool)(nil), 
Affinity:(core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), PreemptionPolicy:(core.PreemptionPolicy)(nil), DNSConfig:(core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil)}}: field is immutable ]
😿 If the above advice does not help, please let us know: 👉 https://github.com/kubernetes/minikube/issues/new/choose
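For anyone hitting the same `field is immutable` error: a Job's `spec.template` cannot be patched in place, so `kubectl apply` fails when the addon manifests reference a different `kube-webhook-certgen` image than the Jobs already in the cluster. A commonly suggested workaround (not an official fix; a sketch only — the job names and namespace below are taken from the error output above) is to delete the stale admission Jobs and re-enable the addon:

```shell
# Workaround sketch: delete the admission Jobs left over from a previous
# enable, then let the addon recreate them with the new image.
# Job names/namespace come from the "Job.batch ... is invalid" error above.
minikube addons disable ingress
kubectl -n kube-system delete job \
  ingress-nginx-admission-create \
  ingress-nginx-admission-patch
minikube addons enable ingress
```

If the Jobs have already been garbage-collected, the `kubectl delete` step simply reports NotFound and can be ignored.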
minikube logs
Output of the command:

==> container status <==
CONTAINER       IMAGE           CREATED      STATE    NAME                        ATTEMPT  POD ID
c23fa743ea7bc   85069258b98ac   2 hours ago  Running  storage-provisioner         15       d6770b341ae64
86cbcacd08b1e   4b26fa2d90ae3   2 hours ago  Running  controller                  3        9d56d268941c8
a3ae92766f7ec   4b26fa2d90ae3   2 hours ago  Exited   controller                  2        9d56d268941c8
a6569795d896e   85069258b98ac   2 hours ago  Exited   storage-provisioner         14       d6770b341ae64
0ab465a2f9d09   9a07b5b4bfac0   4 hours ago  Running  kubernetes-dashboard        2        f0f7de7084cf0
51dc2ca7e2329   9a07b5b4bfac0   4 hours ago  Exited   kubernetes-dashboard        1        f0f7de7084cf0
85d5de380ef97   bfe3a36ebd252   4 hours ago  Running  coredns                     1        69c8d8269e402
978d8ce2fc37c   86262685d9abb   4 hours ago  Running  dashboard-metrics-scraper   1        907ed17c77322
4696e5f573d1e   10cc881966cfd   4 hours ago  Running  kube-proxy                  1        af2b36c328d69
dca4a631962e1   b9fa1895dcaa6   4 hours ago  Running  kube-controller-manager     1        f97b18e58d7f7
384b571d20e77   3138b6e3d4712   4 hours ago  Running  kube-scheduler              1        c440584378591
66576570292bc   0369cf4303ffd   4 hours ago  Running  etcd                        1        a8c5ca14c1721
5fa4a8d2e9da7   ca9843d3b5454   4 hours ago  Running  kube-apiserver              3        fb95eefb10640
33a2f307b505e   86262685d9abb   4 hours ago  Exited   dashboard-metrics-scraper   0        aa244de000a78
d6d9332a48d6c   10cc881966cfd   6 hours ago  Exited   kube-proxy                  0        94bc03cdf9dfd
0533aa588b708   bfe3a36ebd252   6 hours ago  Exited   coredns                     0        cd11269e41976
a4ce5e4aa4c54   ca9843d3b5454   6 hours ago  Exited   kube-apiserver              2        c7828591b0103
e15e5d08eb1f2   b9fa1895dcaa6   6 hours ago  Exited   kube-controller-manager     0        fc627af7fd31c
6491fc7a9e1ad   3138b6e3d4712   6 hours ago  Exited   kube-scheduler              0        be8e705c70cd7
537be58350b2f   0369cf4303ffd   6 hours ago  Exited   etcd                        0        7bb4abb3b6946
414376f3e4cd8   4d4f44df9f905   6 hours ago  Exited   patch                       2        5840d19e7f0e3
bc301497e1235   jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7   6 hours ago  Exited   create   0   3c96afa00dec2
==> coredns [0533aa588b70] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

==> coredns [85d5de380ef9] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0104 03:57:04.367689 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-04 03:56:43.336245835 +0000 UTC m=+3.337954023) (total time: 21.005381297s):
Trace[2019727887]: [21.005381297s] [21.005381297s] END
E0104 03:57:04.369755 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
I0104 03:57:04.496591 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-04 03:56:43.33648462 +0000 UTC m=+3.338192749) (total time: 21.160056372s):
Trace[939984059]: [21.160056372s] [21.160056372s] END
I0104 03:57:04.496598 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-04 03:56:43.336327148 +0000 UTC m=+3.338035315) (total time: 21.160236718s):
Trace[911902081]: [21.160236718s] [21.160236718s] END
E0104 03:57:04.496661 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
E0104 03:57:04.496667 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
==> describe nodes <==
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=9f1e482427589ff8451c4723b6ba53bb9742fbb1
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/updated_at=2021_01_04T09_24_32_0700
                    minikube.k8s.io/version=v1.16.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 04 Jan 2021 01:24:27 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Mon, 04 Jan 2021 07:28:46 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure   False   Mon, 04 Jan 2021 07:27:07 +0000   Mon, 04 Jan 2021 05:51:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False   Mon, 04 Jan 2021 07:27:07 +0000   Mon, 04 Jan 2021 05:51:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False   Mon, 04 Jan 2021 07:27:07 +0000   Mon, 04 Jan 2021 05:51:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True    Mon, 04 Jan 2021 07:27:07 +0000   Mon, 04 Jan 2021 05:51:15 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                4
  ephemeral-storage:  38815216Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861292Ki
  pods:               110
Allocatable:
  cpu:                4
  ephemeral-storage:  38815216Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3861292Ki
  pods:               110
System Info:
  Machine ID:                 80ccc7692ad1434f889b859014fe7343
  System UUID:                6847bca0-1c99-4235-8954-c98176329b76
  Boot ID:                    b603177a-56ae-4008-b825-de53c595f98f
  Kernel Version:             3.10.0-1160.el7.x86_64
  OS Image:                   Ubuntu 20.04.1 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.0
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (11 in total)
  Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  kube-system           coredns-54d67798b7-flmh4                    100m (2%)     0 (0%)      70Mi (1%)        170Mi (4%)     5h38m
  kube-system           etcd-minikube                               100m (2%)     0 (0%)      100Mi (2%)       0 (0%)         5h37m
  kube-system           ingress-nginx-controller-5f568d55f8-ljgnf   100m (2%)     0 (0%)      90Mi (2%)        0 (0%)         4h14m
  kube-system           kube-apiserver-minikube                     250m (6%)     0 (0%)      0 (0%)           0 (0%)         5h38m
  kube-system           kube-controller-manager-minikube            200m (5%)     0 (0%)      0 (0%)           0 (0%)         5h38m
  kube-system           kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
  kube-system           kube-proxy-52x27                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5h38m
  kube-system           kube-scheduler-minikube                     100m (2%)     0 (0%)      0 (0%)           0 (0%)         5h37m
  kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6h4m
  kubernetes-dashboard  dashboard-metrics-scraper-c85578d8-cddgs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3h57m
  kubernetes-dashboard  kubernetes-dashboard-7db476d994-w2qgq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3h57m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  cpu                850m (21%)  0 (0%)
  memory             260Mi (6%)  170Mi (4%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
==> dmesg <==
[Jan 4 01:08] ACPI: RSDP 00000000000f6a00 00024 (v02 PTLTD )
[ +0.000000] ACPI: XSDT 00000000bfedc633 0005C (v01 INTEL 440BX 06040000 VMW 01324272)
[ +0.000000] ACPI: FACP 00000000bfefee73 000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
[ +0.000000] ACPI: DSDT 00000000bfedd9e8 2148B (v01 PTLTD Custom 06040000 MSFT 03000001)
[ +0.000000] ACPI: FACS 00000000bfefffc0 00040
[ +0.000000] ACPI: BOOT 00000000bfedd9c0 00028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
[ +0.000000] ACPI: APIC 00000000bfedd27e 00742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
[ +0.000000] ACPI: MCFG 00000000bfedd242 0003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
[ +0.000000] ACPI: SRAT 00000000bfedc72f 008D0 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
[ +0.000000] ACPI: HPET 00000000bfedc6f7 00038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
[ +0.000000] ACPI: WAET 00000000bfedc6cf 00028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
[ +0.000000] Zone ranges:
[ +0.000000] DMA [mem 0x00001000-0x00ffffff]
[ +0.000000] DMA32 [mem 0x01000000-0xffffffff]
[ +0.000000] Normal [mem 0x100000000-0x13fffffff]
[ +0.000000] Movable zone start for each node
[ +0.000000] Early memory node ranges
[ +0.000000] node 0: [mem 0x00001000-0x0009dfff]
[ +0.000000] node 0: [mem 0x00100000-0xbfecffff]
[ +0.000000] node 0: [mem 0xbff00000-0xbfffffff]
[ +0.000000] node 0: [mem 0x100000000-0x13fffffff]
[ +0.000000] Built 1 zonelists in Node order, mobility grouping on. Total pages: 1032024
[ +0.000000] Policy zone: Normal
[ +0.000000] ACPI: All ACPI Tables successfully acquired
[ +0.037682] core: CPUID marked event: 'cpu cycles' unavailable
[ +0.000001] core: CPUID marked event: 'instructions' unavailable
[ +0.000001] core: CPUID marked event: 'bus cycles' unavailable
[ +0.000001] core: CPUID marked event: 'cache references' unavailable
[ +0.000000] core: CPUID marked event: 'cache misses' unavailable
[ +0.000001] core: CPUID marked event: 'branch instructions' unavailable
[ +0.000001] core: CPUID marked event: 'branch misses' unavailable
[ +0.001580] NMI watchdog: disabled (cpu0): hardware events not enabled
[ +0.009892] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
[ +0.047549] ACPI: Enabled 4 GPEs in block 00 to 0F
[ +1.001601] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[ +0.845741] sd 2:0:0:0: [sda] Assuming drive cache: write through
[ +9.524087] piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
[Jan 4 01:09] TECH PREVIEW: Overlay filesystem may not be fully supported. Please review provided documentation for limitations.
[Jan 4 01:15] sched: RT throttling activated
[Jan 4 01:40] hrtimer: interrupt took 6654534 ns
==> etcd [537be58350b2] <== 2021-01-04 03:41:50.152403 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/ingress-nginx-controller-5f568d55f8-ljgnf.1656ea2c82f0d5d3\" " with result "range_response_count:1 size:951" took too long (139.122732ms) to execute 2021-01-04 03:41:50.152540 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:2 size:6718" took too long (147.834498ms) to execute 2021-01-04 03:41:50.152662 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:608" took too long (150.569787ms) to execute 2021-01-04 03:41:50.152765 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (151.129982ms) to execute 2021-01-04 03:41:50.152835 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (127.001044ms) to execute 2021-01-04 03:41:50.153829 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-7db476d994-w2qgq\" " with result "range_response_count:1 size:3877" took too long (128.63844ms) to execute 2021-01-04 03:41:50.154116 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/tcp-services\" " with result "range_response_count:1 size:691" took too long (128.894651ms) to execute 2021-01-04 03:41:50.154633 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-7db476d994-w2qgq\" " with result "range_response_count:1 size:3877" took too long (129.465448ms) to execute 2021-01-04 03:41:50.416624 W | etcdserver: read-only range request "key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" " with result "range_response_count:5 size:6385" took 
too long (259.072494ms) to execute 2021-01-04 03:41:50.418663 W | etcdserver: read-only range request "key:\"/registry/ranges/serviceips\" " with result "range_response_count:1 size:41235" took too long (387.575326ms) to execute 2021-01-04 03:41:50.425704 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (191.526252ms) to execute 2021-01-04 03:41:50.428255 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (130.416905ms) to execute 2021-01-04 03:41:50.428935 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/ingress-nginx\" " with result "range_response_count:1 size:897" took too long (194.352104ms) to execute 2021-01-04 03:41:50.432765 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-c85578d8-cddgs\" " with result "range_response_count:1 size:3960" took too long (130.911452ms) to execute 2021-01-04 03:41:50.433098 W | etcdserver: read-only range request "key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" " with result "range_response_count:5 size:1861" took too long (162.690027ms) to execute 2021-01-04 03:41:50.433524 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (135.37208ms) to execute 2021-01-04 03:41:58.766264 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:08.766513 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:18.764655 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:28.767962 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:38.765625 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:48.765310 I | 
etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:42:48.829139 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:52 size:40641" took too long (124.549147ms) to execute 2021-01-04 03:42:58.766102 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:08.765022 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:18.765542 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:25.071937 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:1 size:871" took too long (220.464404ms) to execute 2021-01-04 03:43:28.766477 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:38.766276 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:48.765258 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:43:58.766787 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:08.767006 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:18.766863 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:28.877013 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:29.876432 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (266.206188ms) to execute 2021-01-04 03:44:38.766226 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:48.765169 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:44:58.766269 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:45:08.765280 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:45:18.766177 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 
03:45:23.309931 I | mvcc: store.index: compact 6760 2021-01-04 03:45:23.461350 I | mvcc: finished scheduled compaction at 6760 (took 82.055527ms) 2021-01-04 03:45:28.765803 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:45:38.765969 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:45:48.764662 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:45:58.766809 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:08.766056 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:18.766725 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:28.765425 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:38.765059 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:48.767904 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:46:58.764736 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:08.764801 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:18.837415 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:28.997752 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:38.765334 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:48.765026 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:47:58.764606 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 03:48:05.589984 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (125.670394ms) to execute 2021-01-04 03:48:05.591252 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1107" took too long (102.25071ms) to execute
==> etcd [66576570292b] <== 2021-01-04 07:19:44.776247 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:19:54.776425 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:04.776672 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:14.777272 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:24.776451 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:34.776099 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:44.951306 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:20:54.777672 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:04.777424 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:14.777037 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:21.558398 I | mvcc: store.index: compact 17234 2021-01-04 07:21:21.559551 I | mvcc: finished scheduled compaction at 17234 (took 844.229µs) 2021-01-04 07:21:24.775790 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:34.775777 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:44.776231 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:21:54.776283 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:22:04.776359 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:22:14.775829 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:22:24.776175 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:22:25.617560 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/ingress-controller-leader-nginx\" " with result "range_response_count:1 size:610" took too long (154.182365ms) to execute 2021-01-04 07:22:34.776224 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2021-01-04 07:22:44.776658 I | etcdserver/api/etcdhttp: 
/health OK (status code 200)
2021-01-04 07:22:54.775775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:04.776830 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:14.775889 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:24.775567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:34.775916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:44.777358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:23:54.777128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:04.777617 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:14.776531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:24.777111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:34.776258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:44.778197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:24:54.775913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:04.777307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:14.776779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:24.776706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:34.777370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:44.777200 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:25:54.777400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:04.777071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:14.776444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:21.563714 I | mvcc: store.index: compact 17483
2021-01-04 07:26:21.564340 I | mvcc: finished scheduled compaction at 17483 (took 465.932µs)
2021-01-04 07:26:24.776849 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:34.777361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:44.777400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:26:54.775665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:04.776648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:14.776766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:24.776151 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:34.776476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:44.776382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:27:54.776773 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:28:04.777632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:28:14.775952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:28:24.776622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:28:34.776409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-01-04 07:28:44.818230 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
07:28:49 up 6:20, 0 users, load average: 0.29, 0.41, 0.42
Linux minikube 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.1 LTS"
==> kube-apiserver [5fa4a8d2e9da] <==
I0104 07:16:09.355764 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:16:09.355781 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0104 07:16:25.148060 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
I0104 07:16:49.424392 1 client.go:360] parsed scheme: "passthrough"
I0104 07:16:49.424609 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:16:49.424622 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:17:22.335703 1 client.go:360] parsed scheme: "passthrough"
I0104 07:17:22.335847 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:17:22.335872 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:18:03.917120 1 client.go:360] parsed scheme: "passthrough"
I0104 07:18:03.917343 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:18:03.917571 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:18:39.628465 1 client.go:360] parsed scheme: "passthrough"
I0104 07:18:39.629468 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:18:39.629511 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:19:09.937139 1 client.go:360] parsed scheme: "passthrough"
I0104 07:19:09.937281 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:19:09.937303 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:19:44.064743 1 client.go:360] parsed scheme: "passthrough"
I0104 07:19:44.064801 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:19:44.064810 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:20:29.031228 1 client.go:360] parsed scheme: "passthrough"
I0104 07:20:29.031307 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:20:29.031320 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:21:07.404440 1 client.go:360] parsed scheme: "passthrough"
I0104 07:21:07.404534 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:21:07.404553 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:21:49.460707 1 client.go:360] parsed scheme: "passthrough"
I0104 07:21:49.460921 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:21:49.460963 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:22:29.375526 1 client.go:360] parsed scheme: "passthrough"
I0104 07:22:29.375719 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:22:29.375749 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:23:05.127478 1 client.go:360] parsed scheme: "passthrough"
I0104 07:23:05.127560 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:23:05.127571 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:23:48.230380 1 client.go:360] parsed scheme: "passthrough"
I0104 07:23:48.230432 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:23:48.230442 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:24:21.877715 1 client.go:360] parsed scheme: "passthrough"
I0104 07:24:21.877778 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:24:21.877787 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:25:00.231481 1 client.go:360] parsed scheme: "passthrough"
I0104 07:25:00.231683 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:25:00.232084 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:25:43.431604 1 client.go:360] parsed scheme: "passthrough"
I0104 07:25:43.431678 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:25:43.431695 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:26:22.166890 1 client.go:360] parsed scheme: "passthrough"
I0104 07:26:22.166960 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:26:22.166974 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:27:00.469780 1 client.go:360] parsed scheme: "passthrough"
I0104 07:27:00.469821 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:27:00.469831 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:27:40.053267 1 client.go:360] parsed scheme: "passthrough"
I0104 07:27:40.053344 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:27:40.053357 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 07:28:22.092015 1 client.go:360] parsed scheme: "passthrough"
I0104 07:28:22.092114 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 07:28:22.092123 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [a4ce5e4aa4c5] <==
I0104 03:46:50.479325 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 03:47:32.131948 1 client.go:360] parsed scheme: "passthrough"
I0104 03:47:32.131990 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 03:47:32.131996 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 03:48:04.569509 1 trace.go:205] Trace[318975150]: "GuaranteedUpdate etcd3" type:v1.Endpoints (04-Jan-2021 03:48:03.567) (total time: 1001ms):
Trace[318975150]: ---"Transaction prepared" 212ms (03:48:00.780)
Trace[318975150]: ---"Transaction committed" 788ms (03:48:00.569)
Trace[318975150]: [1.001934289s] [1.001934289s] END
I0104 03:48:05.592487 1 trace.go:205] Trace[1032395346]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2 (04-Jan-2021 03:48:04.695) (total time: 897ms):
Trace[1032395346]: ---"About to write a response" 896ms (03:48:00.592)
Trace[1032395346]: [897.096356ms] [897.096356ms] END
I0104 03:48:05.664168 1 trace.go:205] Trace[1518073843]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:127.0.0.1 (04-Jan-2021 03:48:04.695) (total time: 968ms):
Trace[1518073843]: ---"About to write a response" 968ms (03:48:00.663)
Trace[1518073843]: [968.167443ms] [968.167443ms] END
I0104 03:48:07.075098 1 trace.go:205] Trace[1599889982]: "GuaranteedUpdate etcd3" type:core.Endpoints (04-Jan-2021 03:48:05.993) (total time: 1073ms):
Trace[1599889982]: ---"Transaction committed" 1072ms (03:48:00.067)
Trace[1599889982]: [1.073949233s] [1.073949233s] END
I0104 03:48:07.076319 1 trace.go:205] Trace[1619896602]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.49.2 (04-Jan-2021 03:48:05.993) (total time: 1082ms):
Trace[1619896602]: ---"Object stored in database" 1081ms (03:48:00.075)
Trace[1619896602]: [1.082667543s] [1.082667543s] END
I0104 03:48:07.097494 1 trace.go:205] Trace[130914854]: "List etcd3" key:/pods/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (04-Jan-2021 03:48:06.199) (total time: 897ms):
Trace[130914854]: [897.674947ms] [897.674947ms] END
I0104 03:48:07.097650 1 trace.go:205] Trace[1388587025]: "List" url:/api/v1/namespaces/default/pods,user-agent:dashboard/v2.1.0,client:172.17.0.5 (04-Jan-2021 03:48:06.199) (total time: 897ms):
Trace[1388587025]: ---"Listing from storage done" 897ms (03:48:00.097)
Trace[1388587025]: [897.858704ms] [897.858704ms] END
I0104 03:48:07.097761 1 trace.go:205] Trace[1820210543]: "List etcd3" key:/namespaces,resourceVersion:,resourceVersionMatch:,limit:0,continue: (04-Jan-2021 03:48:06.199) (total time: 898ms):
Trace[1820210543]: [898.243381ms] [898.243381ms] END
I0104 03:48:07.097901 1 trace.go:205] Trace[1355619808]: "List" url:/api/v1/namespaces,user-agent:dashboard/v2.1.0,client:172.17.0.5 (04-Jan-2021 03:48:06.199) (total time: 898ms):
Trace[1355619808]: ---"Listing from storage done" 898ms (03:48:00.097)
Trace[1355619808]: [898.428833ms] [898.428833ms] END
I0104 03:48:07.120386 1 trace.go:205] Trace[951283308]: "List etcd3" key:/events/default,resourceVersion:,resourceVersionMatch:,limit:0,continue: (04-Jan-2021 03:48:06.198) (total time: 921ms):
Trace[951283308]: [921.374769ms] [921.374769ms] END
I0104 03:48:07.120514 1 trace.go:205] Trace[1233497855]: "List" url:/api/v1/namespaces/default/events,user-agent:dashboard/v2.1.0,client:172.17.0.5 (04-Jan-2021 03:48:06.198) (total time: 921ms):
Trace[1233497855]: ---"Listing from storage done" 921ms (03:48:00.120)
Trace[1233497855]: [921.53833ms] [921.53833ms] END
I0104 03:48:07.462920 1 client.go:360] parsed scheme: "passthrough"
I0104 03:48:07.463012 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0104 03:48:07.463036 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0104 03:48:08.596053 1 controller.go:89] Shutting down OpenAPI AggregationController
I0104 03:48:09.311151 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0104 03:48:09.311201 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0104 03:48:09.311252 1 dynamic_cafile_content.go:182] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0104 03:48:09.311275 1 dynamic_serving_content.go:145] Shutting down aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I0104 03:48:09.311405 1 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0104 03:48:09.312121 1 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0104 03:48:09.312211 1 dynamic_serving_content.go:145] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0104 03:48:09.312242 1 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0104 03:48:09.312858 1 autoregister_controller.go:165] Shutting down autoregister controller
I0104 03:48:09.313787 1 available_controller.go:487] Shutting down AvailableConditionController
I0104 03:48:09.313832 1 controller.go:123] Shutting down OpenAPI controller
I0104 03:48:09.313864 1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
I0104 03:48:09.364187 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0104 03:48:09.313140 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0104 03:48:10.112005 1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0104 03:48:10.112962 1 establishing_controller.go:87] Shutting down EstablishingController
I0104 03:48:10.113035 1 naming_controller.go:302] Shutting down NamingConditionController
I0104 03:48:10.162745 1 crd_finalizer.go:278] Shutting down CRDFinalizer
I0104 03:48:10.163011 1 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0104 03:48:10.164504 1 secure_serving.go:241] Stopped listening on [::]:8443
I0104 03:48:10.164733 1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
==> kube-controller-manager [dca4a631962e] <==
Trace[1296950125]: ---"Objects listed" 36713ms (05:50:00.189)
Trace[1296950125]: [36.713515683s] [36.713515683s] END
I0104 05:50:47.190077 1 trace.go:205] Trace[1031543604]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.392) (total time: 37797ms):
Trace[1031543604]: ---"Objects listed" 37797ms (05:50:00.190)
Trace[1031543604]: [37.797791304s] [37.797791304s] END
I0104 05:50:47.190159 1 trace.go:205] Trace[1113586835]: "Reflector ListAndWatch" name:k8s.io/client-go/metadata/metadatainformer/informer.go:90 (04-Jan-2021 05:50:09.594) (total time: 37595ms):
Trace[1113586835]: ---"Objects listed" 37595ms (05:50:00.190)
Trace[1113586835]: [37.595880447s] [37.595880447s] END
I0104 05:50:47.190381 1 trace.go:205] Trace[1862815740]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:10.284) (total time: 36905ms):
Trace[1862815740]: ---"Objects listed" 36905ms (05:50:00.190)
Trace[1862815740]: [36.905793724s] [36.905793724s] END
I0104 05:50:47.190485 1 trace.go:205] Trace[116725689]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.391) (total time: 37798ms):
Trace[116725689]: ---"Objects listed" 37798ms (05:50:00.190)
Trace[116725689]: [37.798578561s] [37.798578561s] END
I0104 05:50:47.190611 1 trace.go:205] Trace[1769405788]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.594) (total time: 37596ms):
Trace[1769405788]: ---"Objects listed" 37596ms (05:50:00.190)
Trace[1769405788]: [37.596181175s] [37.596181175s] END
I0104 05:50:47.190746 1 trace.go:205] Trace[1306223803]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.594) (total time: 37596ms):
Trace[1306223803]: ---"Objects listed" 37596ms (05:50:00.190)
Trace[1306223803]: [37.596217289s] [37.596217289s] END
I0104 05:50:47.567010 1 trace.go:205] Trace[1484669712]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.594) (total time: 37972ms):
Trace[1484669712]: ---"Objects listed" 37972ms (05:50:00.566)
Trace[1484669712]: [37.972298467s] [37.972298467s] END
I0104 05:50:47.811471 1 trace.go:205] Trace[2039757837]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:08.745) (total time: 39065ms):
Trace[2039757837]: ---"Objects listed" 39065ms (05:50:00.811)
Trace[2039757837]: [39.065757195s] [39.065757195s] END
I0104 05:50:48.510912 1 trace.go:205] Trace[212218989]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.821) (total time: 38689ms):
Trace[212218989]: ---"Objects listed" 38688ms (05:50:00.510)
Trace[212218989]: [38.689020497s] [38.689020497s] END
I0104 05:50:58.311284 1 trace.go:205] Trace[1840089984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.392) (total time: 48918ms):
Trace[1840089984]: ---"Objects listed" 48918ms (05:50:00.311)
Trace[1840089984]: [48.918271201s] [48.918271201s] END
I0104 05:51:15.090773 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node minikube status is now: NodeNotReady"
I0104 05:51:26.268169 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994-w2qgq" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
E0104 05:51:29.331723 1 controller_utils.go:201] unable to taint [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2021-01-04 05:51:16.023393571 +0000 UTC m=+6971.530600856,}] unresponsive Node "minikube": etcdserver: request timed out
E0104 15:28:50.590900 173209 out.go:317] unable to parse "E0104 05:51:29.460257 1 node_lifecycle_controller.go:601] Failed to taint NoSchedule on node, requeue it: failed to swap taints of
node &Node{ObjectMeta:{minikube 42de8336-4c5c-4c45-8b23-f3204683000d 12916 0 2021-01-04 01:24:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:minikube kubernetes.io/os:linux minikube.k8s.io/commit:9f1e482427589ff8451c4723b6ba53bb9742fbb1 minikube.k8s.io/name:minikube minikube.k8s.io/updated_at:2021_01_04T09_24_32_0700 minikube.k8s.io/version:v1.16.0 node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-01-04 01:24:27 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:volumes.kubernetes.io/controller-managed-attach-detach\":{}},\"f:labels\":{\".\":{},\"f:beta.kubernetes.io/arch\":{},\"f:beta.kubernetes.io/os\":{},\"f:kubernetes.io/arch\":{},\"f:kubernetes.io/hostname\":{},\"f:kubernetes.io/os\":{}}},\"f:status\":{\"f:addresses\":{\".\":{},\"k:{\\"type\\":\\"Hostname\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}},\"k:{\\"type\\":\\"InternalIP\\"}\":{\".\":{},\"f:address\":{},\"f:type\":{}}},\"f:allocatable\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hugepages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:capacity\":{\".\":{},\"f:cpu\":{},\"f:ephemeral-storage\":{},\"f:hugepages-1Gi\":{},\"f:hugepages-2Mi\":{},\"f:memory\":{},\"f:pods\":{}},\"f:conditions\":{\".\":{},\"k:{\\"type\\":\\"DiskPressure\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:type\":{}},\"k:{\\"type\\":\\"MemoryPressure\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:type\":{}},\"k:{\\"type\\":\\"PIDPressure\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:type\":{}},\"k:{\\"type\\":\\"Ready\\"}\":{\".\":{},\"f:lastHeartbeatTime\":{},\"f:type\":{}}},\"f:daemonEndpoints\":{\"f:kubeletEndpoint\":{\"f:Port\":{}}},\"f:images\":{},\"f:nodeInfo\":{\"f:architecture\":{},\"f:bootID\
":{},\"f:containerRuntimeVersion\":{},\"f:kernelVersion\":{},\"f:kubeProxyVersion\":{},\"f:kubeletVersion\":{},\"f:machineID\":{},\"f:operatingSystem\":{},\"f:osImage\":{},\"f:systemUUID\":{}}}}} {kubeadm Update v1 2021-01-04 01:24:31 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\"f:kubeadm.alpha.kubernetes.io/cri-socket\":{}},\"f:labels\":{\"f:node-role.kubernetes.io/control-plane\":{},\"f:node-role.kubernetes.io/master\":{}}}}} {kubectl-label Update v1 2021-01-04 01:24:35 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\"f:minikube.k8s.io/commit\":{},\"f:minikube.k8s.io/name\":{},\"f:minikube.k8s.io/updated_at\":{},\"f:minikube.k8s.io/version\":{}}}}} {kube-controller-manager Update v1 2021-01-04 05:51:12 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:annotations\":{\"f:node.alpha.kubernetes.io/ttl\":{}}},\"f:spec\":{\"f:podCIDR\":{},\"f:podCIDRs\":{\".\":{},\"v:\\"10.244.0.0/24\\"\":{}}},\"f:status\":{\"f:conditions\":{\"k:{\\"type\\":\\"DiskPressure\\"}\":{\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{}},\"k:{\\"type\\":\\"MemoryPressure\\"}\":{\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{}},\"k:{\\"type\\":\\"PIDPressure\\"}\":{\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{}},\"k:{\\"type\\":\\"Ready\\"}\":{\"f:lastTransitionTime\":{},\"f:message\":{},\"f:reason\":{},\"f:status\":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 
DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.49.2,},NodeAddress{Type:Hostname,Address:minikube,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80ccc7692ad1434f889b859014fe7343,SystemUUID:6847bca0-1c99-4235-8954-c98176329b76,BootID:b603177a-56ae-4008-b825-de53c595f98f,KernelVersion:3.10.0-1160.el7.x86_64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://20.10.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[lifengquandocker/nodejsback@sha256:fcb78487db924f49afc067b110cad734d76e938c0663be424479adcf8f358aa4 lifengquandocker/nodejsback:1.0],SizeBytes:1078935213,},ContainerImage{Names:[registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f 
registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com@sha256:0100c173327bbb124c76ea1511dade4cec718234c23f8e7a41f27ad03f361431 registry.cn-hangzhou.aliyuncs.com/google_containers/controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com:v0.40.2],SizeBytes:285704097,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 kubernetesui/dashboard:v2.1.0 registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:8b8125d7a6e4225b08f04f65ca947b27d0cc86380bf09fab890cc80408230114 k8s.gcr.io/kube-apiserver:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0],SizeBytes:121665018,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:40423415eebbd598d1c2660a0a38606ad1d949ea9404c405eaf25929163b479d k8s.gcr.io/kube-proxy:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0],SizeBytes:118400203,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc k8s.gcr.io/kube-controller-manager:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0],SizeBytes:115844602,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 jettech/kube-webhook-certgen:v1.3.0],SizeBytes:54697790,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7 
jettech/kube-webhook-certgen:v1.2.2],SizeBytes:49003629,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:beaa710325047fa9c867eff4ab9af38d9c2acec05ac5b416c708c304f76bdbef k8s.gcr.io/kube-scheduler:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.0],SizeBytes:46384634,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4 registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:06f83c679a723d938b8776510d979c69549ad7df516279981e23554b3e68572f gcr.io/k8s-minikube/storage-provisioner:v4 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v4],SizeBytes:29683712,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}\n": template: E0104 05:51:29.460257 1 node_lifecycle_controller.go:601] Failed to taint NoSchedule on node , requeue it: failed to swap taints of node &Node{ObjectMeta:{minikube 42de8336-4c5c-4c45-8b23-f3204683000d 12916 0 2021-01-04 01:24:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:minikube kubernetes.io/os:linux minikube.k8s.io/commit:9f1e482427589ff8451c4723b6ba53bb9742fbb1 minikube.k8s.io/name:minikube minikube.k8s.io/updated_at:2021_01_04T09_24_32_0700 minikube.k8s.io/version:v1.16.0 node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-01-04 01:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-01-04 01:24:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kubectl-label Update v1 2021-01-04 01:24:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} 
{kube-controller-manager Update v1 2021-01-04 05:51:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet 
stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.49.2,},NodeAddress{Type:Hostname,Address:minikube,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80ccc7692ad1434f889b859014fe7343,SystemUUID:6847bca0-1c99-4235-8954-c98176329b76,BootID:b603177a-56ae-4008-b825-de53c595f98f,KernelVersion:3.10.0-1160.el7.x86_64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://20.10.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[lifengquandocker/nodejsback@sha256:fcb78487db924f49afc067b110cad734d76e938c0663be424479adcf8f358aa4 lifengquandocker/nodejsback:1.0],SizeBytes:1078935213,},ContainerImage{Names:[registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com@sha256:0100c173327bbb124c76ea1511dade4cec718234c23f8e7a41f27ad03f361431 registry.cn-hangzhou.aliyuncs.com/google_containers/controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com:v0.40.2],SizeBytes:285704097,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 kubernetesui/dashboard:v2.1.0 
registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:8b8125d7a6e4225b08f04f65ca947b27d0cc86380bf09fab890cc80408230114 k8s.gcr.io/kube-apiserver:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0],SizeBytes:121665018,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:40423415eebbd598d1c2660a0a38606ad1d949ea9404c405eaf25929163b479d k8s.gcr.io/kube-proxy:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0],SizeBytes:118400203,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc k8s.gcr.io/kube-controller-manager:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0],SizeBytes:115844602,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 jettech/kube-webhook-certgen:v1.3.0],SizeBytes:54697790,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7 jettech/kube-webhook-certgen:v1.2.2],SizeBytes:49003629,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:beaa710325047fa9c867eff4ab9af38d9c2acec05ac5b416c708c304f76bdbef k8s.gcr.io/kube-scheduler:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.0],SizeBytes:46384634,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4 
registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:06f83c679a723d938b8776510d979c69549ad7df516279981e23554b3e68572f gcr.io/k8s-minikube/storage-provisioner:v4 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v4],SizeBytes:29683712,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
:1: unexpected "}" in operand - returning raw string.
E0104 05:51:29.460257 1 node_lifecycle_controller.go:601] Failed to taint NoSchedule on node , requeue it: failed to swap taints of node &Node{ObjectMeta:{minikube 42de8336-4c5c-4c45-8b23-f3204683000d 12916 0 2021-01-04 01:24:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:minikube kubernetes.io/os:linux minikube.k8s.io/commit:9f1e482427589ff8451c4723b6ba53bb9742fbb1 minikube.k8s.io/name:minikube minikube.k8s.io/updated_at:2021_01_04T09_24_32_0700 minikube.k8s.io/version:v1.16.0 node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-01-04 01:24:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerR
untimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-01-04 01:24:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kubectl-label Update v1 2021-01-04 01:24:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:minikube.k8s.io/commit":{},"f:minikube.k8s.io/name":{},"f:minikube.k8s.io/updated_at":{},"f:minikube.k8s.io/version":{}}}}} {kube-controller-manager Update v1 2021-01-04 05:51:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{39746781184 0} {} BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{3953963008 0} {} 3861292Ki BinarySI},pods: {{110 0} {} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-01-04 05:47:15 +0000 UTC,LastTransitionTime:2021-01-04 05:48:50 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:192.168.49.2,},NodeAddress{Type:Hostname,Address:minikube,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:80ccc7692ad1434f889b859014fe7343,SystemUUID:6847bca0-1c99-4235-8954-c98176329b76,BootID:b603177a-56ae-4008-b825-de53c595f98f,KernelVersion:3.10.0-1160.el7.x86_64,OSImage:Ubuntu 20.04.1 LTS,ContainerRuntimeVersion:docker://20.10.0,KubeletVersion:v1.20.0,KubeProxyVersion:v1.20.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[lifengquandocker/nodejsback@sha256:fcb78487db924f49afc067b110cad734d76e938c0663be424479adcf8f358aa4 lifengquandocker/nodejsback:1.0],SizeBytes:1078935213,},ContainerImage{Names:[registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com@sha256:0100c173327bbb124c76ea1511dade4cec718234c23f8e7a41f27ad03f361431 
registry.cn-hangzhou.aliyuncs.com/google_containers/controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v0.40.2 registry.cn-hangzhou.aliyuncs.com/lifengquanailidokcer/registry.cn-hangzhou.aliyuncs.com:v0.40.2],SizeBytes:285704097,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0 registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 kubernetesui/dashboard:v2.1.0 registry.cn-hangzhou.aliyuncs.com/google_containers/dashboard:v2.1.0],SizeBytes:225733746,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:8b8125d7a6e4225b08f04f65ca947b27d0cc86380bf09fab890cc80408230114 k8s.gcr.io/kube-apiserver:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0],SizeBytes:121665018,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:40423415eebbd598d1c2660a0a38606ad1d949ea9404c405eaf25929163b479d k8s.gcr.io/kube-proxy:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0],SizeBytes:118400203,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:00ccc3a5735e82d53bc26054d594a942fae64620a6f84018c057a519ba7ed1dc k8s.gcr.io/kube-controller-manager:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0],SizeBytes:115844602,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 jettech/kube-webhook-certgen:v1.3.0],SizeBytes:54697790,},ContainerImage{Names:[jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7 jettech/kube-webhook-certgen:v1.2.2],SizeBytes:49003629,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:beaa710325047fa9c867eff4ab9af38d9c2acec05ac5b416c708c304f76bdbef 
k8s.gcr.io/kube-scheduler:v1.20.0 registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.0],SizeBytes:46384634,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:73ca82b4ce829766d4f1f10947c3a338888f876fbed0540dc849c89ff256e90c k8s.gcr.io/coredns:1.7.0 registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0],SizeBytes:45227747,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:555981a24f184420f3be0c79d4efb6c948a85cfce84034f85a563f4151a81cbf kubernetesui/metrics-scraper:v1.0.4 registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-scraper:v1.0.4],SizeBytes:36937728,},ContainerImage{Names:[gcr.io/k8s-minikube/storage-provisioner@sha256:06f83c679a723d938b8776510d979c69549ad7df516279981e23554b3e68572f gcr.io/k8s-minikube/storage-provisioner:v4 registry.cn-hangzhou.aliyuncs.com/google_containers/storage-provisioner:v4],SizeBytes:29683712,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2 registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
W0104 05:51:32.926060 1 controller_utils.go:148] Failed to update status for pod "kube-scheduler-minikube_kube-system(e5063e55-d3a0-4bf8-8395-993ae553867b)": etcdserver: request timed out
I0104 05:51:32.929978 1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
E0104 05:51:33.131534 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes-dashboard-7db476d994-w2qgq.1656f188983ff485", GenerateName:"", Namespace:"kubernetes-dashboard", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-7db476d994-w2qgq", UID:"d1cf5e3b-030a-45ec-bc35-3fb6525c9cdb", APIVersion:"v1", ResourceVersion:"7339", FieldPath:""}, Reason:"NodeNotReady", Message:"Node is not ready", Source:v1.EventSource{Component:"node-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbff4c9176fe79285, ext:6981310912744, loc:(time.Location)(0x6f2f340)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbff4c9176fe79285, ext:6981310912744, loc:(time.Location)(0x6f2f340)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'etcdserver: request timed out' (will not retry!)
I0104 05:51:33.716725 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kubernetes-dashboard/kubernetes-dashboard: failed to update kubernetes-dashboard-slfsr EndpointSlice for Service kubernetes-dashboard/kubernetes-dashboard: etcdserver: request timed out"
W0104 05:51:33.722942 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kubernetes-dashboard/kubernetes-dashboard", retrying. Error: failed to update kubernetes-dashboard-slfsr EndpointSlice for Service kubernetes-dashboard/kubernetes-dashboard: etcdserver: request timed out
I0104 05:51:33.734933 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kubernetes-dashboard/kubernetes-dashboard: etcdserver: request timed out"
W0104 05:51:36.889551 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kubernetes-dashboard/kubernetes-dashboard", retrying. Error: failed to update kubernetes-dashboard-slfsr EndpointSlice for Service kubernetes-dashboard/kubernetes-dashboard: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kubernetes-dashboard-slfsr": the object has been modified; please apply your changes to the latest version and try again
I0104 05:51:36.891162 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kubernetes-dashboard/kubernetes-dashboard: failed to update kubernetes-dashboard-slfsr EndpointSlice for Service kubernetes-dashboard/kubernetes-dashboard: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kubernetes-dashboard-slfsr\": the object has been modified; please apply your changes to the latest version and try again"
I0104 05:51:38.747356 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kubernetes-dashboard/kubernetes-dashboard: Operation cannot be fulfilled on endpoints \"kubernetes-dashboard\": the object has been modified; please apply your changes to the latest version and try again"
I0104 05:51:40.024514 1 event.go:291] "Event occurred" object="kube-system/etcd-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0104 05:51:40.581824 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kubernetes-dashboard/kubernetes-dashboard: Operation cannot be fulfilled on endpoints \"kubernetes-dashboard\": the object has been modified; please apply your changes to the latest version and try again"
I0104 05:51:42.067734 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8-cddgs" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
W0104 05:51:43.088125 1 controller_utils.go:148] Failed to update status for pod "ingress-nginx-controller-5f568d55f8-ljgnf_kube-system(42a3d73b-e3b0-45d4-817d-e1f75d628e9a)": Operation cannot be fulfilled on pods "ingress-nginx-controller-5f568d55f8-ljgnf": the object has been modified; please apply your changes to the latest version and try again
I0104 05:51:43.088536 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-5f568d55f8-ljgnf" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
W0104 05:51:44.383956 1 controller_utils.go:148] Failed to update status for pod "kube-apiserver-minikube_kube-system(514284ef-4bc1-476c-8f60-3fe18a5c4877)": Operation cannot be fulfilled on pods "kube-apiserver-minikube": the object has been modified; please apply your changes to the latest version and try again
I0104 05:51:44.384364 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0104 05:51:44.666036 1 event.go:291] "Event occurred" object="kube-system/kube-proxy-52x27" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0104 05:51:45.626452 1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
W0104 05:51:46.618221 1 controller_utils.go:148] Failed to update status for pod "coredns-54d67798b7-flmh4_kube-system(24974688-064d-4fdb-b652-4b6472e6e277)": Operation cannot be fulfilled on pods "coredns-54d67798b7-flmh4": the object has been modified; please apply your changes to the latest version and try again
I0104 05:51:46.618898 1 event.go:291] "Event occurred" object="kube-system/coredns-54d67798b7-flmh4" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0104 05:51:46.917918 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-minikube" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
E0104 05:51:47.768626 1 node_lifecycle_controller.go:847] unable to mark all pods NotReady on node minikube: etcdserver: request timed out; Operation cannot be fulfilled on pods "ingress-nginx-controller-5f568d55f8-ljgnf": the object has been modified; please apply your changes to the latest version and try again; Operation cannot be fulfilled on pods "kube-apiserver-minikube": the object has been modified; please apply your changes to the latest version and try again; Operation cannot be fulfilled on pods "coredns-54d67798b7-flmh4": the object has been modified; please apply your changes to the latest version and try again; queuing for retry
I0104 05:51:47.768788 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0104 05:51:53.232457 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
==> kube-controller-manager [e15e5d08eb1f] <==
I0104 01:50:36.677736 1 shared_informer.go:247] Caches are synced for cidrallocator
I0104 01:50:36.681625 1 shared_informer.go:247] Caches are synced for HPA
I0104 01:50:36.682166 1 shared_informer.go:247] Caches are synced for deployment
I0104 01:50:36.682901 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0104 01:50:36.684259 1 shared_informer.go:247] Caches are synced for resource quota
I0104 01:50:36.685029 1 shared_informer.go:247] Caches are synced for PV protection
I0104 01:50:36.689041 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0104 01:50:36.691754 1 shared_informer.go:247] Caches are synced for crt configmap
I0104 01:50:36.692807 1 shared_informer.go:247] Caches are synced for persistent volume
I0104 01:50:36.695531 1 shared_informer.go:247] Caches are synced for resource quota
I0104 01:50:36.697960 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0104 01:50:36.699632 1 shared_informer.go:247] Caches are synced for attach detach
I0104 01:50:36.702216 1 shared_informer.go:247] Caches are synced for service account
I0104 01:50:36.704340 1 shared_informer.go:247] Caches are synced for TTL
I0104 01:50:36.706464 1 shared_informer.go:247] Caches are synced for endpoint
I0104 01:50:36.710051 1 shared_informer.go:247] Caches are synced for job
I0104 01:50:36.720451 1 shared_informer.go:247] Caches are synced for GC
I0104 01:50:36.732259 1 shared_informer.go:247] Caches are synced for taint
I0104 01:50:36.732367 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0104 01:50:36.732516 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0104 01:50:36.732551 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0104 01:50:36.732518 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0104 01:50:36.732731 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0104 01:50:36.964070 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9l5zw"
I0104 01:50:36.967076 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-54d67798b7 to 1"
I0104 01:50:37.134731 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0104 01:50:37.235071 1 shared_informer.go:247] Caches are synced for garbage collector
I0104 01:50:37.243337 1 shared_informer.go:247] Caches are synced for garbage collector
I0104 01:50:37.243489 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0104 01:50:37.342649 1 event.go:291] "Event occurred" object="kube-system/coredns-54d67798b7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-54d67798b7-flmh4"
I0104 01:50:37.347333 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 0"
I0104 01:50:37.977219 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-6p544"
I0104 01:50:42.835026 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-52x27"
I0104 02:50:36.601876 1 cleaner.go:180] Cleaning CSR "csr-jkgc6" as it is more than 1h0m0s old and approved.
I0104 03:13:49.044443 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-5f568d55f8 to 1"
I0104 03:13:49.186908 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-5f568d55f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-5f568d55f8-ljgnf"
I0104 03:13:49.381440 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set ingress-nginx-controller-558664778f to 0"
I0104 03:13:53.196853 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-558664778f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: ingress-nginx-controller-558664778f-7l9bc"
I0104 03:31:13.480706 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c85578d8 to 1"
I0104 03:31:13.577868 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-7db476d994 to 1"
I0104 03:31:14.226657 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0104 03:31:14.226691 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0104 03:31:14.511883 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0104 03:31:14.512335 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0104 03:31:14.568087 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0104 03:31:14.568712 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0104 03:31:14.601358 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0104 03:31:14.601733 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0104 03:31:14.647617 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0104 03:31:14.648070 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0104 03:31:14.648130 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0104 03:31:14.648163 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
E0104 03:31:14.659196 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-7db476d994" failed with pods "kubernetes-dashboard-7db476d994-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0104 03:31:14.659426 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" failed with pods "dashboard-metrics-scraper-c85578d8-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0104 03:31:14.659492 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-c85578d8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0104 03:31:14.659518 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-7db476d994-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
I0104 03:31:15.716686 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-7db476d994" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-7db476d994-w2qgq"
I0104 03:31:16.669461 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c85578d8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c85578d8-cddgs"
I0104 03:36:34.866398 1 request.go:655] Throttling request took 1.006546132s, request: GET:https://192.168.49.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
I0104 03:40:17.299640 1 request.go:655] Throttling request took 2.484930458s, request: GET:https://192.168.49.2:8443/apis/policy/v1beta1?timeout=32s
==> kube-proxy [4696e5f573d1] <==
I0104 03:56:44.665955 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0104 03:56:44.666036 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W0104 03:56:46.314259 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0104 03:56:46.314579 1 server_others.go:185] Using iptables Proxier.
I0104 03:56:46.318455 1 server.go:650] Version: v1.20.0
I0104 03:56:46.331137 1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0104 03:56:46.333022 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro seclabel nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro seclabel nosuid nodev noexec relatime])
I0104 03:56:46.334248 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0104 03:56:46.334387 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0104 03:56:46.334681 1 config.go:224] Starting endpoint slice config controller
I0104 03:56:46.334780 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0104 03:56:46.335911 1 config.go:315] Starting service config controller
I0104 03:56:46.335950 1 shared_informer.go:240] Waiting for caches to sync for service config
I0104 03:56:46.435136 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0104 03:56:46.436189 1 shared_informer.go:247] Caches are synced for service config
I0104 05:40:57.378096 1 trace.go:205] Trace[1014802503]: "iptables Monitor CANARY check" (04-Jan-2021 05:40:48.143) (total time: 3934ms):
Trace[1014802503]: [3.934747153s] [3.934747153s] END
I0104 05:49:10.774995 1 trace.go:205] Trace[927657966]: "iptables Monitor CANARY check" (04-Jan-2021 05:48:16.374) (total time: 44374ms):
Trace[927657966]: [44.374723856s] [44.374723856s] END
W0104 05:49:10.775105 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
W0104 05:49:10.776168 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1beta1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
I0104 05:49:50.673339 1 trace.go:205] Trace[192614949]: "iptables Monitor CANARY check" (04-Jan-2021 05:49:16.375) (total time: 34108ms):
Trace[192614949]: [34.108277271s] [34.108277271s] END
I0104 05:50:06.235222 1 trace.go:205] Trace[29623307]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:28.421) (total time: 36851ms):
Trace[29623307]: [36.851950879s] [36.851950879s] END
E0104 05:50:06.241003 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1beta1.EndpointSlice: failed to list v1beta1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1beta1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=12592": net/http: TLS handshake timeout
I0104 05:50:06.218408 1 trace.go:205] Trace[1762882257]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:28.421) (total time: 36852ms):
Trace[1762882257]: [36.852680996s] [36.852680996s] END
E0104 05:50:06.256094 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=12034": net/http: TLS handshake timeout
I0104 05:50:25.336298 1 trace.go:205] Trace[2028724510]: "iptables Monitor CANARY check" (04-Jan-2021 05:50:20.070) (total time: 3551ms):
Trace[2028724510]: [3.551877082s] [3.551877082s] END
I0104 05:50:52.442863 1 trace.go:205] Trace[678228614]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.473) (total time: 42968ms):
Trace[678228614]: ---"Objects listed" 42966ms (05:50:00.439)
Trace[678228614]: [42.968767252s] [42.968767252s] END
I0104 05:50:52.521778 1 trace.go:205] Trace[1624996231]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:09.438) (total time: 43083ms):
Trace[1624996231]: ---"Objects listed" 43083ms (05:50:00.521)
Trace[1624996231]: [43.083527556s] [43.083527556s] END
I0104 05:50:52.522335 1 trace.go:205] Trace[76171960]: "iptables Monitor CANARY check" (04-Jan-2021 05:50:46.979) (total time: 5542ms):
Trace[76171960]: [5.542746573s] [5.542746573s] END
I0104 05:51:25.991677 1 trace.go:205] Trace[2115852454]: "iptables Monitor CANARY check" (04-Jan-2021 05:51:18.168) (total time: 7006ms):
Trace[2115852454]: [7.006086873s] [7.006086873s] END
I0104 05:51:54.466896 1 trace.go:205] Trace[1804734078]: "iptables save" (04-Jan-2021 05:51:49.403) (total time: 4585ms):
Trace[1804734078]: [4.585533359s] [4.585533359s] END
I0104 05:52:19.905254 1 trace.go:205] Trace[885167154]: "iptables Monitor CANARY check" (04-Jan-2021 05:52:17.093) (total time: 2648ms):
Trace[885167154]: [2.64890284s] [2.64890284s] END
==> kube-proxy [d6d9332a48d6] <==
I0104 01:50:44.224170 1 node.go:172] Successfully retrieved node IP: 192.168.49.2
I0104 01:50:44.224255 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.49.2), assume IPv4 operation
W0104 01:50:44.236321 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0104 01:50:44.236422 1 server_others.go:185] Using iptables Proxier.
I0104 01:50:44.236712 1 server.go:650] Version: v1.20.0
I0104 01:50:44.237093 1 conntrack.go:52] Setting nf_conntrack_max to 131072
E0104 01:50:44.237315 1 conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro seclabel nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro seclabel nosuid nodev noexec relatime])
I0104 01:50:44.237634 1 config.go:224] Starting endpoint slice config controller
I0104 01:50:44.237648 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0104 01:50:44.238159 1 config.go:315] Starting service config controller
I0104 01:50:44.238179 1 shared_informer.go:240] Waiting for caches to sync for service config
I0104 01:50:44.337870 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0104 01:50:44.338315 1 shared_informer.go:247] Caches are synced for service config
I0104 02:53:34.690338 1 trace.go:205] Trace[1091868479]: "iptables Monitor CANARY check" (04-Jan-2021 02:53:14.342) (total time: 2204ms):
Trace[1091868479]: [2.204184875s] [2.204184875s] END
I0104 03:14:47.091635 1 trace.go:205] Trace[1378100380]: "iptables Monitor CANARY check" (04-Jan-2021 03:14:44.298) (total time: 2793ms):
Trace[1378100380]: [2.793109775s] [2.793109775s] END
I0104 03:15:20.596802 1 trace.go:205] Trace[1562846380]: "iptables save" (04-Jan-2021 03:15:17.926) (total time: 2670ms):
Trace[1562846380]: [2.670643269s] [2.670643269s] END
I0104 03:39:20.386624 1 trace.go:205] Trace[557164409]: "iptables Monitor CANARY check" (04-Jan-2021 03:39:14.362) (total time: 5065ms):
Trace[557164409]: [5.065846813s] [5.065846813s] END
I0104 03:40:46.464114 1 trace.go:205] Trace[1174525203]: "iptables Monitor CANARY check" (04-Jan-2021 03:40:44.246) (total time: 2217ms):
Trace[1174525203]: [2.217110403s] [2.217110403s] END
I0104 03:41:50.350327 1 trace.go:205] Trace[792024658]: "iptables Monitor CANARY check" (04-Jan-2021 03:41:48.163) (total time: 2107ms):
Trace[792024658]: [2.107175025s] [2.107175025s] END
==> kube-scheduler [384b571d20e7] <==
Trace[677532092]: [18.643948748s] [18.643948748s] END
E0104 05:49:59.700823 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://192.168.49.2:8443/api/v1/services?resourceVersion=12034": net/http: TLS handshake timeout
I0104 05:49:59.700882 1 trace.go:205] Trace[1478561665]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:41.570) (total time: 18130ms):
Trace[1478561665]: [18.13066602s] [18.13066602s] END
E0104 05:49:59.700898 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?resourceVersion=7141": net/http: TLS handshake timeout
I0104 05:49:59.700940 1 trace.go:205] Trace[1742577586]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:36.061) (total time: 23639ms):
Trace[1742577586]: [23.63902387s] [23.63902387s] END
E0104 05:49:59.700950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicationController: failed to list v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?resourceVersion=7141": net/http: TLS handshake timeout
I0104 05:49:59.700991 1 trace.go:205] Trace[998345353]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:36.712) (total time: 22988ms):
Trace[998345353]: [22.988358031s] [22.988358031s] END
E0104 05:49:59.700999 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Node: failed to list v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?resourceVersion=12884": net/http: TLS handshake timeout
I0104 05:49:59.701012 1 trace.go:205] Trace[1295427307]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:36.829) (total time: 22871ms):
Trace[1295427307]: [22.871878721s] [22.871878721s] END
E0104 05:49:59.701017 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StorageClass: failed to list v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=7141": net/http: TLS handshake timeout
I0104 05:49:59.706042 1 trace.go:205] Trace[1912227666]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:37.665) (total time: 21791ms):
Trace[1912227666]: [21.79132961s] [21.79132961s] END
E0104 05:49:59.706090 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StatefulSet: failed to list v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?resourceVersion=7141": net/http: TLS handshake timeout
E0104 05:49:59.706103 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=12843": net/http: TLS handshake timeout
I0104 05:49:59.743954 1 trace.go:205] Trace[553526685]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:36.508) (total time: 23235ms):
Trace[553526685]: [23.235347031s] [23.235347031s] END
E0104 05:49:59.743983 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?resourceVersion=7141": net/http: TLS handshake timeout
I0104 05:49:59.744823 1 trace.go:205] Trace[1420074582]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:45.011) (total time: 14732ms):
Trace[1420074582]: [14.732923534s] [14.732923534s] END
E0104 05:49:59.744849 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicaSet: failed to list v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?resourceVersion=7340": net/http: TLS handshake timeout
I0104 05:49:59.809306 1 trace.go:205] Trace[1261916011]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:49:45.942) (total time: 13866ms):
Trace[1261916011]: [13.866775977s] [13.866775977s] END
E0104 05:49:59.809432 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSINode: failed to list v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=7141": net/http: TLS handshake timeout
I0104 05:50:48.477848 1 trace.go:205] Trace[1400210782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.708) (total time: 43769ms):
Trace[1400210782]: ---"Objects listed" 43769ms (05:50:00.477)
Trace[1400210782]: [43.769643813s] [43.769643813s] END
I0104 05:50:48.491784 1 trace.go:205] Trace[1944600317]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.636) (total time: 43181ms):
Trace[1944600317]: ---"Objects listed" 43157ms (05:50:00.794)
Trace[1944600317]: [43.181372892s] [43.181372892s] END
I0104 05:50:48.491986 1 trace.go:205] Trace[2002680613]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.912) (total time: 42904ms):
Trace[2002680613]: ---"Objects listed" 42881ms (05:50:00.794)
Trace[2002680613]: [42.90492678s] [42.90492678s] END
I0104 05:50:48.492033 1 trace.go:205] Trace[1241083724]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.919) (total time: 42899ms):
Trace[1241083724]: ---"Objects listed" 42899ms (05:50:00.818)
Trace[1241083724]: [42.899052242s] [42.899052242s] END
I0104 05:50:48.492121 1 trace.go:205] Trace[1084655922]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.913) (total time: 43578ms):
Trace[1084655922]: ---"Objects listed" 43578ms (05:50:00.492)
Trace[1084655922]: [43.578864301s] [43.578864301s] END
I0104 05:50:48.493183 1 trace.go:205] Trace[131785297]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:05.906) (total time: 42506ms):
Trace[131785297]: ---"Objects listed" 42506ms (05:50:00.412)
Trace[131785297]: [42.50615587s] [42.50615587s] END
I0104 05:50:48.493431 1 trace.go:205] Trace[209573777]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:05.907) (total time: 42122ms):
Trace[209573777]: ---"Objects listed" 42122ms (05:50:00.029)
Trace[209573777]: [42.122344357s] [42.122344357s] END
I0104 05:50:48.493469 1 trace.go:205] Trace[538634179]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:05.906) (total time: 42508ms):
Trace[538634179]: ---"Objects listed" 42508ms (05:50:00.415)
Trace[538634179]: [42.508532977s] [42.508532977s] END
I0104 05:50:48.493570 1 trace.go:205] Trace[2033820737]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:04.912) (total time: 43581ms):
Trace[2033820737]: ---"Objects listed" 43581ms (05:50:00.493)
Trace[2033820737]: [43.58148026s] [43.58148026s] END
I0104 05:50:48.493998 1 trace.go:205] Trace[1532019399]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:03.068) (total time: 45425ms):
Trace[1532019399]: ---"Objects listed" 45425ms (05:50:00.493)
Trace[1532019399]: [45.42540377s] [45.42540377s] END
I0104 05:50:48.496493 1 trace.go:205] Trace[2089855722]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (04-Jan-2021 05:50:06.199) (total time: 42296ms):
Trace[2089855722]: ---"Objects listed" 42296ms (05:50:00.496)
Trace[2089855722]: [42.296629682s] [42.296629682s] END
==> kube-scheduler [6491fc7a9e1a] <==
I0104 01:50:06.022615 1 serving.go:331] Generated self-signed cert in-memory
W0104 01:50:12.988051 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0104 01:50:12.988113 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0104 01:50:12.988127 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0104 01:50:12.988135 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0104 01:50:13.074053 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0104 01:50:13.074106 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0104 01:50:13.074457 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0104 01:50:13.074531 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0104 01:50:13.174736 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0104 01:50:21.498842 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicaSet: unknown (get replicasets.apps)
E0104 01:50:21.498990 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch v1.ConfigMap: unknown (get configmaps)
E0104 01:50:21.499025 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.ReplicationController: unknown (get replicationcontrollers)
E0104 01:50:21.499085 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1beta1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
E0104 01:50:21.499140 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
E0104 01:50:21.499176 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: unknown (get services)
E0104 01:50:21.499205 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Node: unknown (get nodes)
E0104 01:50:21.499232 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolume: unknown (get persistentvolumes)
E0104 01:50:21.502203 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Pod: unknown (get pods)
E0104 01:50:21.502269 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.StatefulSet: unknown (get statefulsets.apps)
E0104 01:50:21.502300 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
E0104 01:50:21.502329 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.CSINode: unknown (get csinodes.storage.k8s.io)
I0104 03:13:53.045749 1 trace.go:205] Trace[1625743267]: "Scheduling" namespace:kube-system,name:ingress-nginx-controller-5f568d55f8-ljgnf (04-Jan-2021 03:13:49.689) (total time: 697ms):
Trace[1625743267]: ---"Computing predicates done" 697ms (03:13:00.386)
Trace[1625743267]: [697.215551ms] [697.215551ms] END
==> kubelet <==
-- Logs begin at Mon 2021-01-04 03:54:14 UTC, end at Mon 2021-01-04 07:28:49 UTC. --
Jan 04 07:22:30 minikube kubelet[1038]: E0104 07:22:30.865331 1038 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:22:30 minikube kubelet[1038]: E0104 07:22:30.865605 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:22:30 minikube kubelet[1038]: E0104 07:22:30.865718 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Jan 04 07:22:31 minikube kubelet[1038]: E0104 07:22:31.290689 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:22:45 minikube kubelet[1038]: E0104 07:22:45.353991 1038 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:22:45 minikube kubelet[1038]: E0104 07:22:45.354083 1038 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:22:45 minikube kubelet[1038]: E0104 07:22:45.354443 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:22:45 minikube kubelet[1038]: E0104 07:22:45.354576 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Jan 04 07:23:00 minikube kubelet[1038]: E0104 07:23:00.081967 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:23:08 minikube kubelet[1038]: W0104 07:23:08.670189 1038 container.go:549] Failed to update stats for container "/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats
Jan 04 07:23:14 minikube kubelet[1038]: E0104 07:23:14.977181 1038 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:14 minikube kubelet[1038]: E0104 07:23:14.977263 1038 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:14 minikube kubelet[1038]: E0104 07:23:14.977364 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:14 minikube kubelet[1038]: E0104 07:23:14.977393 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Jan 04 07:23:20 minikube kubelet[1038]: W0104 07:23:20.884239 1038 container.go:549] Failed to update stats for container "/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats
Jan 04 07:23:30 minikube kubelet[1038]: E0104 07:23:30.073452 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:23:44 minikube kubelet[1038]: E0104 07:23:44.072235 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:23:57 minikube kubelet[1038]: E0104 07:23:57.874907 1038 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:57 minikube kubelet[1038]: E0104 07:23:57.874955 1038 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:57 minikube kubelet[1038]: E0104 07:23:57.875030 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:23:57 minikube kubelet[1038]: E0104 07:23:57.875057 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
Jan 04 07:23:58 minikube kubelet[1038]: E0104 07:23:58.223443 1038 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff" to get inode usage: stat /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36" to get inode usage: stat /var/lib/docker/containers/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36: no such file or directory
Jan 04 07:24:11 minikube kubelet[1038]: E0104 07:24:11.830537 1038 fsHandler.go:114] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff" to get inode usage: stat /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36" to get inode usage: stat /var/lib/docker/containers/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36: no such file or directory
Jan 04 07:24:13 minikube kubelet[1038]: E0104 07:24:13.068745 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:24:17 minikube kubelet[1038]: W0104 07:24:17.814505 1038 container.go:549] Failed to update stats for container "/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats
Jan 04 07:24:24 minikube kubelet[1038]: W0104 07:24:24.705064 1038 container.go:549] Failed to update stats for container "/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats
Jan 04 07:24:28 minikube kubelet[1038]: E0104 07:24:28.069054 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:24:42 minikube kubelet[1038]: E0104 07:24:42.069227 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:24:54 minikube kubelet[1038]: E0104 07:24:54.070729 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:25:06 minikube kubelet[1038]: E0104 07:25:06.073241 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
Jan 04 07:25:19 minikube kubelet[1038]: E0104 07:25:19.240297 1038 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Jan 04 07:25:19 minikube kubelet[1038]: E0104 07:25:19.240336 1038 kuberuntime_image.go:51] Pull image
"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Jan 04 07:25:19 minikube kubelet[1038]: E0104 07:25:19.240438 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Jan 04 07:25:19 minikube kubelet[1038]: E0104 07:25:19.240468 1038 pod_workers.go:191] Error syncing pod 
25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied" Jan 04 07:25:21 minikube kubelet[1038]: W0104 07:25:21.087689 1038 container.go:549] Failed to update stats for container "/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:25:29 minikube kubelet[1038]: W0104 07:25:29.580258 1038 container.go:549] Failed to update stats for container "/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:25:33 minikube kubelet[1038]: E0104 07:25:33.067163 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image 
\"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:25:46 minikube kubelet[1038]: E0104 07:25:46.071206 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:25:57 minikube kubelet[1038]: E0104 07:25:57.067942 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:26:11 minikube kubelet[1038]: E0104 07:26:11.068481 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:26:26 minikube kubelet[1038]: E0104 07:26:26.067927 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:26:27 minikube kubelet[1038]: W0104 07:26:27.094219 1038 container.go:549] Failed to update stats for container 
"/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:26:38 minikube kubelet[1038]: E0104 07:26:38.067280 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:26:46 minikube kubelet[1038]: W0104 07:26:46.534484 1038 container.go:549] Failed to update stats for container "/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:26:49 minikube kubelet[1038]: E0104 07:26:49.068081 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:27:03 minikube kubelet[1038]: E0104 07:27:03.067435 1038 pod_workers.go:191] 
Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:27:18 minikube kubelet[1038]: E0104 07:27:18.068224 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:27:30 minikube kubelet[1038]: E0104 07:27:30.069368 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:27:38 minikube kubelet[1038]: W0104 07:27:38.093388 1038 container.go:549] Failed to update stats for container "/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:27:41 minikube kubelet[1038]: E0104 07:27:41.068865 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b 
("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:27:55 minikube kubelet[1038]: E0104 07:27:55.071449 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:28:06 minikube kubelet[1038]: W0104 07:28:06.825942 1038 container.go:549] Failed to update stats for container "/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:28:07 minikube kubelet[1038]: E0104 07:28:07.934691 1038 remote_image.go:113] PullImage "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" from image service failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Jan 04 07:28:07 minikube kubelet[1038]: E0104 07:28:07.934774 1038 kuberuntime_image.go:51] Pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0" failed: rpc error: code = Unknown desc = Error response from daemon: pull access denied 
for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Jan 04 07:28:07 minikube kubelet[1038]: E0104 07:28:07.934948 1038 kuberuntime_manager.go:829] container &Container{Name:minikube-ingress-dns,Image:registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:53,ContainerPort:53,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:DNS_PORT,Value:53,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:minikube-ingress-dns-token-qhd4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied Jan 04 07:28:07 minikube kubelet[1038]: E0104 07:28:07.934998 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for 
"minikube-ingress-dns" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied" Jan 04 07:28:19 minikube kubelet[1038]: E0104 07:28:19.069166 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:28:32 minikube kubelet[1038]: E0104 07:28:32.069925 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b ("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\"" Jan 04 07:28:39 minikube kubelet[1038]: W0104 07:28:39.008157 1038 container.go:549] Failed to update stats for container "/docker/799bc251ab42c92286a9540a5b72be396cc89c1e507940524299dddbd69dc0f4/kubepods/besteffort/podfbb7e890-91ed-4eed-b4a8-3e407102da7f/ddafb2f756b2b37ef8fbf3ee4af4148cac8e22e50b4eb2eb4b200f3c51bfac36": unable to determine device info for dir: /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff: stat failed on /var/lib/docker/overlay2/bc0d00a3139d8c4aa28df96add5e68da1845cb8b2735b1a41e07c71d2ddd7cf8/diff with error: no such file or directory, continuing to push stats Jan 04 07:28:43 minikube kubelet[1038]: E0104 07:28:43.068115 1038 pod_workers.go:191] Error syncing pod 25208aca-28ad-4de5-a0bc-9f123f743b7b 
("kube-ingress-dns-minikube_kube-system(25208aca-28ad-4de5-a0bc-9f123f743b7b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImagePullBackOff: "Back-off pulling image \"registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0\""
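The kubelet log shows the underlying failure: the ingress-dns addon is configured to pull minikube-ingress-dns:0.3.0 from the registry.cn-hangzhou.aliyuncs.com mirror, which rejects the pull because the repository does not exist there. A rough way to confirm this and work around it is to pull the image from a reachable registry inside the minikube node and retag it under the mirror's name, so the kubelet's IfNotPresent pull policy finds it locally. This is only a sketch; the upstream image name below is an assumption and should be verified first.

```
# Confirm the mirror really rejects the pull (run against the minikube node):
minikube ssh -- docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0

# Workaround sketch: pull from a reachable registry and retag.
# NOTE: "cryptexlabs/minikube-ingress-dns:0.3.0" is an assumed upstream name,
# not taken from this issue -- check it before relying on it.
minikube ssh -- docker pull cryptexlabs/minikube-ingress-dns:0.3.0
minikube ssh -- docker tag cryptexlabs/minikube-ingress-dns:0.3.0 \
    registry.cn-hangzhou.aliyuncs.com/google_containers/minikube-ingress-dns:0.3.0
```

Because the container's ImagePullPolicy is IfNotPresent (visible in the Container spec above), the kubelet should use the retagged local image on the next sync instead of retrying the failing registry.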
==> kubernetes-dashboard [0ab465a2f9d0] <==
2021/01/04 07:27:16 [2021-01-04T07:27:16Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1:
2021/01/04 07:27:16 Getting list of all deployments in the cluster
2021/01/04 07:27:16 [2021-01-04T07:27:16Z] Outcoming response to 127.0.0.1 with 200 status code
[the same GET /api/v1/deployment/default request/response triplet repeats every 5 seconds, always returning 200, through 2021/01/04 07:28:51]
==> kubernetes-dashboard [51dc2ca7e232] <==
2021/01/04 03:56:43 Using namespace: kubernetes-dashboard
2021/01/04 03:56:43 Using in-cluster config to connect to apiserver
2021/01/04 03:56:43 Using secret token for csrf signing
2021/01/04 03:56:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2021/01/04 03:56:43 Starting overwatch
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0003b57c0)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x413
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0004a6100)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0004a6100)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
	/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x21c
==> storage-provisioner [a6569795d896] <==
I0104 05:51:19.646968 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
I0104 05:51:19.704708 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
I0104 05:51:19.704755 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0104 05:52:03.389817 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0104 05:52:03.389489 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a81cca2-140f-445b-8220-42685b69ce40", APIVersion:"v1", ResourceVersion:"13016", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_8a8f4821-ab75-4133-ba21-ca0432d01eac became leader
I0104 05:52:03.392192 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_8a8f4821-ab75-4133-ba21-ca0432d01eac!
I0104 05:52:04.513767 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_8a8f4821-ab75-4133-ba21-ca0432d01eac!
I0104 05:53:32.781825 1 leaderelection.go:288] failed to renew lease kube-system/k8s.io-minikube-hostpath: failed to tryAcquireOrRenew context deadline exceeded
F0104 05:53:32.782616 1 controller.go:877] leaderelection lost
I0104 05:53:37.288187 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a81cca2-140f-445b-8220-42685b69ce40", APIVersion:"v1", ResourceVersion:"13106", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_8a8f4821-ab75-4133-ba21-ca0432d01eac stopped leading
==> storage-provisioner [c23fa743ea7b] <==
I0104 05:54:10.126755 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
I0104 05:54:10.173513 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
I0104 05:54:10.173657 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0104 05:54:27.685124 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0104 05:54:27.685385 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_b197d332-32f0-4200-aa37-52e638c2811c!
I0104 05:54:27.685584 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8a81cca2-140f-445b-8220-42685b69ce40", APIVersion:"v1", ResourceVersion:"13136", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_b197d332-32f0-4200-aa37-52e638c2811c became leader
I0104 05:54:27.785539 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_b197d332-32f0-4200-aa37-52e638c2811c!
Operating system version used: