mynicolas closed this issue 10 months ago.
The above is the disk I/O; all three masters behave the same way.
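For context, this is the kind of per-process measurement meant above: a minimal sketch that samples the cumulative `write_bytes` counter from Linux's `/proc/<pid>/io` twice to estimate a process's write rate (the helper name and the one-second window are illustrative choices, not from any tool mentioned in this thread; reading another user's pid needs matching permissions or root).

```python
# Minimal sketch: estimate a process's write rate by sampling
# /proc/<pid>/io twice (Linux only).
import time

def write_bytes(pid="self"):
    """Return the cumulative write_bytes value from /proc/<pid>/io."""
    with open(f"/proc/{pid}/io") as f:
        for line in f:
            if line.startswith("write_bytes:"):
                return int(line.split(":")[1])
    return 0

if __name__ == "__main__":
    before = write_bytes()  # sample our own process as a demo
    time.sleep(1)
    after = write_bytes()
    print(f"wrote {after - before} bytes in 1s")
```

Pointing `pid` at the kubelet's pid while toggling the service would confirm whether the writes come from kubelet itself or from something it spawns.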
I redid the test from scratch:

- All services stopped, only etcd running: no problem.
- etcd + kube-apiserver: no problem.
- etcd + kube-apiserver + kube-controller-manager: no problem.
- etcd + kube-apiserver + kube-controller-manager + kube-scheduler: no problem.
- etcd + kube-apiserver + kube-controller-manager + kube-scheduler + kube-proxy: no problem.
- etcd + kube-apiserver + kube-controller-manager + kube-scheduler + kube-proxy + kubelet: disk I/O is immediately saturated. Stopping kubelet again brings the I/O back down to 0.

The kubelet log is as follows:
-- Unit kubelet.service has begun starting up.
Dec 05 15:12:51 k8s-master-0 systemd[1]: Started Kubernetes Kubelet.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.230908 52622 flags.go:64] FLAG: --address="0.0.0.0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.230980 52622 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.230990 52622 flags.go:64] FLAG: --anonymous-auth="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.230995 52622 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231797 52622 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231805 52622 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231811 52622 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231816 52622 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231820 52622 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231824 52622 flags.go:64] FLAG: --azure-container-registry-config=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231828 52622 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231833 52622 flags.go:64] FLAG: --bootstrap-kubeconfig=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231837 52622 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231840 52622 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231844 52622 flags.go:64] FLAG: --cgroup-root=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231847 52622 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231851 52622 flags.go:64] FLAG: --client-ca-file=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231854 52622 flags.go:64] FLAG: --cloud-config=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231857 52622 flags.go:64] FLAG: --cloud-provider=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231860 52622 flags.go:64] FLAG: --cluster-dns="[]"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231866 52622 flags.go:64] FLAG: --cluster-domain=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231870 52622 flags.go:64] FLAG: --config="/var/lib/kubelet/config.yaml"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231873 52622 flags.go:64] FLAG: --config-dir=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231878 52622 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231882 52622 flags.go:64] FLAG: --container-log-max-files="5"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231895 52622 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231898 52622 flags.go:64] FLAG: --container-runtime-endpoint="unix:///run/containerd/containerd.sock"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231902 52622 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231911 52622 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231919 52622 flags.go:64] FLAG: --contention-profiling="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231924 52622 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231931 52622 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231935 52622 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231938 52622 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231943 52622 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231947 52622 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231951 52622 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231954 52622 flags.go:64] FLAG: --enable-load-reader="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231958 52622 flags.go:64] FLAG: --enable-server="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231961 52622 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231967 52622 flags.go:64] FLAG: --event-burst="100"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231971 52622 flags.go:64] FLAG: --event-qps="50"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231974 52622 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231978 52622 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231981 52622 flags.go:64] FLAG: --eviction-hard=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231986 52622 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231992 52622 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.231999 52622 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232002 52622 flags.go:64] FLAG: --eviction-soft=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232006 52622 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232009 52622 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232012 52622 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232016 52622 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232022 52622 flags.go:64] FLAG: --fail-swap-on="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232025 52622 flags.go:64] FLAG: --feature-gates=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232030 52622 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232033 52622 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232037 52622 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232043 52622 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232049 52622 flags.go:64] FLAG: --healthz-port="10248"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232058 52622 flags.go:64] FLAG: --help="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232063 52622 flags.go:64] FLAG: --hostname-override="master-01"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232070 52622 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232073 52622 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232077 52622 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232080 52622 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232083 52622 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232086 52622 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232090 52622 flags.go:64] FLAG: --image-service-endpoint=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232093 52622 flags.go:64] FLAG: --iptables-drop-bit="15"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232096 52622 flags.go:64] FLAG: --iptables-masquerade-bit="14"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232103 52622 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232107 52622 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232111 52622 flags.go:64] FLAG: --kube-api-burst="100"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232114 52622 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232119 52622 flags.go:64] FLAG: --kube-api-qps="50"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232130 52622 flags.go:64] FLAG: --kube-reserved=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232134 52622 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232142 52622 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/kubelet.kubeconfig"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232147 52622 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232156 52622 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232161 52622 flags.go:64] FLAG: --lock-file=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232165 52622 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232170 52622 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232175 52622 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232190 52622 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232194 52622 flags.go:64] FLAG: --logging-format="text"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232199 52622 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232205 52622 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232216 52622 flags.go:64] FLAG: --manifest-url=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232221 52622 flags.go:64] FLAG: --manifest-url-header=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232229 52622 flags.go:64] FLAG: --max-open-files="1000000"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232239 52622 flags.go:64] FLAG: --max-pods="110"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232244 52622 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232247 52622 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232250 52622 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232253 52622 flags.go:64] FLAG: --minimum-container-ttl-duration="0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232257 52622 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232260 52622 flags.go:64] FLAG: --node-ip=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232263 52622 flags.go:64] FLAG: --node-labels=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232274 52622 flags.go:64] FLAG: --node-status-max-images="50"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232277 52622 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232281 52622 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232285 52622 flags.go:64] FLAG: --pod-cidr=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232289 52622 flags.go:64] FLAG: --pod-infra-container-image="registry.k8s.io/pause:3.9"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232292 52622 flags.go:64] FLAG: --pod-manifest-path=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232296 52622 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232301 52622 flags.go:64] FLAG: --pods-per-core="0"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232306 52622 flags.go:64] FLAG: --port="10250"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232310 52622 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232313 52622 flags.go:64] FLAG: --provider-id=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232316 52622 flags.go:64] FLAG: --qos-reserved=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232319 52622 flags.go:64] FLAG: --read-only-port="10255"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232323 52622 flags.go:64] FLAG: --register-node="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232326 52622 flags.go:64] FLAG: --register-schedulable="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232329 52622 flags.go:64] FLAG: --register-with-taints=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232333 52622 flags.go:64] FLAG: --registry-burst="10"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232336 52622 flags.go:64] FLAG: --registry-qps="5"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232340 52622 flags.go:64] FLAG: --reserved-cpus=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232344 52622 flags.go:64] FLAG: --reserved-memory=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232348 52622 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232351 52622 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232355 52622 flags.go:64] FLAG: --rotate-certificates="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232358 52622 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232362 52622 flags.go:64] FLAG: --runonce="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232366 52622 flags.go:64] FLAG: --runtime-cgroups=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232370 52622 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232373 52622 flags.go:64] FLAG: --seccomp-default="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232376 52622 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232380 52622 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232383 52622 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232387 52622 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232390 52622 flags.go:64] FLAG: --storage-driver-password="root"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232394 52622 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232397 52622 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232401 52622 flags.go:64] FLAG: --storage-driver-user="root"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232404 52622 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232407 52622 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232411 52622 flags.go:64] FLAG: --system-cgroups=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232414 52622 flags.go:64] FLAG: --system-reserved=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232418 52622 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232422 52622 flags.go:64] FLAG: --tls-cert-file=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232425 52622 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232429 52622 flags.go:64] FLAG: --tls-min-version=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232432 52622 flags.go:64] FLAG: --tls-private-key-file=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232435 52622 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232440 52622 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232443 52622 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232446 52622 flags.go:64] FLAG: --v="2"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232451 52622 flags.go:64] FLAG: --version="false"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232456 52622 flags.go:64] FLAG: --vmodule=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232461 52622 flags.go:64] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232465 52622 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.232529 52622 feature_gate.go:249] feature gates: &{map[]}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.249830 52622 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.249853 52622 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.249891 52622 feature_gate.go:249] feature gates: &{map[]}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.249983 52622 feature_gate.go:249] feature gates: &{map[]}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.271522 52622 dynamic_cafile_content.go:119] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/ssl/ca.pem"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.271720 52622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/ssl/ca.pem"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.617759 52622 remote_runtime.go:143] "Validated CRI v1 runtime API"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.619372 52622 remote_image.go:111] "Validated CRI v1 image API"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.700400 52622 manager.go:162] cAdvisor running in container: "/system.slice/kubelet.service"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.779848 52622 fs.go:133] Filesystem UUIDs: map[b846a7df-1377-43dc-931b-e1b2bb287eb5:/dev/vda1]
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.779869 52622 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda1:{mountpoint:/ major:253 minor:1 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containerd/io.containerd.grpc.v1.cri/sandboxes/21d069a0029f0da74942888ef0a36bef56ee97849da59778a2287dad48a29ce3/shm:{mountpoint:/run/containerd/io.containerd.grpc.v1.cri/sandboxes/21d069a0029f0da74942888ef0a36bef56ee97849da59778a2287dad48a29ce3/shm major:0 minor:49 fsType:tmpfs blockSize:0} /run/containerd/io.containerd.grpc.v1.cri/sandboxes/7bb9025b67a48de86057a306e83e7c293473f61eac340f03afffb3b295052b21/shm:{mountpoint:/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7bb9025b67a48de86057a306e83e7c293473f61eac340f03afffb3b295052b21/shm major:0 minor:67 fsType:tmpfs blockSize:0} /run/user/0:{mountpoint:/run/user/0 major:0 minor:46 fsType:tmpfs blockSize:0} /sys/fs/cgroup:{mountpoint:/sys/fs/cgroup major:0 minor:25 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ab1bcfa-dbf2-41e6-ab64-05e0301da9af/volumes/kubernetes.io~projected/kube-api-access-f7rtb:{mountpoint:/var/lib/kubelet/pods/0ab1bcfa-dbf2-41e6-ab64-05e0301da9af/volumes/kubernetes.io~projected/kube-api-access-f7rtb major:0 minor:45 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~projected/kube-api-access-4jqr6:{mountpoint:/var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~projected/kube-api-access-4jqr6 major:0 minor:48 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~secret/etcd-certs:{mountpoint:/var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~secret/etcd-certs major:0 minor:47 fsType:tmpfs blockSize:0} 
overlay_0-50:{mountpoint:/run/containerd/io.containerd.runtime.v2.task/k8s.io/21d069a0029f0da74942888ef0a36bef56ee97849da59778a2287dad48a29ce3/rootfs major:0 minor:50 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/run/containerd/io.containerd.runtime.v2.task/k8s.io/314ec18bde43300d9a7fcf31c5c721c7f719fc17c5233e1ea32359193dd25cdb/rootfs major:0 minor:60 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bb9025b67a48de86057a306e83e7c293473f61eac340f03afffb3b295052b21/rootfs major:0 minor:68 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ea884a02746e72e8550330c9ac572c81c5d16da6366becc2c46e195bfb35101/rootfs major:0 minor:79 fsType:overlay blockSize:0}]
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.784381 52622 manager.go:210] Machine: {Timestamp:2023-12-05 15:12:53.784148207 +0800 CST m=+1.868351955 CPUVendorID:GenuineIntel NumCores:4 NumPhysicalCores:4 NumSockets:1 CpuFrequency:2294608 MemoryCapacity:8106020864 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8e422ea9be8644bc974500f967d55b8f SystemUUID:4237cfc2-5f71-47f8-b32e-6852a01de289 BootID:7a9a97d4-2083-4a47-9ae1-252394fddc77 Filesystems:[{Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:322110992384 Type:vfs Inodes:157285824 HasInodes:true} {Device:/sys/fs/cgroup DeviceMajor:0 DeviceMinor:25 Capacity:1905524736 Type:vfs Inodes:465216 HasInodes:true} {Device:/dev/vda1 DeviceMajor:253 DeviceMinor:1 Capacity:322110992384 Type:vfs Inodes:157285824 HasInodes:true} {Device:/var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~projected/kube-api-access-4jqr6 DeviceMajor:0 DeviceMinor:48 Capacity:5694296064 Type:vfs Inodes:989504 HasInodes:true} {Device:/run/containerd/io.containerd.grpc.v1.cri/sandboxes/21d069a0029f0da74942888ef0a36bef56ee97849da59778a2287dad48a29ce3/shm DeviceMajor:0 DeviceMinor:49 Capacity:67108864 Type:vfs Inodes:989504 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:322110992384 Type:vfs Inodes:157285824 HasInodes:true} {Device:/run/containerd/io.containerd.grpc.v1.cri/sandboxes/7bb9025b67a48de86057a306e83e7c293473f61eac340f03afffb3b295052b21/shm DeviceMajor:0 DeviceMinor:67 Capacity:67108864 Type:vfs Inodes:989504 HasInodes:true} {Device:/run/user/0 DeviceMajor:0 DeviceMinor:46 Capacity:810598400 Type:vfs Inodes:989504 HasInodes:true} {Device:/var/lib/kubelet/pods/0ab1bcfa-dbf2-41e6-ab64-05e0301da9af/volumes/kubernetes.io~projected/kube-api-access-f7rtb DeviceMajor:0 DeviceMinor:45 Capacity:5694296064 Type:vfs Inodes:989504 
HasInodes:true} {Device:/var/lib/kubelet/pods/3bc21a0c-4d82-4338-8f04-da9cbd26a010/volumes/kubernetes.io~secret/etcd-certs DeviceMajor:0 DeviceMinor:47 Capacity:5694296064 Type:vfs Inodes:989504 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:322110992384 Type:vfs Inodes:157285824 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:1905524736 Type:vfs Inodes:465216 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:1905524736 Type:vfs Inodes:465216 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:322110992384 Type:vfs Inodes:157285824 HasInodes:true}] DiskMap:map[253:0:{Name:vda Major:253 Minor:0 Size:322122547200 Scheduler:mq-deadline}] NetworkDevices:[{Name:cali982da746df3 MacAddress:ee:ee:ee:ee:ee:ee Speed:10000 Mtu:1500} {Name:ens3 MacAddress:fa:18:49:7b:d8:00 Speed:-1 Mtu:1500} {Name:kube-ipvs0 MacAddress:42:83:70:f7:64:55 Speed:0 Mtu:1500} {Name:tunl0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:8106020864 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:4194304 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:1 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:4194304 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:2 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:4194304 Type:Unified Level:2}] UncoreCaches:[] SocketID:0} {Id:3 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:4194304 Type:Unified Level:2}] UncoreCaches:[] SocketID:0}] Caches:[{Id:0 Size:16777216 Type:Unified Level:3}] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.784550 52622 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.787201 52622 manager.go:226] Version: {KernelVersion:4.18.0-477.27.1.el8_8.x86_64 ContainerOsVersion:Rocky Linux 8.8 (Green Obsidian) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.787320 52622 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788339 52622 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788576 52622 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"/podruntime.slice","SystemReservedCgroupName":"/system.slice","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"kube-reserved":{},"pods":{},"system-reserved":{}},"KubeReserved":{"cpu":"500m","memory":"1000Mi","pid":"1k"},"SystemReserved":{"cpu":"500m","memory":"1000Mi","pid":"5k"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"300Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":1024,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788618 52622 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788629 52622 container_manager_linux.go:301] "Creating device plugin manager"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788664 52622 manager.go:135] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788690 52622 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788745 52622 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.788797 52622 server.go:1197] "Using root directory" path="/var/lib/kubelet"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.792369 52622 kubelet.go:393] "Attempting to sync node with API server"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.792396 52622 kubelet.go:309] "Adding apiserver pod source"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.792426 52622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.793045 52622 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.6.23" apiVersion="v1"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800550 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800582 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800590 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800597 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800611 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800622 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800628 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800635 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800655 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800667 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800675 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800682 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800690 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800703 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800719 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800727 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.800766 52622 plugins.go:635] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.801050 52622 server.go:1232] "Started kubelet"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.801453 52622 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.802254 52622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.802704 52622 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.802974 52622 server.go:462] "Adding debug handlers to kubelet server"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.803038 52622 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.817623 52622 volume_manager.go:289] "The desired_state_of_world populator starts"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.817646 52622 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: E1205 15:12:53.817816 52622 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: E1205 15:12:53.817864 52622 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.834293 52622 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.852462 52622 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ab1bcfa-dbf2-41e6-ab64-05e0301da9af" volumeName="kubernetes.io/projected/0ab1bcfa-dbf2-41e6-ab64-05e0301da9af-kube-api-access-f7rtb" seLinuxMountContext=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.852517 52622 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3bc21a0c-4d82-4338-8f04-da9cbd26a010" volumeName="kubernetes.io/projected/3bc21a0c-4d82-4338-8f04-da9cbd26a010-kube-api-access-4jqr6" seLinuxMountContext=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.852529 52622 reconstruct_new.go:135] "Volume is marked as uncertain and added into the actual state" pod="" podName="3bc21a0c-4d82-4338-8f04-da9cbd26a010" volumeName="kubernetes.io/secret/3bc21a0c-4d82-4338-8f04-da9cbd26a010-etcd-certs" seLinuxMountContext=""
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.852538 52622 reconstruct_new.go:102] "Volume reconstruction finished"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.852546 52622 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.855951 52622 reconstruct_new.go:210] "DevicePaths of reconstructed volumes updated"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.918770 52622 kubelet_node_status.go:352] "Setting node annotation to enable volume controller attach/detach"
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.957073 52622 factory.go:145] Registering containerd factory
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.957199 52622 factory.go:55] Registering systemd factory
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.957267 52622 factory.go:103] Registering Raw factory
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.957319 52622 manager.go:1186] Started watching for new ooms in manager
Dec 05 15:12:53 k8s-master-0 kubelet[52622]: I1205 15:12:53.958344 52622 manager.go:299] Starting recovery of all containers
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.067110 52622 manager.go:1231] failed getting container info for "/system.slice/dnf-makecache.service": unknown container "/system.slice/dnf-makecache.service"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.223467 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasSufficientMemory"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.224126 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasNoDiskPressure"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.224137 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasSufficientPID"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.224182 52622 kubelet_node_status.go:70] "Attempting to register node" node="master-01"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.473165 52622 kubelet_node_status.go:108] "Node was previously registered" node="master-01"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.473272 52622 kubelet_node_status.go:73] "Successfully registered node" node="master-01"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.599422 52622 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.20.0.0/24"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.600285 52622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.20.0.0/24"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.601115 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasSufficientMemory"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.601145 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasNoDiskPressure"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.601156 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeHasSufficientPID"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.601177 52622 kubelet_node_status.go:669] "Recording event message for node" node="master-01" event="NodeNotReady"
Dec 05 15:12:54 k8s-master-0 kubelet[52622]: I1205 15:12:54.783588 52622 setters.go:552] "Node became not ready" node="master-01" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2023-12-05T07:12:54Z","lastTransitionTime":"2023-12-05T07:12:54Z","reason":"KubeletNotReady","message":"[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"}
Dec 05 15:12:55 k8s-master-0 kubelet[52622]: I1205 15:12:55.336760 52622 apiserver.go:52] "Watching apiserver"
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: I1205 15:14:51.092209 52622 trace.go:236] Trace[1677399163]: "Reflector ListAndWatch" name:pkg/kubelet/config/apiserver.go:66 (05-Dec-2023 15:12:55.430) (total time: 34071ms):
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: Trace[1677399163]: ---"Objects listed" error:<nil> 2641ms (15:12:58.071)
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: Trace[1677399163]: ---"Objects extracted" 2778ms (15:13:01.073)
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: Trace[1677399163]: ---"SyncWith done" 25898ms (15:13:26.971)
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: Trace[1677399163]: ---"Resource version updated" 2530ms (15:13:29.502)
Dec 05 15:14:52 k8s-master-0 kubelet[52622]: Trace[1677399163]: [34.071874853s] [34.071874853s] END
Dec 05 15:14:53 k8s-master-0 kubelet[52622]: W1205 15:14:53.647823 52622 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Dec 05 15:14:53 k8s-master-0 kubelet[52622]: W1205 15:14:53.912839 52622 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Dec 05 15:15:27 k8s-master-0 systemd[1]: Stopping Kubernetes Kubelet...
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
What happened?
Right after the cluster starts, everything is normal. After roughly half an hour, CPU on all three masters maxes out, and so does disk IO. The services with the highest disk IO and CPU consumption are kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. This is a fresh cluster with only the following installed:

[root@k8s-master-0 ~]# kubectl -n kube-system get po
NAME                                       READY   STATUS    RESTARTS          AGE
calico-kube-controllers-86b55cf789-62xk9   1/1     Running   109 (5m59s ago)   15h
calico-node-4d6xf                          1/1     Running   0                 17h
calico-node-8xmht                          1/1     Running   0                 17h
calico-node-j5z85                          1/1     Running   1 (15h ago)       17h
calico-node-jhdnw                          1/1     Running   0                 17h
coredns-7bc88ddb8b-x2qj8                   1/1     Running   0                 15h
metrics-server-dfb478476-9x4sc             1/1     Running   1 (15h ago)       15h
node-local-dns-gcj2n                       1/1     Running   1 (15h ago)       17h
node-local-dns-vhjnx                       1/1     Running   8                 17h
node-local-dns-w5pvt                       1/1     Running   11 (74m ago)      17h
node-local-dns-x4c5q                       1/1     Running   48 (99m ago)      17h
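When disk IO is pinned like this, ranking processes by cumulative write bytes from `/proc/<pid>/io` attributes the writes more precisely than per-service guessing. A minimal sketch, Linux only; reading other processes' `io` files generally requires root:

```shell
#!/bin/sh
# Rank the top 10 processes by total bytes written to storage so far.
# write_bytes in /proc/<pid>/io counts bytes the process caused to be
# sent to the block layer (cumulative since process start).
for pid in /proc/[0-9]*; do
  io="$pid/io"
  [ -r "$io" ] || continue
  wb=$(awk '/^write_bytes/ {print $2}' "$io" 2>/dev/null)
  [ -n "$wb" ] || continue
  name=$(cat "$pid/comm" 2>/dev/null)
  printf '%s\t%s\t%s\n' "$wb" "${pid#/proc/}" "$name"
done | sort -rn | head -n 10
```

For a live view, `iotop -oPa` (accumulated, active processes only) or `pidstat -d 1` show the same information interactively.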
What did you expect to happen?
Find the root cause and get the cluster back to normal.
How can we reproduce it (as minimally and precisely as possible)?
After each restart of the three masters, the cluster works normally for a while, but as soon as I run kubectl apply -f xxx, CPU maxes out and kubectl fails with "Unable to connect to the server: net/http: TLS handshake timeout". This reproduces every time.
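Since a single `kubectl apply` triggers the spike followed by TLS handshake timeouts, slow etcd disk syncs are worth ruling out first: when WAL fsync or backend commit stalls, the apiserver backs up and new connections time out. A hedged sketch for pulling the relevant etcd latency counters; the endpoint and cert paths are assumptions for a typical kubeasz layout and must be adjusted to your deployment:

```shell
#!/bin/sh
# Fetch etcd's disk-latency histograms from its /metrics endpoint.
# Sustained wal_fsync averages above ~10ms generally indicate the disk
# cannot keep up with etcd's sync workload.
# ASSUMPTION: local etcd on :2379 with kubeasz-style cert paths.
curl -s --cacert /etc/kubernetes/ssl/ca.pem \
     --cert   /etc/kubernetes/ssl/etcd.pem \
     --key    /etc/kubernetes/ssl/etcd-key.pem \
     https://127.0.0.1:2379/metrics |
  grep -E 'etcd_disk_(wal_fsync|backend_commit)_duration_seconds_(sum|count)'
```

Dividing `_sum` by `_count` gives the mean sync latency; `ETCDCTL_API=3 etcdctl endpoint status -w table` is a quicker health check if etcdctl is available.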
Anything else we need to know?
Kubernetes version
Kubeasz version
OS version
Related plugins (CNI, CSI, ...) and versions (if applicable)