kubernetes-sigs / metrics-server

Scalable and efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/
Apache License 2.0

Metrics-Server is in CrashLoopBackOff with every new and fresh install #811

Closed fosiul closed 3 years ago

fosiul commented 3 years ago

What happened:

Every time I install Kubernetes with RKE, everything works except metrics-server, which goes into "CrashLoopBackOff". I have created the cluster at least 10 times in 2 different environments; there are no network issues and no iptables issues.

From searching online, people suggest I need to add:

Command: /metrics-server --kubelet-insecure-tls --kubelet-preferred-address-types=InternalIP

But my questions are: 1) When I am using RKE, why is this not added by default (if that is the real issue)? 2) What do I need to do to add it?
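For the second question, the RKE-native place to pass extra flags to the metrics-server addon appears to be the monitoring section of cluster.yml. This is a sketch only: it assumes RKE forwards the options map as command-line arguments to the addon, which is worth verifying against the RKE documentation for your version.

```yaml
# cluster.yml (sketch, not verified against this RKE version):
# as far as I know, RKE renders each key/value in "options" as an
# extra --key=value flag on the metrics-server container.
monitoring:
  provider: metrics-server
  options:
    kubelet-insecure-tls: "true"
    kubelet-preferred-address-types: "InternalIP"
```

Note that the pod description further down in this issue shows these two flags already present in the Args, so in this particular case they were not the missing piece.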

What you expected to happen: Metrics-Server should be in the Running state.

Anything else we need to know?:

Environment:

[rke@rke19-master1 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.10", GitCommit:"98d5dc5d36d34a7ee13368a7893dcb400ec4e566", GitTreeState:"clean", BuildDate:"2021-04-15T03:28:42Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.10", GitCommit:"98d5dc5d36d34a7ee13368a7893dcb400ec4e566", GitTreeState:"clean", BuildDate:"2021-04-15T03:20:25Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

[rke@rke19-master1 ~]$ cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[rke@rke19-master1 ~]$ rke version
INFO[0000] Running RKE version: v1.2.8
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.10", GitCommit:"98d5dc5d36d34a7ee13368a7893dcb400ec4e566", GitTreeState:"clean", BuildDate:"2021-04-15T03:20:25Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
[rke@rke19-master1 ~]$

[rke@rke19-master1 ~]$ cat cluster.yml nodes:

services: etcd: image: "" extra_args: {} extra_binds: [] extra_env: [] external_urls: [] ca_cert: "" cert: "" key: "" path: "" uid: 0 gid: 0 snapshot: null retention: "" creation: "" backup_config: null kube-api: image: "" extra_args: {} extra_binds: [] extra_env: [] service_cluster_ip_range: 10.43.0.0/16 service_node_port_range: "" pod_security_policy: false always_pull_images: false secrets_encryption_config: null audit_log: null admission_configuration: null event_rate_limit: null kube-controller: image: "" extra_args: node-monitor-period: '5s' node-monitor-grace-period: '20s' node-startup-grace-period: '30s' pod-eviction-timeout: '1m' concurrent-deployment-syncs: 5 concurrent-endpoint-syncs: 5 concurrent-gc-syncs: 20 concurrent-namespace-syncs: 10 concurrent-replicaset-syncs: 5 concurrent-service-syncs: 1 concurrent-serviceaccount-token-syncs: 5 deployment-controller-sync-period: 30s pvclaimbinder-sync-period: 15s extra_binds: [] extra_env: [] cluster_cidr: 10.42.0.0/16 service_cluster_ip_range: 10.43.0.0/16 scheduler: image: "" extra_args: {} extra_binds: [] extra_env: [] kubelet: image: "" extra_args: enforce-node-allocatable: 'pods' system-reserved: 'cpu=1,memory=1024Mi' kube-reserved: 'cpu=1,memory=2024Mi' eviction-hard: 'memory.available<500Mi,nodefs.available<10%,imagefs.available<15%,nodefs.inodesFree<5%' eviction-max-pod-grace-period: '30' eviction-pressure-transition-period: '30s' node-status-update-frequency: 10s global-housekeeping-interval: 1m0s housekeeping-interval: 10s runtime-request-timeout: 2m0s volume-stats-agg-period: 1m0s extra_binds: [] extra_env: [] cluster_domain: cluster.local infra_container_image: "" cluster_dns_server: 10.43.0.10 fail_swap_on: false kubeproxy: image: "" extra_args: {} extra_binds: [] extra_env: [] network: plugin: canal options: {} mtu: 0 node_selector: {} authentication: strategy: x509 sans: [] webhook: null addons: "" addons_include: [] system_images: etcd: 192.168.0.35:5000/rancher/coreos-etcd:v3.4.13-rancher1 alpine: 192.168.0.35:5000/rancher/rke-tools:v0.1.68 nginx_proxy: 192.168.0.35:5000/rancher/rke-tools:v0.1.68 cert_downloader: 192.168.0.35:5000/rancher/rke-tools:v0.1.68 kubernetes_services_sidecar: 192.168.0.35:5000/rancher/rke-tools:v0.1.68 kubedns: 192.168.0.35:5000/rancher/k8s-dns-kube-dns:1.15.10 dnsmasq: 192.168.0.35:5000/rancher/k8s-dns-dnsmasq-nanny:1.15.10 kubedns_sidecar: 192.168.0.35:5000/rancher/k8s-dns-sidecar:1.15.10 kubedns_autoscaler: 192.168.0.35:5000/rancher/cluster-proportional-autoscaler:1.8.1 coredns: 192.168.0.35:5000/rancher/coredns-coredns:1.7.0 coredns_autoscaler: 192.168.0.35:5000/rancher/cluster-proportional-autoscaler:1.8.1 nodelocal: 192.168.0.35:5000/rancher/k8s-dns-node-cache:1.15.13 kubernetes: 192.168.0.35:5000/rancher/hyperkube:v1.19.10-rancher1 flannel: 192.168.0.35:5000/rancher/coreos-flannel:v0.13.0-rancher1 flannel_cni: 192.168.0.35:5000/rancher/flannel-cni:v0.3.0-rancher6 calico_node: 192.168.0.35:5000/rancher/calico-node:v3.16.5 calico_cni: 192.168.0.35:5000/rancher/calico-cni:v3.16.5 calico_controllers: 192.168.0.35:5000/rancher/calico-kube-controllers:v3.16.5 calico_ctl: 192.168.0.35:5000/rancher/calico-ctl:v3.16.5 calico_flexvol: 192.168.0.35:5000/rancher/calico-pod2daemon-flexvol:v3.16.5 canal_node: 192.168.0.35:5000/rancher/calico-node:v3.16.5 canal_cni: 192.168.0.35:5000/rancher/calico-cni:v3.16.5 canal_controllers: 192.168.0.35:5000/rancher/calico-kube-controllers:v3.16.5 canal_flannel: 192.168.0.35:5000/rancher/coreos-flannel:v0.13.0-rancher1 canal_flexvol: 
192.168.0.35:5000/rancher/calico-pod2daemon-flexvol:v3.16.5 weave_node: 192.168.0.35:5000/weaveworks/weave-kube:2.7.0 weave_cni: 192.168.0.35:5000/weaveworks/weave-npc:2.7.0 pod_infra_container: 192.168.0.35:5000/rancher/pause:3.2 ingress: 192.168.0.35:5000/rancher/nginx-ingress-controller:nginx-0.35.0-rancher2 ingress_backend: 192.168.0.35:5000/rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 metrics_server: 192.168.0.35:5000/rancher/metrics-server:v0.3.6 windows_pod_infra_container: 192.168.0.35:5000/rancher/kubelet-pause:v0.1.4 aci_cni_deploy_container: 192.168.0.35:5000/noiro/cnideploy:5.1.1.0.1ae238a aci_host_container: 192.168.0.35:5000/noiro/aci-containers-host:5.1.1.0.1ae238a aci_opflex_container: 192.168.0.35:5000/noiro/opflex:5.1.1.0.1ae238a aci_mcast_container: 192.168.0.35:5000/noiro/opflex:5.1.1.0.1ae238a aci_ovs_container: 192.168.0.35:5000/noiro/openvswitch:5.1.1.0.1ae238a aci_controller_container: 192.168.0.35:5000/noiro/aci-containers-controller:5.1.1.0.1ae238a aci_gbp_server_container: 192.168.0.35:5000/noiro/gbp-server:5.1.1.0.1ae238a aci_opflex_server_container: 192.168.0.35:5000/noiro/opflex-server:5.1.1.0.1ae238a ssh_key_path: ~/.ssh/id_rsa ssh_cert_path: "" ssh_agent_auth: false authorization: mode: rbac options: {} ignore_docker_version: false kubernetes_version: "" private_registries:

spoiler for Metrics Server manifest: kubectl describe pods metrics-server-5b6d79d4f4-ggl57 -n kube-system ``` Name: metrics-server-5b6d79d4f4-ggl57 Namespace: kube-system Priority: 2000000000 Priority Class Name: system-cluster-critical Node: 192.168.0.59/192.168.0.59 Start Time: Tue, 17 Aug 2021 00:00:43 +0100 Labels: k8s-app=metrics-server pod-template-hash=5b6d79d4f4 Annotations: cni.projectcalico.org/podIP: 10.42.4.3/32 cni.projectcalico.org/podIPs: 10.42.4.3/32 Status: Running IP: 10.42.4.3 IPs: IP: 10.42.4.3 Controlled By: ReplicaSet/metrics-server-5b6d79d4f4 Containers: metrics-server: Container ID: docker://3d3fdc3746637ffff60afd26f711a44a5ead8d4e4156741e4267b207fc04b08a Image: 192.168.0.35:5000/rancher/metrics-server:v0.3.6 Image ID: docker-pullable://192.168.0.35:5000/rancher/metrics-server@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b Port: 4443/TCP Host Port: 0/TCP Args: --cert-dir=/tmp --secure-port=4443 --kubelet-insecure-tls --kubelet-preferred-address-types=InternalIP --logtostderr State: Running Started: Tue, 17 Aug 2021 07:31:36 +0100 Last State: Terminated Reason: Error Exit Code: 2 Started: Tue, 17 Aug 2021 07:25:58 +0100 Finished: Tue, 17 Aug 2021 07:26:27 +0100 Ready: False Restart Count: 152 Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3 Readiness: http-get https://:https/readyz delay=0s timeout=1s period=10s #success=1 #failure=3 Environment: Mounts: /tmp from tmp-dir (rw) /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-78b6h (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: tmp-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: metrics-server-token-78b6h: Type: Secret (a volume populated by a Secret) SecretName: metrics-server-token-78b6h Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: :NoExecuteop=Exists :NoScheduleop=Exists Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 30m (x142 over 7h30m) kubelet Container image "192.168.0.35:5000/rancher/metrics-server:v0.3.6" already present on machine Warning Unhealthy 5m41s (x449 over 7h30m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 404 Warning BackOff 37s (x1816 over 7h28m) kubelet Back-off restarting failed container [rke@rke19-master1 ~]$ kubectl get pods metrics-server-5b6d79d4f4-ggl57 -n kube-system -o yaml apiVersion: v1 kind: Pod metadata: annotations: cni.projectcalico.org/podIP: 10.42.4.3/32 cni.projectcalico.org/podIPs: 10.42.4.3/32 creationTimestamp: "2021-08-16T23:00:42Z" generateName: metrics-server-5b6d79d4f4- labels: k8s-app: metrics-server pod-template-hash: 5b6d79d4f4 managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:generateName: {} f:labels: .: {} f:k8s-app: {} f:pod-template-hash: {} f:ownerReferences: .: {} k:{"uid":"fb15b257-4a9d-478b-b461-8b61c165e3db"}: .: {} f:apiVersion: {} f:blockOwnerDeletion: {} f:controller: {} f:kind: {} f:name: {} f:uid: {} f:spec: f:affinity: .: {} f:nodeAffinity: .: {} f:requiredDuringSchedulingIgnoredDuringExecution: .: {} f:nodeSelectorTerms: {} f:containers: k:{"name":"metrics-server"}: .: {} f:args: {} f:image: {} f:imagePullPolicy: {} f:livenessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:name: {} f:ports: .: {} k:{"containerPort":4443,"protocol":"TCP"}: .: {} 
f:containerPort: {} f:name: {} f:protocol: {} f:readinessProbe: .: {} f:failureThreshold: {} f:httpGet: .: {} f:path: {} f:port: {} f:scheme: {} f:periodSeconds: {} f:successThreshold: {} f:timeoutSeconds: {} f:resources: {} f:securityContext: .: {} f:readOnlyRootFilesystem: {} f:runAsNonRoot: {} f:runAsUser: {} f:terminationMessagePath: {} f:terminationMessagePolicy: {} f:volumeMounts: .: {} k:{"mountPath":"/tmp"}: .: {} f:mountPath: {} f:name: {} f:dnsPolicy: {} f:enableServiceLinks: {} f:priorityClassName: {} f:restartPolicy: {} f:schedulerName: {} f:securityContext: {} f:serviceAccount: {} f:serviceAccountName: {} f:terminationGracePeriodSeconds: {} f:tolerations: {} f:volumes: .: {} k:{"name":"tmp-dir"}: .: {} f:emptyDir: {} f:name: {} manager: kube-controller-manager operation: Update time: "2021-08-16T23:00:42Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:cni.projectcalico.org/podIP: {} f:cni.projectcalico.org/podIPs: {} manager: calico operation: Update time: "2021-08-16T23:00:47Z" - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:status: f:conditions: k:{"type":"ContainersReady"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} k:{"type":"Initialized"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:status: {} f:type: {} k:{"type":"Ready"}: .: {} f:lastProbeTime: {} f:lastTransitionTime: {} f:message: {} f:reason: {} f:status: {} f:type: {} f:containerStatuses: {} f:hostIP: {} f:phase: {} f:podIP: {} f:podIPs: .: {} k:{"ip":"10.42.4.3"}: .: {} f:ip: {} f:startTime: {} manager: kubelet operation: Update time: "2021-08-16T23:00:54Z" name: metrics-server-5b6d79d4f4-ggl57 namespace: kube-system ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: metrics-server-5b6d79d4f4 uid: fb15b257-4a9d-478b-b461-8b61c165e3db resourceVersion: "81770" selfLink: /api/v1/namespaces/kube-system/pods/metrics-server-5b6d79d4f4-ggl57 uid: af8d4e07-aa3f-4efe-8169-feb37cfd97df spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/os operator: NotIn values: - windows - key: node-role.kubernetes.io/worker operator: Exists containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP - --logtostderr image: 192.168.0.35:5000/rancher/metrics-server:v0.3.6 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: {} securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /tmp name: tmp-dir - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: metrics-server-token-78b6h readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: 192.168.0.59 preemptionPolicy: PreemptLowerPriority priority: 2000000000 priorityClassName: system-cluster-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: metrics-server serviceAccountName: metrics-server terminationGracePeriodSeconds: 30 tolerations: - 
effect: NoExecute operator: Exists - effect: NoSchedule operator: Exists volumes: - emptyDir: {} name: tmp-dir - name: metrics-server-token-78b6h secret: defaultMode: 420 secretName: metrics-server-token-78b6h status: conditions: - lastProbeTime: null lastTransitionTime: "2021-08-16T23:00:43Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2021-08-16T23:00:43Z" message: 'containers with unready status: [metrics-server]' reason: ContainersNotReady status: "False" type: Ready - lastProbeTime: null lastTransitionTime: "2021-08-16T23:00:43Z" message: 'containers with unready status: [metrics-server]' reason: ContainersNotReady status: "False" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2021-08-16T23:00:43Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://33a6b92177c9dd3dcb72736b722dba38d76f7b9e94f5ac5785bd8fedcae77a99 image: 192.168.0.35:5000/rancher/metrics-server:v0.3.6 imageID: docker-pullable://192.168.0.35:5000/rancher/metrics-server@sha256:c9c4e95068b51d6b33a9dccc61875df07dc650abbf4ac1a19d58b4628f89288b lastState: terminated: containerID: docker://11d5db2ff08dbd3054b0fd383d2a4af9abcf3b9623dfc6f12b2a5da0dd3f59b1 exitCode: 2 finishedAt: "2021-08-17T06:44:27Z" reason: Error startedAt: "2021-08-17T06:43:58Z" name: metrics-server ready: false restartCount: 158 started: true state: running: startedAt: "2021-08-17T06:49:29Z" hostIP: 192.168.0.59 phase: Running podIP: 10.42.4.3 podIPs: - ip: 10.42.4.3 qosClass: BestEffort startTime: "2021-08-16T23:00:43Z" ```
spoiler for Kubelet config: [rke@rke19-master1 ~]$ cat kube_config_cluster.yml apiVersion: v1 kind: Config clusters: - cluster: api-version: v1 certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl4TURneE5qSXlORGN6T1ZvWERUTXhNRGd4TkRJeU5EY3pPVm93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1oMFdYeFNoVmtjCjJEaEl2T1lNdm16UklFSGtxRGh2dnJJeEk2S1RoNGdvRkVtZUE4TnMzMTJoU1g4SkxOK2huSlI1UGhvT1g5eTcKUzczTUhlR2l4MktaL1lodFBzRmdxTFQ2NjE3T1RwcHRuZFhvQXlTRWROODV0MDg2MVhCNnRNdHhpc3QrVWtBdQpZWkNQWmtibGNYcHJRWEZHT044WklteHQ2TWltdyswOTFkd1FNMWh4MmgwdzljcExzaXVPS1VHWEh1NDNITXpqClFLZlZJZGRZZjJudmxCdzV1a3AyYlREOWp0bUdkY0I4c0RvQnE0aU9FQzd3cVhqQ25OZ2ZVRlFUc3oyMnZWaTMKd21iZ3VaWFkvTlRTdzM5aFRuanFhMHpRZG5zOHJ3NkxiVGo0My9EN3EyMVZMNFdxMGZXOUpubTl3cDJYVlRSYgo4OVk3ZndNdmhoOENBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSjZKeUd1WkpzdU03SXA5ZDh4cGY4RjJoeDBUYllrcTVhd20KL09PY0FPbFFudUlXM2lwb1YvdFhqUVFNTTNUdzdrNk9PcXlGSFo1bGdwWkFkVjBmN0Rla3NYaVoxOEprUDRobApiZnZsWEtrdWVkaGlnQnhGM2VFbitXcmRqWlBneFJKUDVXNzVRZFhaaXdDMFpsYktGWG9BNW96b1lKNUVqZnYyCktZek95MHgrM25acU5yT3BxU3JaSndLVWNhZUMyNVJQY2hLSkptK2JXUFVFVE1XS25PQVRvUUVac0QzS0cxc1MKbDg0QWJTbitBSUZpTU5NNkpoVThDcDNBQXQvRUhPQ1ZHYXZ4SzYrRkI2M3dVNXc4Q1YrYm9rT1Z4RGw3U3IvZwpiSkF3ZkJFK05NNS9KaTFKTWxPWGZpR0ZxczN6KzM5VWxQWHpvL0FZYTc4UnRhMENUdHc9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: "https://192.168.0.58:6443" name: "local" contexts: - context: cluster: "local" user: "kube-admin-local" name: "local" current-context: "local" users: - name: "kube-admin-local" user: client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJTFFDOWZROVM1Mm93RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNVEE0TVRZeU1qUTNNemxhRncwek1UQTRNVFF5TWpRM05ESmFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXF1OGovUHo3akRVMjZXc1doaDRBNjk0OHR2SkUKNTAyT0pVYzQ3YThYeElsWWlNSkpwV0NYcHRSNnM0Uy9DdnMxM3pwYTk2S00vRk1yQ3JmTjJNem95VGh4eVlFZwpaL3NpYW9sMUZ2bk15VmZVM0xnaTc2d3F5VjVOY0VSUUx3Vnl0bGwzNEJSNC96YUowcmIrcHh4em84ZjRwckVqCm56RFBhZlJ3OXI2dUtqaWRrb0JFNTlIRmZzdjVhZyt2UDVleDRvYnBRT3gzYVN0TnpBcFBuZzltU041RG1LbGoKblZtVjNhT2VEZjVXZ05JY2JvWHRWUEt4cDFkWWJPOHI2dnZrNHFDdTc3bUtwc3FKWXJiMCtFdlZkdDFSb24zaQp6UTBTcDlhTEFzdThjN0xrY2hMNnBra2FlbVNMb29jTkpBMEVqK2ZnVXFVK29qRElRdTd3VkRRV0VRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFGdkNrN3FVUlhtRVVSQ2wyVUh0dmFacXhWM2I0blJvRXp3b2F5S0E0emFuS1V4SQpkOHl2MHpoaFA4ckM5cUhDZFo0eUNoaml1MHU3OUNLczNVcmhXSDY1MTdmWmxVTFZXNlNyM2xLcVdtSU9JcjAzCnp2OUpjdlJEYlJma3hWR1M1REtkYURUU25FU0JCVHNUNlpaTjYvVFBOdkg0a3pvSUh3cFFsQlVpTkZZWEFUbFQKM3FETjR4dEJ1Mk9oRU1lcEhBT00wbDRvWlhINVJPQXI1Q1MrVFU2eTREcmdIZjRDSElZN1MrS2RSdXFhb3MyZApUTFV6VXo3TkV1dFE2eVR4V2htR1N2NjRjN3U2QVpXQ1J3Y3VwVnRTcG04L0FUOEVTK2hPYnJYWElWRDh0VXl1CmpLdzd3TnVUc29yaGhQeldIQmFibVJSWW1HYlp3dzJsNFRGSFBWMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= client-key-data: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcGdJQkFBS0NBUUVBcXU4ai9QejdqRFUyNldzV2hoNEE2OTQ4dHZKRTUwMk9KVWM0N2E4WHhJbFlpTUpKCnBXQ1hwdFI2czRTL0N2czEzenBhOTZLTS9GTXJDcmZOMk16b3lUaHh5WUVnWi9zaWFvbDFGdm5NeVZmVTNMZ2kKNzZ3cXlWNU5jRVJRTHdWeXRsbDM0QlI0L3phSjByYitweHh6bzhmNHByRWpuekRQYWZSdzlyNnVLamlka29CRQo1OUhGZnN2NWFnK3ZQNWV4NG9icFFPeDNhU3ROekFwUG5nOW1TTjVEbUtsam5WbVYzYU9lRGY1V2dOSWNib1h0ClZQS3hwMWRZYk84cjZ2dms0cUN1NzdtS3BzcUpZcmIwK0V2VmR0MVJvbjNpelEwU3A5YUxBc3U4YzdMa2NoTDYKcGtrYWVtU0xvb2NOSkEwRWorZmdVcVUrb2pESVF1N3dWRFFXRVFJREFRQUJBb0lCQVFDYmx3M0ZER25VRitRaAoxODRxeWtqQWFnd040cnlCWm9ES3dlZTV3alQ2T3FLUjZYZXJ4eDZEUnNsaGVxV0MwMk1ZREVBZFJLTGNVci9OCkE3MmxaKzlFcWRJNVB3WkdYN3ZXQ2NUQTR5UmE2VTNpa3VHS0U4Ym1nS1l3V0o0OER0TjUxRHBmaDRNVG00c2MKZUdHWHJ6ZzdqcHh3N3JDa0NJUGp5QkxESnBIVjd6RXZYbXJSN0Z0UHFEek94WkZtNEFwd29MRTZUR2YyWWxTZgptbFFaUkhIL1Jha3RGQUNRNjZkNjJDQkNmZm45cXVjS2VPRjdIV29CV0JUazQwQ2tRcHZYV29UTG9GWXYzK0h1CmtpbTR6Ly9BU2V6MnhyRVhSUFpaekZLcHBwOHVxV25qbkl3QmxmQVB0L3F1QS9QNWV1R1RKcUowZk5icHJMcHkKYUw2UTRHeWhBb0dCQU1RUkI4RGIxVHp2Qmxpam8xU3NmejUrUTNwRzN6ZUNHcG5EYWZ6WWE5QnFCMHlKVEZxcQppUE02L0NxQ1VhS1VlQi9CRmN3VGJkUTY5WFhzL1dyWU5mWXZqc1Jjc3BXNzhOVVNJdnNSYU4vK09Ya1FQSmtOCkUxVFRxUitxa3hKbWdZUTdGM1h6V05BWUJSSnMwTHhVbHVia1hENkRiQ0FDNTIwOHcxb2lZNFpsQW9HQkFOOHYKWjBvTkd4TU4xcWVFVVJPcmdla3dKdmNDcFEyUG1LMDZVUmFEZzdxb1hiQVRyTzhjTjdScXRvSnM4blFSaTYvbwpJdVNyZGlxVWFZeFhXTkdkVWdWWmlFby9HWFFZeVJBbUtVcFdBbmJKbURwZzBjV2dRd1BHamJkM1FGZWlPeXFlCmcraml3UG1XUmV1eU9IVUdGbURtTlVwZkV3Z1hEN3JkR0toSjc5QTlBb0dCQUk5NXZ1aThkZUN2TVQrd0Q1ZW8KMnp5Si9Tci9yZHphMGtodkhhSXZaVVlRTU9NckhickRUSkJoTzZLSDF1RllNRWRjYm16MlVzcVprb0lIT0xMMQpJUmZVV1c4TVBvc2dDdTZBNVNSQTZ6UHV2M1ArRTdvVVBXODNyRzFGejNZSm1RR0FsSHgxNVNueVNkUGYyU2ZYCjVzMXprcVVVV3cxWjBxeTNhR1VQQVRHWkFvR0JBTUEwbXNkek1mWGUzUlczSmZ2Q29FYXFhV1FncXZSYXppbWgKSjJRMExxWDVpWFd4L0NTUU1JajN2ZVhrM1lpSDg3eXlOaHFvYjBPTVBMbllIMjJtQnBVRTNoTFM5S0MvRjZrSQp0RmFJYStiUkJvQ0FFU2daTkoxenlXaFBFdUpsbkg2L3RPcERIZDNVUkxNTzhRQVhGZjZ0UXdlaGlVcFdVZjJqCm16Q1RQQ3doQW9HQkFMVEVYYUhad1JhK2JCa2lhN3lrUVN1QzI3elM3ZEFRYnMrMXVzbnp4a0ExQUNTcFQwdm0KY2lqanRnR0JWQkZaeTBRazlONERkUU1oOFBvTXJ2NnJUWnZ6NHNva3c1VG9HdGdjcjJZdHlBMDVrTDBUYzhPKwpNbmIxOFZBb1pGV0V3R2xJZHJrR3BqWW9SYzNTQ0xwNFJZMlpmeFFFQkFlalZRQng5aUNlWlN5bAotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=[rke@rke19-master1 ~]$
spoiler for Metrics Server logs:
```
[rke@rke19-master1 ~]$ kubectl logs metrics-server-5b6d79d4f4-ggl57 -n kube-system
I0817 06:56:00.095688 1 secure_serving.go:116] Serving securely on [::]:4443
```

/kind bug
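The describe output above contains the key clue: the kubelet's readiness probe gets HTTP 404 from /readyz even though the container itself reports it is serving on :4443. Independent of the probes, the following standard commands can confirm whether the Metrics API is actually usable; this is a sketch assuming a working admin kubeconfig and the default object names used by the RKE addon (deployment metrics-server in kube-system).

```sh
# Is the metrics API registered and reporting Available?
kubectl get apiservice v1beta1.metrics.k8s.io

# If it is available, this should return per-node CPU/memory usage
kubectl top nodes

# Which paths are the probes currently pointing at?
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].livenessProbe.httpGet.path}{"\n"}{.spec.template.spec.containers[0].readinessProbe.httpGet.path}{"\n"}'
```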

fosiul commented 3 years ago

This issue was resolved by changing:

Liveness: http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/readyz delay=0s timeout=1s period=10s #success=1 #failure=3

to:

Liveness: http-get https://:https/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get https://:https/healthz delay=0s timeout=1s period=10s #success=1 #failure=3

But when using metrics-server:v0.3.6, should it not use /healthz by default? Is there any reason I need to change this manually?
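For readers who need the same workaround: the 404 in the events above indicates that this v0.3.6 image does not serve /readyz, while /healthz does respond, which is why switching the probe paths stops the crash loop. A minimal probe configuration matching the fix described above might look like the following; the port name https and the other values are taken from the pod spec earlier in this issue.

```yaml
# Probe sketch mirroring the fix above: point both probes at /healthz,
# which metrics-server v0.3.x serves (port "https" maps to containerPort 4443).
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: https
    scheme: HTTPS
  periodSeconds: 10
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: https
    scheme: HTTPS
  periodSeconds: 10
```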

yangjunmyfm192085 commented 3 years ago

The official manifest file is here: https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
I think it is a bit different from what you are using.
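If you want to see exactly how the RKE-managed deployment deviates from that manifest, one rough way is to fetch the official file and compare it with what is running; a sketch, assuming internet access from the workstation and the default deployment name.

```sh
# Fetch the official v0.3.6 manifest referenced above
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

# Dump the RKE-deployed version and compare (the diff will be noisy,
# but the args and probe sections are easy to spot)
kubectl -n kube-system get deployment metrics-server -o yaml > deployed.yaml
diff components.yaml deployed.yaml
```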

yangjunmyfm192085 commented 3 years ago

@fosiul Is it solved by using the official manifest?

serathius commented 3 years ago

Metrics Server 0.3.x is no longer supported.
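For anyone landing on this issue later: since 0.3.x is unsupported, the usual path is to move to a current release rather than patch the old probes. A minimal sketch, assuming the cluster can pull from the public registry (or the image is mirrored into the private registry first) and that the kubelet flags discussed in this issue are still needed in the environment:

```sh
# Install the latest supported metrics-server release
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# If the kubelet serving certificates are not valid for the node addresses
# (common on RKE-built clusters), add the same flags discussed in this issue:
kubectl -n kube-system patch deployment metrics-server --type=json -p '[
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"},
  {"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-preferred-address-types=InternalIP"}
]'
```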