k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Unable to get pod metrics #1067

Closed wglambert closed 3 years ago

wglambert commented 4 years ago

Version:

$ k3s -v                                                                                                                                            
k3s version v1.0.0-rc1 (670d4b41)
k3s install:

```console
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.0-rc1 INSTALL_K3S_EXEC="server" sh -s - --docker --kube-apiserver-arg=enable-admission-plugins=LimitRanger
[INFO] Using v1.0.0-rc1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0-rc1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0-rc1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
$ export KUBECONFIG=$(locate k3s.yaml)
$ sudo chmod 777 $KUBECONFIG
```

Describe the bug
Pods won't display metrics with metrics-server installed from https://github.com/kubernetes-sigs/metrics-server. Horizontal Pod Autoscaling also fails to resolve metrics; however, `kubectl top nodes` resolves fine.

To Reproduce

Install and configure metrics-server:

```console
$ git clone https://github.com/kubernetes-incubator/metrics-server
Cloning into 'metrics-server'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 11349 (delta 0), reused 1 (delta 0), pack-reused 11345
Receiving objects: 100% (11349/11349), 12.18 MiB | 7.15 MiB/s, done.
Resolving deltas: 100% (5912/5912), done.
$ kubectl apply -f metrics-server/deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ kubectl edit deploy -n kube-system metrics-server
deployment.apps/metrics-server edited
$ kubectl get deploy -n kube-system metrics-server -o json | jq .spec.template.spec.containers[].args
[
  "--cert-dir=/tmp",
  "--secure-port=4443",
  "--kubelet-insecure-tls=true",
  "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"
]
```
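The `kubectl edit` step above is interactive; for reproducibility, the same two flags shown in the jq output can be appended non-interactively. A sketch using `kubectl patch` with a JSON Patch (assuming the metrics-server container is at index 0 and already has an `args` list, as the jq output suggests):

```console
$ kubectl patch deploy -n kube-system metrics-server --type=json -p '[
    {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls=true"},
    {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"}
  ]'
```

Note that `--kubelet-insecure-tls` disables verification of the kubelet's serving certificate, which is a common workaround on single-node test clusters but not something to carry into production.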
Verify functionality:

```console
$ kubectl get apiservices
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        3m50s
v1.admissionregistration.k8s.io        Local                        True        3m50s
v1beta1.admissionregistration.k8s.io   Local                        True        3m50s
v1beta1.apiextensions.k8s.io           Local                        True        3m50s
v1.apiextensions.k8s.io                Local                        True        3m50s
v1.apps                                Local                        True        3m50s
v1beta1.authentication.k8s.io          Local                        True        3m50s
v1.authentication.k8s.io               Local                        True        3m50s
v1.authorization.k8s.io                Local                        True        3m50s
v2beta1.autoscaling                    Local                        True        3m50s
v1beta1.authorization.k8s.io           Local                        True        3m50s
v2beta2.autoscaling                    Local                        True        3m50s
v1.batch                               Local                        True        3m50s
v1.autoscaling                         Local                        True        3m50s
v1beta1.batch                          Local                        True        3m50s
v1beta1.certificates.k8s.io            Local                        True        3m50s
v1.coordination.k8s.io                 Local                        True        3m50s
v1beta1.coordination.k8s.io            Local                        True        3m50s
v1beta1.events.k8s.io                  Local                        True        3m50s
v1.networking.k8s.io                   Local                        True        3m50s
v1beta1.extensions                     Local                        True        3m50s
v1beta1.networking.k8s.io              Local                        True        3m50s
v1beta1.policy                         Local                        True        3m50s
v1.rbac.authorization.k8s.io           Local                        True        3m50s
v1beta1.node.k8s.io                    Local                        True        3m50s
v1beta1.scheduling.k8s.io              Local                        True        3m50s
v1beta1.rbac.authorization.k8s.io      Local                        True        3m50s
v1.scheduling.k8s.io                   Local                        True        3m50s
v1.storage.k8s.io                      Local                        True        3m50s
v1beta1.storage.k8s.io                 Local                        True        3m50s
v1.k3s.cattle.io                       Local                        True        3m24s
v1.helm.cattle.io                      Local                        True        3m24s
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        93s

$ kubectl get po -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
local-path-provisioner-58fb86bdfd-6czp5   1/1     Running     0          3m46s
coredns-d798c9dd-x782x                    1/1     Running     0          3m46s
helm-install-traefik-fsj6q                0/1     Completed   0          3m46s
traefik-65bccdc4bd-pcmbb                  1/1     Running     0          2m20s
svclb-traefik-fbtj7                       3/3     Running     0          2m20s
metrics-server-b5655b66c-gjt75            1/1     Running     0          76s

$ kubectl top no
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ayanami   765m         9%     9449Mi          80%
```

Using https://www.digitalocean.com/community/tutorials/how-to-autoscale-your-workloads-on-digitalocean-kubernetes as a reference

Create a deployment with resource limits:

```console
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl edit deploy nginx
deployment.apps/nginx edited
$ kubectl get deploy nginx -o json | jq .spec.template.spec.containers[].resources
{
  "limits": {
    "cpu": "300m"
  },
  "requests": {
    "cpu": "100m",
    "memory": "250Mi"
  }
}
```
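As a non-interactive alternative to `kubectl edit`, the same limits and requests (values taken from the jq output above) can be set in one command:

```console
$ kubectl set resources deploy nginx --limits=cpu=300m --requests=cpu=100m,memory=250Mi
```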

Expected behavior
Pod metrics should be displayed; node metrics work fine.

Actual behavior
Some error snippets:

the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
---
horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Create an HPA and test pod metrics:

```console
$ kubectl autoscale deploy nginx --min=1 --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginx autoscaled
$ kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx   <unknown>/50%   1         5         1          2m32s
$ kubectl top po
W1113 16:18:41.392771    7082 top_pod.go:266] Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 2m20.39276264s
error: Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 2m20.39276264s
$ kubectl describe hpa nginx
Name:                                                  nginx
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Wed, 13 Nov 2019 16:18:33 -0800
Reference:                                             Deployment/nginx
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 50%
Min replicas:                                          1
Max replicas:                                          5
Deployment pods:                                       1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Events:
  Type     Reason                        Age                  From                       Message
  ----     ------                        ----                 ----                       -------
  Warning  FailedGetResourceMetric       4s (x10 over 2m20s)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  4s (x10 over 2m20s)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
$ journalctl -eu k3s | tail -n 2
Nov 13 16:20:05 Ayanami k3s[29018]: I1113 16:20:05.194601   29018 event.go:255] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"nginx", UID:"ee5c9d7b-9251-4563-bed8-ebd5103ea906", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1054", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: no metrics returned from resource metrics API
Nov 13 16:20:05 Ayanami k3s[29018]: I1113 16:20:05.194669   29018 event.go:255] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"nginx", UID:"ee5c9d7b-9251-4563-bed8-ebd5103ea906", APIVersion:"autoscaling/v2beta2", ResourceVersion:"1054", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
```

Also following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler

Walkthrough autoscale:

```console
$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/php-apache created
deployment.apps/php-apache created
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx        Deployment/nginx        <unknown>/50%   1         5         1          32m
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          23s
$ kubectl top no
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ayanami   867m         10%    9209Mi          78%
$ kubectl top po
W1113 16:53:11.127447   28186 top_pod.go:266] Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 36m50.127437786s
error: Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 36m50.127437786s
$ kubectl logs -n kube-system metrics-server-b5655b66c-gjt75 | tail -n 4
E1114 00:58:47.254980       1 reststorage.go:160] unable to fetch pod metrics for pod default/nginx-7bfff5fd9f-rbklh: no metrics known for pod
E1114 00:58:47.258093       1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-f6m8t: no metrics known for pod
E1114 00:59:02.267962       1 reststorage.go:160] unable to fetch pod metrics for pod default/nginx-7bfff5fd9f-rbklh: no metrics known for pod
E1114 00:59:02.279683       1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-f6m8t: no metrics known for pod
```
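One way to check whether the resource metrics API holds any pod data at all, independent of `kubectl top`, is to query it directly through the apiserver. A diagnostic sketch; an empty `items` list means metrics-server has not reported a single pod sample, which matches the "no metrics known for pod" log lines above:

```console
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .items
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.items[].metadata.name'
```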
erikwilson commented 4 years ago

Metrics server is already included with v1.0.0-rc1; if `kubectl top nodes` works, that is a good indication that it is working.

Looks like you need to include a namespace with `kubectl top pod`, e.g. `kubectl top pod -A`.

wglambert commented 4 years ago
```console
$ kubectl top pod -A
W1114 08:58:41.766611    3134 top_pod.go:266] Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 16h42m20.766604134s
error: Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 16h42m20.766604134s
```
wglambert commented 4 years ago

On v0.10.2 in a VirtualBox VM

Setup:

```console
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.10.2 INSTALL_K3S_EXEC="server" sh -s - --docker --kube-apiserver-arg=enable-admission-plugins=LimitRanger
[INFO] Using v0.10.2 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v0.10.2/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v0.10.2/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
$ export KUBECONFIG=$(locate k3s.yaml)
$ sudo chmod 777 $KUBECONFIG
$ git clone https://github.com/kubernetes-incubator/metrics-server
Cloning into 'metrics-server'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 11349 (delta 0), reused 1 (delta 0), pack-reused 11345
Receiving objects: 100% (11349/11349), 12.18 MiB | 1.30 MiB/s, done.
Resolving deltas: 100% (5913/5913), done.
$ kubectl apply -f metrics-server/deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ kubectl edit deploy -n kube-system metrics-server
deployment.apps/metrics-server edited
$ kubectl get deploy -n kube-system metrics-server -o json | jq .spec.template.spec.containers[].args
[
  "--cert-dir=/tmp",
  "--secure-port=4443",
  "--kubelet-insecure-tls=true",
  "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"
]
$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/php-apache created
deployment.apps/php-apache created
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
```console
$ k3s -v
k3s version v0.10.2 (8833bfd9)

$ kubectl top no
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k3s-1   235m         5%     1323Mi          19%

$ kubectl top po -A
W1114 09:28:44.143327   20025 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 6m1.143318282s
error: Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 6m1.143318282s
```
Probing:

```console
$ kubectl get apiservices
NAME                                   SERVICE                      AVAILABLE   AGE
v1beta1.apiextensions.k8s.io           Local                        True        8m42s
v1.                                    Local                        True        8m42s
v1.apiextensions.k8s.io                Local                        True        8m42s
v1beta1.admissionregistration.k8s.io   Local                        True        8m42s
v1.apps                                Local                        True        8m42s
v1.admissionregistration.k8s.io        Local                        True        8m42s
v1.authentication.k8s.io               Local                        True        8m42s
v1beta1.authentication.k8s.io          Local                        True        8m42s
v1.authorization.k8s.io                Local                        True        8m42s
v1beta1.authorization.k8s.io           Local                        True        8m42s
v1.autoscaling                         Local                        True        8m42s
v2beta1.autoscaling                    Local                        True        8m42s
v2beta2.autoscaling                    Local                        True        8m42s
v1beta1.coordination.k8s.io            Local                        True        8m42s
v1.batch                               Local                        True        8m42s
v1beta1.batch                          Local                        True        8m42s
v1beta1.certificates.k8s.io            Local                        True        8m42s
v1.coordination.k8s.io                 Local                        True        8m42s
v1.networking.k8s.io                   Local                        True        8m42s
v1beta1.extensions                     Local                        True        8m42s
v1beta1.events.k8s.io                  Local                        True        8m42s
v1beta1.networking.k8s.io              Local                        True        8m42s
v1beta1.node.k8s.io                    Local                        True        8m42s
v1.rbac.authorization.k8s.io           Local                        True        8m42s
v1beta1.policy                         Local                        True        8m42s
v1beta1.rbac.authorization.k8s.io      Local                        True        8m42s
v1.scheduling.k8s.io                   Local                        True        8m42s
v1beta1.storage.k8s.io                 Local                        True        8m42s
v1beta1.scheduling.k8s.io              Local                        True        8m42s
v1.k3s.cattle.io                       Local                        True        8m42s
v1.helm.cattle.io                      Local                        True        8m42s
v1.storage.k8s.io                      Local                        True        8m42s
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        7m51s

$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          6m43s

$ kubectl describe hpa
Name:                                                  php-apache
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Thu, 14 Nov 2019 09:22:47 -0800
Reference:                                             Deployment/php-apache
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  <unknown> / 50%
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Events:
  Type     Reason                        Age                     From                       Message
  ----     ------                        ----                    ----                       -------
  Warning  FailedComputeMetricsReplicas  3m49s (x12 over 6m36s)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       92s (x21 over 6m36s)    horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API

$ sudo journalctl -eu k3s | tail -n 2
Nov 14 09:29:53 k3s-1 k3s[8290]: I1114 09:29:53.043650    8290 event.go:255] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"php-apache", UID:"07bcb578-af57-4fea-b245-9ff46765c9a6", APIVersion:"autoscaling/v2beta2", ResourceVersion:"628", FieldPath:""}): type: 'Warning' reason: 'FailedGetResourceMetric' unable to get metrics for resource cpu: no metrics returned from resource metrics API
Nov 14 09:29:53 k3s-1 k3s[8290]: I1114 09:29:53.043747    8290 event.go:255] Event(v1.ObjectReference{Kind:"HorizontalPodAutoscaler", Namespace:"default", Name:"php-apache", UID:"07bcb578-af57-4fea-b245-9ff46765c9a6", APIVersion:"autoscaling/v2beta2", ResourceVersion:"628", FieldPath:""}): type: 'Warning' reason: 'FailedComputeMetricsReplicas' invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API

$ kubectl top po -A --v=10
I1114 09:30:41.849923   22021 loader.go:359] Config loaded from file /etc/rancher/k3s/k3s.yaml
I1114 09:30:41.850708   22021 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" -H "Authorization: Basic YWRtaW46MmQzYTY4YTI4NTE4ZDQ5MGQ1NWI5MTQxNWYwNjEyZGU=" 'https://127.0.0.1:6443/api?timeout=32s'
I1114 09:30:41.861103   22021 round_trippers.go:438] GET https://127.0.0.1:6443/api?timeout=32s 200 OK in 10 milliseconds
I1114 09:30:41.861172   22021 round_trippers.go:444] Response Headers:
I1114 09:30:41.861180   22021 round_trippers.go:447]     Content-Type: application/json
I1114 09:30:41.861205   22021 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:30:41 GMT
I1114 09:30:41.861216   22021 round_trippers.go:447]     Content-Length: 136
I1114 09:30:41.861226   22021 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:30:41.861312   22021 request.go:942] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"192.168.1.111:6443"}]}
I1114 09:30:41.861835   22021 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" -H "Authorization: Basic YWRtaW46MmQzYTY4YTI4NTE4ZDQ5MGQ1NWI5MTQxNWYwNjEyZGU=" 'https://127.0.0.1:6443/apis?timeout=32s'
I1114 09:30:41.862710   22021 round_trippers.go:438] GET https://127.0.0.1:6443/apis?timeout=32s 200 OK in 0 milliseconds
I1114 09:30:41.862732   22021 round_trippers.go:444] Response Headers:
I1114 09:30:41.862743   22021 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:30:41.862753   22021 round_trippers.go:447]     Content-Type: application/json
I1114 09:30:41.862772   22021 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:30:41 GMT
I1114 09:30:41.863205   22021 request.go:942] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[{"name":"apiregistration.k8s.io","versions":[{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"},{"groupVersion":"apiregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"}},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"}},{"name":"apps","versions":[{"groupVersion":"apps/v1","version":"v1"}],"preferredVersion":{"groupVersion":"apps/v1","version":"v1"}},{"name":"events.k8s.io","versions":[{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}},{"name":"authentication.k8s.io","versions":[{"groupVersion":"authentication.k8s.io/v1","version":"v1"},{"groupVersion":"authentication.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authentication.k8s.io/v1","version":"v1"}},{"name":"authorization.k8s.io","versions":[{"groupVersion":"authorization.k8s.io/v1","version":"v1"},{"groupVersion":"authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authorization.k8s.io/v1","version":"v1"}},{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"},{"groupVersion":"autoscaling/v2beta1","version":"v2beta1"},{"groupVersion":"autoscaling/v2beta2","version":"v2beta2"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"},{"groupVersion":"batch/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"}},{"name":"certificates.k8s.io","versions":[{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}},{"name":"networking.k8s.io","versions":[{"groupVersion":"networking.k8s.io/v1","version":"v1"},{"groupVersion":"networking.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}},{"name":"policy","versions":[{"groupVersion":"policy/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"policy/v1beta1","version":"v1beta1"}},{"name":"rbac.authorization.k8s.io","versions":[{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"},{"groupVersion":"rbac.authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"}},{"name":"storage.k8s.io","versions":[{"groupVersion":"storage.k8s.io/v1","version":"v1"},{"groupVersion":"storage.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"storage.k8s.io/v1","version":"v1"}},{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"},{"groupVersion":"admissionregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}},{"name":"apiextensions.k8s.io","versions":[{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"},{"groupVersion":"apiextensions.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"}},{"name":"scheduling.k8s.io","versions":[{"groupVersion":"scheduling.k8s.io/v1","version":"v1"},{"groupVersion":"scheduling.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"scheduling.k8s.io/v1","version":"v1"}},{"name":"coordination.k8s.io","versions":[{"groupVersion":"coordination.k8s.io/v1","version":"v1"},{"groupVersion":"coordination.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"coordination.k8s.io/v1","version":"v1"}},{"name":"node.k8s.io","versions":[{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}},{"name":"helm.cattle.io","versions":[{"groupVersion":"helm.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"helm.cattle.io/v1","version":"v1"}},{"name":"k3s.cattle.io","versions":[{"groupVersion":"k3s.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"k3s.cattle.io/v1","version":"v1"}},{"name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}]}
I1114 09:30:41.864175   22021 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" -H "Authorization: Basic YWRtaW46MmQzYTY4YTI4NTE4ZDQ5MGQ1NWI5MTQxNWYwNjEyZGU=" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/pods'
I1114 09:30:41.866458   22021 round_trippers.go:438] GET https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/pods 200 OK in 1 milliseconds
I1114 09:30:41.866488   22021 round_trippers.go:444] Response Headers:
I1114 09:30:41.866496   22021 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:30:41 GMT
I1114 09:30:41.866508   22021 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:30:41.866515   22021 round_trippers.go:447]     Content-Length: 135
I1114 09:30:41.866524   22021 round_trippers.go:447]     Content-Type: application/json
I1114 09:30:41.866585   22021 request.go:942] Response Body: {"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/pods"},"items":[]}
I1114 09:30:41.867576   22021 round_trippers.go:419] curl -k -v -XGET -H "Authorization: Basic YWRtaW46MmQzYTY4YTI4NTE4ZDQ5MGQ1NWI5MTQxNWYwNjEyZGU=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods'
I1114 09:30:41.869295   22021 round_trippers.go:438] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I1114 09:30:41.869325   22021 round_trippers.go:444] Response Headers:
I1114 09:30:41.869334   22021 round_trippers.go:447]     Content-Type: application/json
I1114 09:30:41.869341   22021 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:30:41 GMT
I1114 09:30:41.869348   22021 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:30:41.869417   22021 request.go:942] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"994"},"items":[{"metadata":{"name":"php-apache-79544c9bd9-lqvp7","generateName":"php-apache-79544c9bd9-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/php-apache-79544c9bd9-lqvp7","uid":"2157dbca-18a7-4530-8df7-f44374b1d266","resourceVersion":"723","creationTimestamp":"2019-11-14T17:22:43Z","labels":{"pod-template-hash":"79544c9bd9","run":"php-apache"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"php-apache-79544c9bd9","uid":"1a9df069-3e19-404f-b80f-ed02d5f35cce","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-5xsf4","secret":{"secretName":"default-token-5xsf4","defaultMode":420}}],"containers":[{"name":"php-apache","image":"k8s.gcr.io/hpa-example","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m"},"requests":{"cpu":"200m"}},"volumeMounts":[{"name":"default-token-5xsf4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k3s-1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:22:43Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:24:47Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:24:47Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:22:43Z"}],"hostIP":"192.168.1.111","podIP":"10.42.0.10","podIPs":[{"ip":"10.42.0.10"}],"startTime":"2019-11-14T17:22:43Z","containerStatuses":[{"name":"php-apache","state":{"running":{"startedAt":"2019-11-14T17:24:46Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/hpa-example:latest","imageID":"docker-pullable://k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544","containerID":"docker://8af6c6d310638195527c4cc946498b2055b427f0e72c89f767f186d9a7954102","started":true}],"qosClass":"Burstable"}}]}
W1114 09:30:41.880114   22021 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 7m58.880103089s
F1114 09:30:41.880184   22021 helpers.go:114] error: Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 7m58.880103089s
```
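The `--v=10` trace above shows the aggregation layer working (the `/apis/metrics.k8s.io/v1beta1/pods` request returns 200) but with an empty `PodMetricsList`, while the pod itself is Running. That points at metrics-server's scrape of the kubelet rather than at kubectl. A hedged way to see what the kubelet itself reports (node name `k3s-1` taken from the output above) is the kubelet stats summary, proxied through the apiserver:

```console
$ kubectl get --raw /api/v1/nodes/k3s-1/proxy/stats/summary | jq '.pods[].podRef.name'
```

If pod entries are missing or empty here too, the gap is between the kubelet/cAdvisor and metrics-server, not in the metrics API plumbing.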
wglambert commented 4 years ago

On v1.0.0-rc3 without installing metrics-server

Setup:

```console
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.0-rc3 INSTALL_K3S_EXEC="server" sh -s - --docker --kube-apiserver-arg=enable-admission-plugins=LimitRanger
[INFO] Using v1.0.0-rc3 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0-rc3/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0-rc3/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
$ export KUBECONFIG=$(locate k3s.yaml)
$ sudo chmod 777 $KUBECONFIG
$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/php-apache created
deployment.apps/php-apache created
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
```
```console
$ k3s -v
k3s version v1.0.0-rc3 (4a267279)

$ kubectl top no
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k3s-1   295m         7%     1422Mi          21%

$ kubectl top po -A
W1114 09:39:29.892289   31589 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 2m18.892281016s
error: Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 2m18.892281016s
```
Probing:

```console
$ kubectl top po -A --v=10
I1114 09:43:02.208319    1146 loader.go:359] Config loaded from file /etc/rancher/k3s/k3s.yaml
I1114 09:43:02.210216    1146 round_trippers.go:419] curl -k -v -XGET -H "Authorization: Basic YWRtaW46ODhlYTQ3NGM2ODA2ZWZlNTU1OWVlNDIxMTAwYTkyZGM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" 'https://127.0.0.1:6443/api?timeout=32s'
I1114 09:43:02.220756    1146 round_trippers.go:438] GET https://127.0.0.1:6443/api?timeout=32s 200 OK in 10 milliseconds
I1114 09:43:02.220784    1146 round_trippers.go:444] Response Headers:
I1114 09:43:02.220793    1146 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:43:02.220816    1146 round_trippers.go:447]     Content-Type: application/json
I1114 09:43:02.220828    1146 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:43:02 GMT
I1114 09:43:02.220840    1146 round_trippers.go:447]     Content-Length: 136
I1114 09:43:02.220923    1146 request.go:942] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"192.168.1.111:6443"}]}
I1114 09:43:02.221302    1146 round_trippers.go:419] curl -k -v -XGET -H "Authorization: Basic YWRtaW46ODhlYTQ3NGM2ODA2ZWZlNTU1OWVlNDIxMTAwYTkyZGM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" 'https://127.0.0.1:6443/apis?timeout=32s'
I1114 09:43:02.222061    1146 round_trippers.go:438] GET https://127.0.0.1:6443/apis?timeout=32s 200 OK in 0 milliseconds
I1114 09:43:02.222081    1146 round_trippers.go:444] Response Headers:
I1114 09:43:02.222086    1146 round_trippers.go:447]     Content-Type: application/json
I1114 09:43:02.222090    1146 round_trippers.go:447]     Date: Thu, 14 Nov 2019 17:43:02 GMT
I1114 09:43:02.222107    1146 round_trippers.go:447]     Cache-Control: no-cache, private
I1114 09:43:02.222172    1146 request.go:942] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[{"name":"apiregistration.k8s.io","versions":[{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"},{"groupVersion":"apiregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"}},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"}},{"name":"apps","versions":[{"groupVersion":"apps/v1","version":"v1"}],"preferredVersion":{"groupVersion":"apps/v1","version":"v1"}},{"name":"events.k8s.io","versions":[{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}},{"name":"authentication.k8s.io","versions":[{"groupVersion":"authentication.k8s.io/v1","version":"v1"},{"groupVersion":"authentication.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authentication.k8s.io/v1","version":"v1"}},{"name":"authorization.k8s.io","versions":[{"groupVersion":"authorization.k8s.io/v1","version":"v1"},{"groupVersion":"authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authorization.k8s.io/v1","version":"v1"}},{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"},{"groupVersion":"autoscaling/v2beta1","version":"v2beta1"},{"groupVersion":"autoscaling/v2beta2","version":"v2beta2"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"},{"groupVersion":"batch/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"}},{"name":"certificates.k8s.io","versions":[{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}},{"name":"networking.k8s.io","versions":[{"groupVersion":
"networking.k8s.io/v1","version":"v1"},{"groupVersion":"networking.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}},{"name":"policy","versions":[{"groupVersion":"policy/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"policy/v1beta1","version":"v1beta1"}},{"name":"rbac.authorization.k8s.io","versions":[{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"},{"groupVersion":"rbac.authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"}},{"name":"storage.k8s.io","versions":[{"groupVersion":"storage.k8s.io/v1","version":"v1"},{"groupVersion":"storage.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"storage.k8s.io/v1","version":"v1"}},{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"},{"groupVersion":"admissionregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}},{"name":"apiextensions.k8s.io","versions":[{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"},{"groupVersion":"apiextensions.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"}},{"name":"scheduling.k8s.io","versions":[{"groupVersion":"scheduling.k8s.io/v1","version":"v1"},{"groupVersion":"scheduling.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"scheduling.k8s.io/v1","version":"v1"}},{"name":"coordination.k8s.io","versions":[{"groupVersion":"coordination.k8s.io/v1","version":"v1"},{"groupVersion":"coordination.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"coordination.k8s.io/v1","version":"v1"}},{"name":"node.k8s.io","versions":[{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"node.k8s.io/v1beta1","version":"
v1beta1"}},{"name":"helm.cattle.io","versions":[{"groupVersion":"helm.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"helm.cattle.io/v1","version":"v1"}},{"name":"k3s.cattle.io","versions":[{"groupVersion":"k3s.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"k3s.cattle.io/v1","version":"v1"}},{"name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}]} I1114 09:43:02.222562 1146 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" -H "Authorization: Basic YWRtaW46ODhlYTQ3NGM2ODA2ZWZlNTU1OWVlNDIxMTAwYTkyZGM=" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/pods' I1114 09:43:02.224667 1146 round_trippers.go:438] GET https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/pods 200 OK in 2 milliseconds I1114 09:43:02.224697 1146 round_trippers.go:444] Response Headers: I1114 09:43:02.224702 1146 round_trippers.go:447] Content-Length: 135 I1114 09:43:02.224706 1146 round_trippers.go:447] Content-Type: application/json I1114 09:43:02.224709 1146 round_trippers.go:447] Date: Thu, 14 Nov 2019 17:43:02 GMT I1114 09:43:02.224713 1146 round_trippers.go:447] Cache-Control: no-cache, private I1114 09:43:02.224765 1146 request.go:942] Response Body: {"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/pods"},"items":[]} I1114 09:43:02.225602 1146 round_trippers.go:419] curl -k -v -XGET -H "Accept: application/json, */*" -H "Authorization: Basic YWRtaW46ODhlYTQ3NGM2ODA2ZWZlNTU1OWVlNDIxMTAwYTkyZGM=" -H "User-Agent: kubectl/v1.14.1 (linux/amd64) kubernetes/b739410" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods' I1114 09:43:02.227295 1146 round_trippers.go:438] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds I1114 09:43:02.227314 1146 
round_trippers.go:444] Response Headers: I1114 09:43:02.227319 1146 round_trippers.go:447] Cache-Control: no-cache, private I1114 09:43:02.227323 1146 round_trippers.go:447] Content-Type: application/json I1114 09:43:02.227327 1146 round_trippers.go:447] Date: Thu, 14 Nov 2019 17:43:02 GMT I1114 09:43:02.227359 1146 request.go:942] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"825"},"items":[{"metadata":{"name":"php-apache-79544c9bd9-s5x2z","generateName":"php-apache-79544c9bd9-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/php-apache-79544c9bd9-s5x2z","uid":"f9f6e407-6090-449b-a124-39f1f47e7cae","resourceVersion":"549","creationTimestamp":"2019-11-14T17:37:11Z","labels":{"pod-template-hash":"79544c9bd9","run":"php-apache"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"php-apache-79544c9bd9","uid":"aba9bf12-c046-45ea-87bc-f787fdde45f5","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-wll7k","secret":{"secretName":"default-token-wll7k","defaultMode":420}}],"containers":[{"name":"php-apache","image":"k8s.gcr.io/hpa-example","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m"},"requests":{"cpu":"200m"}},"volumeMounts":[{"name":"default-token-wll7k","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k3s-1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"
priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:37:11Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:37:17Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:37:17Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-11-14T17:37:11Z"}],"hostIP":"192.168.1.111","podIP":"10.42.0.8","podIPs":[{"ip":"10.42.0.8"}],"startTime":"2019-11-14T17:37:11Z","containerStatuses":[{"name":"php-apache","state":{"running":{"startedAt":"2019-11-14T17:37:16Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/hpa-example:latest","imageID":"docker-pullable://k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544","containerID":"docker://f725360bdb354fa19fdd2b027f245da8bcdc917bb90e76339f54eb099513aafe","started":true}],"qosClass":"Burstable"}}]} W1114 09:43:02.231750 1146 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 5m51.231744764s F1114 09:43:02.231786 1146 helpers.go:114] error: Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 5m51.231744764s $ kubectl logs -n kube-system metrics-server-6d684c7b5-rrpvr | tail -n 10 E1114 17:42:51.762201 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-s5x2z: no metrics known for pod E1114 17:43:02.223767 1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/local-path-provisioner-58fb86bdfd-w6rj2: no metrics known for pod E1114 17:43:02.223796 1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/traefik-65bccdc4bd-vc27h: no metrics known for pod E1114 17:43:02.223804 1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/svclb-traefik-xmkxn: no metrics known for pod E1114 17:43:02.223810 1 
reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-s5x2z: no metrics known for pod E1114 17:43:02.223818 1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/coredns-d798c9dd-hbsw7: no metrics known for pod E1114 17:43:02.223824 1 reststorage.go:160] unable to fetch pod metrics for pod kube-system/metrics-server-6d684c7b5-rrpvr: no metrics known for pod E1114 17:43:06.409970 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-s5x2z: no metrics known for pod E1114 17:43:21.771053 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-s5x2z: no metrics known for pod E1114 17:43:36.413070 1 reststorage.go:160] unable to fetch pod metrics for pod default/php-apache-79544c9bd9-s5x2z: no metrics known for pod $ kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE php-apache Deployment/php-apache /50% 1 10 1 6m37s $ kubectl describe hpa php-apache Name: php-apache Namespace: default Labels: Annotations: CreationTimestamp: Thu, 14 Nov 2019 09:37:17 -0800 Reference: Deployment/php-apache Metrics: ( current / target ) resource cpu on pods (as a percentage of request): / 50% Min replicas: 1 Max replicas: 10 Deployment pods: 1 current / 0 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: no metrics returned from resource metrics API Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedComputeMetricsReplicas 3m39s (x12 over 6m27s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API Warning FailedGetResourceMetric 83s (x21 
over 6m27s) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from resource metrics API ```
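The `--v=10` trace above is actually the informative part: the aggregated API answers `200 OK`, but the `PodMetricsList` body carries an empty `items` array. In other words, the aggregation layer is wired up correctly and the failure is metrics-server having no samples for the pods, matching the `no metrics known for pod` lines in its log. A minimal sketch of telling those two failure modes apart from such a body (the JSON string here is a hypothetical stand-in echoing the response in the trace):

```shell
# Classify a metrics.k8s.io response body: empty items vs. real data.
# The sample below mirrors the empty PodMetricsList seen in the trace above.
response='{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","items":[]}'
if printf '%s' "$response" | grep -q '"items":\[\]'; then
  echo "metrics API reachable, but metrics-server has no samples yet"
else
  echo "metrics API reachable and returning samples"
fi
```

The same body can be reproduced outside of `kubectl top` with `kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods`, which makes it easier to see whether the problem is the API plumbing or the data collection.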
erikwilson commented 4 years ago

What OS are you using? Does `curl https://raw.githubusercontent.com/rancher/k3s/master/contrib/util/check-config.sh | sh -` give any helpful info about missing kernel modules?

wglambert commented 4 years ago

On the VM: Ubuntu 18.04.1. On the host (used for the first example): Ubuntu 18.04.3.

In the VM

```console $ curl https://raw.githubusercontent.com/rancher/k3s/master/contrib/util/check-config.sh | sh - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0 Verifying binaries in .: - sha256sum: sha256sums unavailable 100 12682 100 12682 0 0 2252 0 0:00:05 0:00:05 --:--:-- 3081 - links: link list unavailable System: - /sbin iptables v1.6.1: older than v1.8 - swap: should be disabled - routes: ok Limits: - /proc/sys/kernel/keys/root_maxkeys: 1000000 modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-50-generic info: reading kernel config from /boot/config-4.15.0-50-generic ... Generally Necessary: - cgroup hierarchy: properly mounted [/sys/fs/cgroup] - apparmor: enabled and tools installed - CONFIG_NAMESPACES: enabled - CONFIG_NET_NS: enabled - CONFIG_PID_NS: enabled - CONFIG_IPC_NS: enabled - CONFIG_UTS_NS: enabled - CONFIG_CGROUPS: enabled - CONFIG_CGROUP_CPUACCT: enabled - CONFIG_CGROUP_DEVICE: enabled - CONFIG_CGROUP_FREEZER: enabled - CONFIG_CGROUP_SCHED: enabled - CONFIG_CPUSETS: enabled - CONFIG_MEMCG: enabled - CONFIG_KEYS: enabled - CONFIG_VETH: enabled (as module) - CONFIG_BRIDGE: enabled (as module) - CONFIG_BRIDGE_NETFILTER: enabled (as module) - CONFIG_NF_NAT_IPV4: enabled (as module) - CONFIG_IP_NF_FILTER: enabled (as module) - CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module) - CONFIG_IP_NF_NAT: enabled (as module) - CONFIG_NF_NAT: enabled (as module) - CONFIG_NF_NAT_NEEDED: enabled - CONFIG_POSIX_MQUEUE: enabled Optional Features: - CONFIG_USER_NS: enabled - CONFIG_SECCOMP: enabled - CONFIG_CGROUP_PIDS: enabled - CONFIG_BLK_CGROUP: enabled - CONFIG_BLK_DEV_THROTTLING: enabled - CONFIG_CGROUP_PERF: enabled - CONFIG_CGROUP_HUGETLB: enabled - CONFIG_NET_CLS_CGROUP: 
enabled (as module) - CONFIG_CGROUP_NET_PRIO: enabled - CONFIG_CFS_BANDWIDTH: enabled - CONFIG_FAIR_GROUP_SCHED: enabled - CONFIG_RT_GROUP_SCHED: missing - CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module) - CONFIG_IP_VS: enabled (as module) - CONFIG_IP_VS_NFCT: enabled - CONFIG_IP_VS_PROTO_TCP: enabled - CONFIG_IP_VS_PROTO_UDP: enabled - CONFIG_IP_VS_RR: enabled (as module) - CONFIG_EXT4_FS: enabled - CONFIG_EXT4_FS_POSIX_ACL: enabled - CONFIG_EXT4_FS_SECURITY: enabled - Network Drivers: - "overlay": - CONFIG_VXLAN: enabled (as module) Optional (for encrypted networks): - CONFIG_CRYPTO: enabled - CONFIG_CRYPTO_AEAD: enabled - CONFIG_CRYPTO_GCM: enabled - CONFIG_CRYPTO_SEQIV: enabled - CONFIG_CRYPTO_GHASH: enabled - CONFIG_XFRM: enabled - CONFIG_XFRM_USER: enabled (as module) - CONFIG_XFRM_ALGO: enabled (as module) - CONFIG_INET_ESP: enabled (as module) - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled (as module) - Storage Drivers: - "overlay": - CONFIG_OVERLAY_FS: enabled (as module) STATUS: pass ```

On the host

```console $ curl https://raw.githubusercontent.com/rancher/k3s/master/contrib/util/check-config.sh | sh - % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Verifying binaries in .: - sha256sum: sha256sums unavailable - links: link list unavailable System: 100 12682 100 12682 0 0 115k 0 --:--:-- --:--:-- --:--:-- 115k - /sbin iptables v1.6.1: older than v1.8 - swap: should be disabled - routes: ok Limits: - /proc/sys/kernel/keys/root_maxkeys: 1000000 modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-65-generic info: reading kernel config from /boot/config-4.15.0-65-generic ... Generally Necessary: - cgroup hierarchy: properly mounted [/sys/fs/cgroup] - /sbin/apparmor_parser apparmor: enabled and tools installed - CONFIG_NAMESPACES: enabled - CONFIG_NET_NS: enabled - CONFIG_PID_NS: enabled - CONFIG_IPC_NS: enabled - CONFIG_UTS_NS: enabled - CONFIG_CGROUPS: enabled - CONFIG_CGROUP_CPUACCT: enabled - CONFIG_CGROUP_DEVICE: enabled - CONFIG_CGROUP_FREEZER: enabled - CONFIG_CGROUP_SCHED: enabled - CONFIG_CPUSETS: enabled - CONFIG_MEMCG: enabled - CONFIG_KEYS: enabled - CONFIG_VETH: enabled (as module) - CONFIG_BRIDGE: enabled (as module) - CONFIG_BRIDGE_NETFILTER: enabled (as module) - CONFIG_NF_NAT_IPV4: enabled (as module) - CONFIG_IP_NF_FILTER: enabled (as module) - CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module) - CONFIG_NETFILTER_XT_MATCH_IPVS: enabled (as module) - CONFIG_IP_NF_NAT: enabled (as module) - CONFIG_NF_NAT: enabled (as module) - CONFIG_NF_NAT_NEEDED: enabled - CONFIG_POSIX_MQUEUE: enabled Optional Features: - CONFIG_USER_NS: enabled - CONFIG_SECCOMP: enabled - CONFIG_CGROUP_PIDS: enabled - CONFIG_BLK_CGROUP: enabled - CONFIG_BLK_DEV_THROTTLING: enabled - CONFIG_CGROUP_PERF: enabled - CONFIG_CGROUP_HUGETLB: enabled - 
CONFIG_NET_CLS_CGROUP: enabled (as module) - CONFIG_CGROUP_NET_PRIO: enabled - CONFIG_CFS_BANDWIDTH: enabled - CONFIG_FAIR_GROUP_SCHED: enabled - CONFIG_RT_GROUP_SCHED: missing - CONFIG_IP_NF_TARGET_REDIRECT: enabled (as module) - CONFIG_IP_VS: enabled (as module) - CONFIG_IP_VS_NFCT: enabled - CONFIG_IP_VS_PROTO_TCP: enabled - CONFIG_IP_VS_PROTO_UDP: enabled - CONFIG_IP_VS_RR: enabled (as module) - CONFIG_EXT4_FS: enabled - CONFIG_EXT4_FS_POSIX_ACL: enabled - CONFIG_EXT4_FS_SECURITY: enabled - Network Drivers: - "overlay": - CONFIG_VXLAN: enabled (as module) Optional (for encrypted networks): - CONFIG_CRYPTO: enabled - CONFIG_CRYPTO_AEAD: enabled - CONFIG_CRYPTO_GCM: enabled - CONFIG_CRYPTO_SEQIV: enabled - CONFIG_CRYPTO_GHASH: enabled - CONFIG_XFRM: enabled - CONFIG_XFRM_USER: enabled (as module) - CONFIG_XFRM_ALGO: enabled (as module) - CONFIG_INET_ESP: enabled (as module) - CONFIG_INET_XFRM_MODE_TRANSPORT: enabled (as module) - Storage Drivers: - "overlay": - CONFIG_OVERLAY_FS: enabled (as module) STATUS: pass ```
wglambert commented 4 years ago

I've just tested this on a simple bare-metal install using the same methodology as in k3s: metrics-server is able to fetch the pod information, and the Horizontal Pod Autoscaler works fine.

Install ```console $ sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni Reading package lists... Done Building dependency tree Reading state information... Done kubernetes-cni is already the newest version (0.7.5-00). kubernetes-cni set to manually installed. The following package was automatically installed and is no longer required: mariadb-common Use 'sudo apt autoremove' to remove it. The following packages will be upgraded: cri-tools kubeadm kubectl kubelet 4 upgraded, 0 newly installed, 0 to remove and 503 not upgraded. Need to get 47.5 MB of archives. After this operation, 1,082 kB of additional disk space will be used. Get:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 cri-tools amd64 1.13.0-00 [8,776 kB] Get:2 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubelet amd64 1.16.3-00 [20.7 MB] Get:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubectl amd64 1.16.3-00 [9,233 kB] Get:4 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.16.3-00 [8,762 kB] Fetched 47.5 MB in 5s (9,535 kB/s) (Reading database ... 188986 files and directories currently installed.) Preparing to unpack .../cri-tools_1.13.0-00_amd64.deb ... Unpacking cri-tools (1.13.0-00) over (1.12.0-00) ... Preparing to unpack .../kubelet_1.16.3-00_amd64.deb ... Unpacking kubelet (1.16.3-00) over (1.14.3-00) ... Preparing to unpack .../kubectl_1.16.3-00_amd64.deb ... Unpacking kubectl (1.16.3-00) over (1.14.1-00) ... Preparing to unpack .../kubeadm_1.16.3-00_amd64.deb ... Unpacking kubeadm (1.16.3-00) over (1.14.3-00) ... Setting up cri-tools (1.13.0-00) ... Setting up kubelet (1.16.3-00) ... Setting up kubectl (1.16.3-00) ... Setting up kubeadm (1.16.3-00) ... $ sudo kubeadm init [init] Using Kubernetes version: v1.16.3 [preflight] Running pre-flight checks [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
Please follow the guide at https://kubernetes.io/docs/setup/cri/ [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.2. Latest validated version: 18.09 [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Activating the kubelet service [certs] Using certificateDir folder "/etc/kubernetes/pki" [certs] Generating "ca" certificate and key [certs] Generating "apiserver" certificate and key [certs] apiserver serving cert is signed for DNS names [ayanami-clone kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.111] [certs] Generating "apiserver-kubelet-client" certificate and key [certs] Generating "front-proxy-ca" certificate and key [certs] Generating "front-proxy-client" certificate and key [certs] Generating "etcd/ca" certificate and key [certs] Generating "etcd/server" certificate and key [certs] etcd/server serving cert is signed for DNS names [ayanami-clone localhost] and IPs [192.168.1.111 127.0.0.1 ::1] [certs] Generating "etcd/peer" certificate and key [certs] etcd/peer serving cert is signed for DNS names [ayanami-clone localhost] and IPs [192.168.1.111 127.0.0.1 ::1] [certs] Generating "etcd/healthcheck-client" certificate and key [certs] Generating "apiserver-etcd-client" certificate and key [certs] Generating "sa" key and public key [kubeconfig] Using kubeconfig folder "/etc/kubernetes" [kubeconfig] Writing "admin.conf" kubeconfig file [kubeconfig] Writing "kubelet.conf" kubeconfig file [kubeconfig] Writing "controller-manager.conf" 
kubeconfig file [kubeconfig] Writing "scheduler.conf" kubeconfig file [control-plane] Using manifest folder "/etc/kubernetes/manifests" [control-plane] Creating static Pod manifest for "kube-apiserver" [control-plane] Creating static Pod manifest for "kube-controller-manager" [control-plane] Creating static Pod manifest for "kube-scheduler" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s [apiclient] All control plane components are healthy after 24.007413 seconds [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster [upload-certs] Skipping phase. Please see --upload-certs [mark-control-plane] Marking the node ayanami-clone as control-plane by adding the label "node-role.kubernetes.io/master=''" [mark-control-plane] Marking the node ayanami-clone as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule] [bootstrap-token] Using token: 5m4dum.c8jt1fe9tnaeubew [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [addons] Applied essential addon: CoreDNS [addons] Applied essential addon: kube-proxy Your Kubernetes control-plane has initialized successfully! 
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config $ sudo chown $(id -u):$(id -g) $HOME/.kube/config $ export KUBECONFIG=~/.kube/config $ kubectl taint node ayanami-clone node-role.kubernetes.io/master:NoSchedule- node/ayanami-clone untainted $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" serviceaccount/weave-net created clusterrole.rbac.authorization.k8s.io/weave-net created clusterrolebinding.rbac.authorization.k8s.io/weave-net created role.rbac.authorization.k8s.io/weave-net created rolebinding.rbac.authorization.k8s.io/weave-net created daemonset.apps/weave-net created ```
Apply kube-metrics, create pod & hpa ```console $ kubectl apply -f metrics-server/deploy/1.8+/ clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created serviceaccount/metrics-server created deployment.apps/metrics-server created service/metrics-server created clusterrole.rbac.authorization.k8s.io/system:metrics-server created clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created $ kubectl edit deploy -n kube-system metrics-server deployment.apps/metrics-server edited $ kubectl get deploy -n kube-system metrics-server -o json | jq .spec.template.spec.containers[].args[ "--cert-dir=/tmp", "--secure-port=4443", "--kubelet-insecure-tls", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname" ] $ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80 kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. service/php-apache created deployment.apps/php-apache created $ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 horizontalpodautoscaler.autoscaling/php-apache autoscaled ```
$ kubectl top no
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ayanami-clone   339m         8%     1412Mi          21%

$ kubectl top po
NAME                          CPU(cores)   MEMORY(bytes)   
php-apache-79544c9bd9-n8lqt   1m           9Mi

$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          103m

Generating load as described in https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#increase-load:

$ kubectl run -i --tty load-generator --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
. . .
$ kubectl get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   250%/50%   1         10        5          107m
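The jump from 1 to 5 replicas is consistent with the HPA's standard scaling rule, `desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)`. A quick sketch with the numbers from the `TARGETS` column above (250% current vs. 50% target):

```shell
# HPA replica math for the TARGETS column above: ceil(1 * 250 / 50) = 5.
current_replicas=1
current_utilization=250   # percent, left side of TARGETS
target_utilization=50     # percent, right side of TARGETS
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_utilization + target_utilization - 1) / target_utilization ))
echo "desired replicas: $desired"
```

This is also why the broken k3s case shows `<unknown>/50%` and no scaling: with no current utilization from the metrics API, the controller has nothing to feed into this calculation.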

$ kubectl top po -A
NAMESPACE     NAME                                    CPU(cores)   MEMORY(bytes)   
default       load-generator-5fb4fb465b-fjxsr         6m           1Mi             
default       php-apache-79544c9bd9-jwwq7             94m          12Mi            
default       php-apache-79544c9bd9-n8lqt             104m         12Mi            
default       php-apache-79544c9bd9-tpcdq             95m          12Mi            
default       php-apache-79544c9bd9-vnpcx             107m         12Mi            
default       php-apache-79544c9bd9-z89d9             94m          12Mi            
kube-system   coredns-5644d7b6d9-bc946                7m           7Mi             
kube-system   coredns-5644d7b6d9-jthlm                7m           7Mi             
kube-system   etcd-ayanami-clone                      19m          25Mi            
kube-system   kube-apiserver-ayanami-clone            44m          264Mi           
kube-system   kube-controller-manager-ayanami-clone   17m          38Mi            
kube-system   kube-proxy-9wfvb                        1m           11Mi            
kube-system   kube-scheduler-ayanami-clone            2m           12Mi            
kube-system   metrics-server-7557fbfb7d-r4zbh         2m           12Mi            
kube-system   weave-net-s4wzt                         1m           57Mi

$ kubectl top po
NAME                              CPU(cores)   MEMORY(bytes)   
load-generator-5fb4fb465b-fjxsr   6m           1Mi             
php-apache-79544c9bd9-jwwq7       94m          12Mi            
php-apache-79544c9bd9-n8lqt       104m         12Mi            
php-apache-79544c9bd9-tpcdq       95m          12Mi            
php-apache-79544c9bd9-vnpcx       107m         12Mi            
php-apache-79544c9bd9-z89d9       94m          12Mi
erikwilson commented 4 years ago

Would you mind trying k3s v1.0.0-rc4? That upgrades to Kubernetes 1.16.3, which appears to include some fixes for metrics-server.

wglambert commented 4 years ago

Same issue

Install and create pod/hpa ```console $ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.0-rc4 INSTALL_K3S_EXEC="server" sh -s - --docker [INFO] Using v1.0.0-rc4 as release [INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0-rc4/sha256sum-amd64.txt [INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0-rc4/k3s [INFO] Verifying binary download [INFO] Installing k3s to /usr/local/bin/k3s [INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl [INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl [INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr [INFO] Creating killall script /usr/local/bin/k3s-killall.sh [INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh [INFO] env: Creating environment file /etc/systemd/system/k3s.service.env [INFO] systemd: Creating service file /etc/systemd/system/k3s.service [INFO] systemd: Enabling k3s unit Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service. [INFO] systemd: Starting k3s $ export KUBECONFIG=$(locate k3s.yaml) $ sudo chmod 777 $KUBECONFIG $ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80 kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead. service/php-apache created deployment.apps/php-apache created $ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10 horizontalpodautoscaler.autoscaling/php-apache autoscaled ```
$ k3s -v
k3s version v1.0.0-rc4 (fe4b9caf)

$ kubectl top no
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ayanami   836m         10%    4704Mi          40%

$ kubectl top po
W1115 14:58:50.238592    5960 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-z8wsv, age: 5m19.238587172s
error: Metrics not available for pod default/php-apache-79544c9bd9-z8wsv, age: 5m19.238587172s

$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   <unknown>/50%   1         10        1          5m16s
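A TARGETS column of `<unknown>` means the HPA cannot read pod metrics from the metrics API. One way to check that API directly is to query it through the apiserver; a sketch, assuming the default metrics-server install:

```shell
# Ask the aggregated metrics API for pod metrics in the default namespace.
# An empty "items" list means metrics-server is registered but reporting no pods.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods" | jq '.items | length'
```

If this prints 0 while kubectl get pods shows running pods, the problem is on the metrics-server/kubelet side rather than in kubectl or the HPA controller.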
greenenergy commented 4 years ago

I would like to chime in -- I've installed a k3s cluster with three x86 agent nodes and a raspberry pi as the master. The agents I installed with: curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --node-label=istio=enabled --token-file=/home/cfox/kube/token.txt --server https://knodemaster.localdomain:6443" sh -

When I try 'top node' I get this:

$ kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

(top pod returns an identical error)
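A ServiceUnavailable error for the metrics group usually means the aggregated APIService is not Available. A quick check (a sketch; the APIService name comes from the standard metrics-server manifests):

```shell
# Show whether the metrics APIService is Available and, if not, why.
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io
```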

wglambert commented 4 years ago

Some more troubleshooting

$ kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-79544c9bd9-7bgm9   1/1     Running   0          108m

$ kubectl top po --v=10         
I1205 16:42:40.640845   12509 loader.go:375] Config loaded from file:  /etc/rancher/k3s/k3s.yaml
I1205 16:42:40.684173   12509 round_trippers.go:423] curl -k -v -XGET  -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" 'https://127.0.0.1:6443/api?timeout=32s'
I1205 16:42:40.700495   12509 round_trippers.go:443] GET https://127.0.0.1:6443/api?timeout=32s 200 OK in 16 milliseconds
I1205 16:42:40.700546   12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.700557   12509 round_trippers.go:452]     Cache-Control: no-cache, private
I1205 16:42:40.700567   12509 round_trippers.go:452]     Content-Type: application/json
I1205 16:42:40.700575   12509 round_trippers.go:452]     Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.700583   12509 round_trippers.go:452]     Content-Length: 135
I1205 16:42:40.700670   12509 request.go:968] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.29.0.218:6443"}]}
I1205 16:42:40.700993   12509 round_trippers.go:423] curl -k -v -XGET  -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" 'https://127.0.0.1:6443/apis?timeout=32s'
I1205 16:42:40.701604   12509 round_trippers.go:443] GET https://127.0.0.1:6443/apis?timeout=32s 200 OK in 0 milliseconds
I1205 16:42:40.701636   12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.701643   12509 round_trippers.go:452]     Content-Type: application/json
I1205 16:42:40.701649   12509 round_trippers.go:452]     Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.701655   12509 round_trippers.go:452]     Cache-Control: no-cache, private
I1205 16:42:40.701781   12509 request.go:968] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[{"name":"apiregistration.k8s.io","versions":[{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"},{"groupVersion":"apiregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"}},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"}},{"name":"apps","versions":[{"groupVersion":"apps/v1","version":"v1"}],"preferredVersion":{"groupVersion":"apps/v1","version":"v1"}},{"name":"events.k8s.io","versions":[{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}},{"name":"authentication.k8s.io","versions":[{"groupVersion":"authentication.k8s.io/v1","version":"v1"},{"groupVersion":"authentication.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authentication.k8s.io/v1","version":"v1"}},{"name":"authorization.k8s.io","versions":[{"groupVersion":"authorization.k8s.io/v1","version":"v1"},{"groupVersion":"authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authorization.k8s.io/v1","version":"v1"}},{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"},{"groupVersion":"autoscaling/v2beta1","version":"v2beta1"},{"groupVersion":"autoscaling/v2beta2","version":"v2beta2"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"},{"groupVersion":"batch/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"}},{"name":"certificates.k8s.io","versions":[{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta
1"}},{"name":"networking.k8s.io","versions":[{"groupVersion":"networking.k8s.io/v1","version":"v1"},{"groupVersion":"networking.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}},{"name":"policy","versions":[{"groupVersion":"policy/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"policy/v1beta1","version":"v1beta1"}},{"name":"rbac.authorization.k8s.io","versions":[{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"},{"groupVersion":"rbac.authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"}},{"name":"storage.k8s.io","versions":[{"groupVersion":"storage.k8s.io/v1","version":"v1"},{"groupVersion":"storage.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"storage.k8s.io/v1","version":"v1"}},{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"},{"groupVersion":"admissionregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}},{"name":"apiextensions.k8s.io","versions":[{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"},{"groupVersion":"apiextensions.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"}},{"name":"scheduling.k8s.io","versions":[{"groupVersion":"scheduling.k8s.io/v1","version":"v1"},{"groupVersion":"scheduling.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"scheduling.k8s.io/v1","version":"v1"}},{"name":"coordination.k8s.io","versions":[{"groupVersion":"coordination.k8s.io/v1","version":"v1"},{"groupVersion":"coordination.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"coordination.k8s.io/v1","version":"v1"}},{"name":"node.k8s.io","versions":[{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}],"prefer
redVersion":{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}},{"name":"helm.cattle.io","versions":[{"groupVersion":"helm.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"helm.cattle.io/v1","version":"v1"}},{"name":"k3s.cattle.io","versions":[{"groupVersion":"k3s.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"k3s.cattle.io/v1","version":"v1"}},{"name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}]}
I1205 16:42:40.702285   12509 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods'
I1205 16:42:40.703922   12509 round_trippers.go:443] GET https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods 200 OK in 1 milliseconds
I1205 16:42:40.703940   12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.703948   12509 round_trippers.go:452]     Cache-Control: no-cache, private
I1205 16:42:40.703955   12509 round_trippers.go:452]     Content-Length: 154
I1205 16:42:40.703962   12509 round_trippers.go:452]     Content-Type: application/json
I1205 16:42:40.703969   12509 round_trippers.go:452]     Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.704007   12509 request.go:968] Response Body: {"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"},"items":[]}
I1205 16:42:40.704803   12509 round_trippers.go:423] curl -k -v -XGET  -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods'
I1205 16:42:40.707003   12509 round_trippers.go:443] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I1205 16:42:40.707024   12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.707031   12509 round_trippers.go:452]     Content-Type: application/json
I1205 16:42:40.707037   12509 round_trippers.go:452]     Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.707043   12509 round_trippers.go:452]     Cache-Control: no-cache, private
I1205 16:42:40.707105   12509 request.go:968] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"5355"},"items":[{"metadata":{"name":"php-apache-79544c9bd9-7bgm9","generateName":"php-apache-79544c9bd9-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/php-apache-79544c9bd9-7bgm9","uid":"6c7e23da-1c50-440b-837a-9c176c0a8532","resourceVersion":"451","creationTimestamp":"2019-12-05T22:54:37Z","labels":{"pod-template-hash":"79544c9bd9","run":"php-apache"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"php-apache-79544c9bd9","uid":"4048c362-6ef0-4088-bc65-426a4c1806c9","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-j9gjl","secret":{"secretName":"default-token-j9gjl","defaultMode":420}}],"containers":[{"name":"php-apache","image":"k8s.gcr.io/hpa-example","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m"},"requests":{"cpu":"200m"}},"volumeMounts":[{"name":"default-token-j9gjl","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ayanami","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:52Z"},{"
type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:52Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:37Z"}],"hostIP":"172.29.0.218","podIP":"10.42.0.6","podIPs":[{"ip":"10.42.0.6"}],"startTime":"2019-12-05T22:54:37Z","containerStatuses":[{"name":"php-apache","state":{"running":{"startedAt":"2019-12-05T22:54:51Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/hpa-example:latest","imageID":"docker-pullable://k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544","containerID":"docker://8fb06b7db1dd94d78f0dd1bc6768210d41ab901e64fdcce012d35095f3c72301","started":true}],"qosClass":"Burstable"}}]}
W1205 16:42:40.717320   12509 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 1h48m3.717310451s
F1205 16:42:40.717355   12509 helpers.go:114] error: Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 1h48m3.717310451s

So kubectl does a curl -k -v . . . 'https://127.0.0.1:6443/api/v1/namespaces/default/pods' and gets the pod back in the response body, yet still reports that no metrics are available

Note that parsing the JSON in that response body only gives you the resource requests and limits from the pod spec, not live usage

$ curl -k -v -XGET  -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods' 2>/dev/null | jq .items[].spec.containers[].resources                     
{                                                                                                                                                                           
  "limits": {                                                                                                                                                               
    "cpu": "500m"                                                                                                                                                           
  },                                                                                                                                                                        
  "requests": {                                                                                                                                                             
    "cpu": "200m"                                                                                                                                                           
  }                                                                                                                                                                         
}
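For comparison, live usage has to come from the aggregated metrics API rather than the pod spec; a sketch reusing the same endpoint and Basic auth header from the trace above:

```shell
# Query the aggregated metrics API (not /api/v1) for live pod usage.
# On this broken k3s install it returns an empty "items" list.
curl -sk -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" \
  'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods' | jq .items
```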

The only real difference I noticed between these requests on the kubeadm install and on k3s is that the k3s kubectl sends a Basic Authorization header, whereas the kubeadm kubectl does not (both query the v1beta1 metrics API)

Working kubeadm install:

kubectl top po --v=10 ```console I1205 17:04:14.002731 3858 loader.go:375] Config loaded from file: /home/rei/.kube/config I1205 17:04:14.003829 3858 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae" 'https://172.29.0.90:6443/api?timeout=32s' I1205 17:04:14.011236 3858 round_trippers.go:443] GET https://172.29.0.90:6443/api?timeout=32s 200 OK in 7 milliseconds I1205 17:04:14.011260 3858 round_trippers.go:449] Response Headers: I1205 17:04:14.011266 3858 round_trippers.go:452] Cache-Control: no-cache, private I1205 17:04:14.011272 3858 round_trippers.go:452] Content-Type: application/json I1205 17:04:14.011277 3858 round_trippers.go:452] Content-Length: 134 I1205 17:04:14.011411 3858 round_trippers.go:452] Date: Fri, 06 Dec 2019 01:04:14 GMT I1205 17:04:14.011479 3858 request.go:968] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.29.0.90:6443"}]} I1205 17:04:14.011821 3858 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae" 'https://172.29.0.90:6443/apis?timeout=32s' I1205 17:04:14.012661 3858 round_trippers.go:443] GET https://172.29.0.90:6443/apis?timeout=32s 200 OK in 0 milliseconds I1205 17:04:14.012683 3858 round_trippers.go:449] Response Headers: I1205 17:04:14.012690 3858 round_trippers.go:452] Cache-Control: no-cache, private I1205 17:04:14.012704 3858 round_trippers.go:452] Content-Type: application/json I1205 17:04:14.012708 3858 round_trippers.go:452] Date: Fri, 06 Dec 2019 01:04:14 GMT I1205 17:04:14.012780 3858 request.go:968] Response Body: 
{"kind":"APIGroupList","apiVersion":"v1","groups":[{"name":"apiregistration.k8s.io","versions":[{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"},{"groupVersion":"apiregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"}},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"}},{"name":"apps","versions":[{"groupVersion":"apps/v1","version":"v1"}],"preferredVersion":{"groupVersion":"apps/v1","version":"v1"}},{"name":"events.k8s.io","versions":[{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}},{"name":"authentication.k8s.io","versions":[{"groupVersion":"authentication.k8s.io/v1","version":"v1"},{"groupVersion":"authentication.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authentication.k8s.io/v1","version":"v1"}},{"name":"authorization.k8s.io","versions":[{"groupVersion":"authorization.k8s.io/v1","version":"v1"},{"groupVersion":"authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authorization.k8s.io/v1","version":"v1"}},{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"},{"groupVersion":"autoscaling/v2beta1","version":"v2beta1"},{"groupVersion":"autoscaling/v2beta2","version":"v2beta2"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"},{"groupVersion":"batch/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"}},{"name":"certificates.k8s.io","versions":[{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}},{"name":"networking.k8s.io","versions":[{"groupVersion":
"networking.k8s.io/v1","version":"v1"},{"groupVersion":"networking.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}},{"name":"policy","versions":[{"groupVersion":"policy/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"policy/v1beta1","version":"v1beta1"}},{"name":"rbac.authorization.k8s.io","versions":[{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"},{"groupVersion":"rbac.authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"}},{"name":"storage.k8s.io","versions":[{"groupVersion":"storage.k8s.io/v1","version":"v1"},{"groupVersion":"storage.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"storage.k8s.io/v1","version":"v1"}},{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"},{"groupVersion":"admissionregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}},{"name":"apiextensions.k8s.io","versions":[{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"},{"groupVersion":"apiextensions.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"}},{"name":"scheduling.k8s.io","versions":[{"groupVersion":"scheduling.k8s.io/v1","version":"v1"},{"groupVersion":"scheduling.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"scheduling.k8s.io/v1","version":"v1"}},{"name":"coordination.k8s.io","versions":[{"groupVersion":"coordination.k8s.io/v1","version":"v1"},{"groupVersion":"coordination.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"coordination.k8s.io/v1","version":"v1"}},{"name":"node.k8s.io","versions":[{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"node.k8s.io/v1beta1","version":"
v1beta1"}},{"name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}]} I1205 17:04:14.013303 3858 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae" 'https://172.29.0.90:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods' I1205 17:04:14.014881 3858 round_trippers.go:443] GET https://172.29.0.90:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods 200 OK in 1 milliseconds I1205 17:04:14.014898 3858 round_trippers.go:449] Response Headers: I1205 17:04:14.014905 3858 round_trippers.go:452] Cache-Control: no-cache, private I1205 17:04:14.015008 3858 round_trippers.go:452] Content-Type: application/json I1205 17:04:14.015014 3858 round_trippers.go:452] Date: Fri, 06 Dec 2019 01:04:14 GMT I1205 17:04:14.015018 3858 round_trippers.go:452] Content-Length: 495 I1205 17:04:14.015034 3858 request.go:968] Response Body: {"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"},"items":[{"metadata":{"name":"php-apache-79544c9bd9-4hww7","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/php-apache-79544c9bd9-4hww7","creationTimestamp":"2019-12-06T01:04:14Z"},"timestamp":"2019-12-06T01:03:09Z","window":"30s","containers":[{"name":"php-apache","usage":{"cpu":"83146n","memory":"12896Ki"}}]}]} NAME CPU(cores) MEMORY(bytes) php-apache-79544c9bd9-4hww7 1m 12Mi ```

The kubeadm kubectl uses curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae" 'https://172.29.0.90:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods'

erikwilson commented 4 years ago

Is this an issue with the kubectl version? Does k3s kubectl work?

wglambert commented 4 years ago
$ k3s kubectl top po
W1205 17:14:51.706941   30517 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 2h20m14.706931617s                                    
error: Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 2h20m14.706931617s

$ k3s kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE                                                                                                              
php-apache-79544c9bd9-7bgm9   1/1     Running   0          140m
ffxgamer commented 4 years ago

I've got the same problem (#1149). Everything else is OK, but metrics cannot get the pod's CPU and memory values. Version: k3s version v1.0.0 (18bd921)

ofirmakmal commented 4 years ago

@erikwilson I just found out that the issue happens (to me at least) only when running in Docker mode (export INSTALL_K3S_EXEC="--docker"); otherwise it works.

Does that give you any pointers?

My docker version is 19.03.1.

The problem is that I must use Docker mode... :(

VladoPortos commented 4 years ago

Same issue:

pi@mastercube:~ $ k3s -v
k3s version v1.0.0 (18bd921c)

pi@mastercube:~ $ kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

Also some logs that might help

Dec 11 12:23:42 mastercube k3s[653]: W1211 12:23:42.226540     653 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Dec 11 12:23:44 mastercube k3s[653]: time="2019-12-11T12:23:44.786528541+01:00" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/coredns.yaml: failed to create kube-system/kube-dns /v1, Kind=Service for  kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.43.0.10\": provided IP is already allocated"
Dec 11 12:23:46 mastercube k3s[653]: E1211 12:23:46.720120     653 available_controller.go:416] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: 401
pi@mastercube:~ $ kubectl get service --all-namespaces
NAMESPACE     NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                                     AGE
kube-system   kube-dns         ClusterIP      10.43.0.10      <none>          53/UDP,53/TCP,9153/TCP                      5d18h
default       kubernetes       ClusterIP      10.43.0.1       <none>          443/TCP                                     5d18h
kube-system   metrics-server   ClusterIP      10.43.171.209   <none>          443/TCP                                     128m
kube-system   traefik          LoadBalancer   10.43.61.120    192.168.0.180   80:30901/TCP,443:31663/TCP,8080:31151/TCP   5d18h
pi@mastercube:~ $ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io                 kube-system/metrics-server   False (FailedDiscoveryCheck)   130m
Status:
  Conditions:
    Last Transition Time:  2019-12-11T09:19:13Z
    Message:               failing or missing response from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: 401
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:                    <none>
pi@mastercube:~ $ kubectl logs -n kube-system metrics-server-6d684c7b5-thg2b
ect with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:15.292780       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231467       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231586       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231787       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.239085       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.244691       1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]

Well, after a lot of time, the solution was adding the CN kubernetes-proxy to the allowed names, as described here: https://github.com/kubernetes-sigs/metrics-server/issues/292

pi@mastercube:~ $ kubectl top po
NAME                                      CPU(cores)   MEMORY(bytes)   
nfs-client-provisioner-6cf568d56b-nsnnj   6m           5Mi    
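The fix described above (allowing the cn=kubernetes-proxy subject) can be applied by patching the metrics-server deployment; a hedged sketch, with the flag taken from metrics-server's generic apiserver options and the value assumed from the log messages:

```shell
# Append --requestheader-allowed-names=kubernetes-proxy to the metrics-server args
# so requests proxied by the apiserver with that client CN are accepted.
kubectl -n kube-system patch deployment metrics-server --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--requestheader-allowed-names=kubernetes-proxy"}
]'
```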
rookieclover commented 4 years ago

Same issue on k3s version v1.0.0 in --docker mode: kubectl top node is OK, but kubectl top pod fails with "error: Metrics not available for pod"

rookieclover commented 4 years ago

Do you run the k3s server in Docker mode or not?

pierdobauce commented 4 years ago

Same issue with k3s in --docker mode. Any progress on this subject in the last 2 months? Thanks.

onedr0p commented 4 years ago

@pierdobauce it's hard to say who is to blame here, but I think it's more on metrics-server than k3s. There are a bunch of issues that were recently closed (not resolved) on their GitHub issue tracker.

https://github.com/kubernetes-sigs/metrics-server/issues?utf8=%E2%9C%93&q=is%3Aissue+sort%3Aupdated-desc+Metrics+not+available+for+pod

I think there should be a note in the docs that metrics-server is currently not supported when using the Docker CRI.

JoseThen commented 4 years ago

For those looking this may help you: https://github.com/kubernetes-sigs/metrics-server/issues/349

erikwilson commented 4 years ago

Is this still an issue for you in v1.18.2 @wglambert? Looks like it was fixed as part of https://github.com/rancher/k3s/issues/1554

wglambert commented 4 years ago

It seems to be working fine, following the docs here https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler
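For anyone following along, the linked walkthrough creates its HPA roughly like this (the php-apache deployment comes from that guide; the numbers match the output below):

```shell
# Create an HPA targeting 50% average CPU utilization,
# scaling the deployment between 1 and 10 replicas.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
```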

The php-apache deployment is able to return integer metrics, but trying it on a lone nginx pod doesn't work:

$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx        Deployment/nginx        <unknown>/50%   1         10        1          5m58s
php-apache   Deployment/php-apache   0%/50%          1         10        1          2m54s

journalctl logs for that nginx HPA:

failed to get cpu utilization: missing request for cpu
type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu
type: 'Warning' reason: 'FailedComputeMetricsReplicas' invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
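The "missing request for cpu" part points at the pod spec rather than metrics-server. One way to confirm (a hypothetical check, assuming the deployment is named nginx) is:

```shell
# Show whether the nginx container declares a CPU request;
# empty output means the HPA has nothing to divide usage by.
kubectl get deploy nginx \
  -o jsonpath='{.spec.template.spec.containers[*].resources.requests.cpu}'
```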

But kubectl top pod displays numbers for CPU and memory:

$ k3s -v
k3s version v1.18.2+k3s1 (698e444a)

$ kubectl top po
NAME                    CPU(cores)   MEMORY(bytes)
nginx-f89759699-k7tzh   0m           2Mi
php-apache              1m           9Mi

But it definitely works; this issue might just be an oddity with how that nginx pod was deployed.

$ kubectl get hpa
NAME         REFERENCE               TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginx        Deployment/nginx        <unknown>/50%   1         10        1          14m
php-apache   Deployment/php-apache   251%/50%        1         10        6          11m

brandond commented 4 years ago

It looks like you didn't specify a CPU resource request for that pod. 'top pods' only cares about what a pod is actually using, while the HPA compares utilization against requests to figure out whether it needs to scale up or down.

I've gone through and added cpu and memory requests to all my pods. Even if you're not using limits or autoscaling, the scheduler works better if it knows what to expect. The QoS stuff also requires requests (and limits, if you want to use the guaranteed class).
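As a sketch, the fix for the "missing request for cpu" error above is to give the container a CPU request (and optionally limits). The deployment name and values here are illustrative, not taken from the thread:

```yaml
# Hypothetical nginx deployment fragment: the HPA needs
# spec.containers[].resources.requests.cpu to compute utilization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 100m      # HPA computes usage / this request
            memory: 64Mi
          limits:          # also required for the Guaranteed QoS class
            cpu: 500m
            memory: 128Mi
```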

edenreich commented 4 years ago

Ping - any update on this topic? Still following. I'm on v1.17.4 and having the same issue:

top_pod.go:274] Metrics not available for pod x

With nodes everything works fine; I can get the usage without changing anything.

brandond commented 4 years ago

@edenreich If your issue is with kubectl top pods when using Docker, I believe this was fixed in https://github.com/rancher/k3s/pull/1627 - you might try updating to the latest stable release. This issue is about the HPA, which does not sound like what you're running into. Hard to tell with limited information, though.

edenreich commented 4 years ago

@brandond that was exactly what I meant. Awesome, good to know; I'm going to try this out. I needed this to work in order to use HPA based on these metrics.

stale[bot] commented 3 years ago

This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.