wglambert closed this issue 3 years ago
Metrics-server is already included with v1.0.0-rc1; if `kubectl top nodes` works, that is a good indication that it is running. It looks like you need to include a namespace with `kubectl top pod`, e.g. `kubectl top pod -A`.
$ kubectl top pod -A
W1114 08:58:41.766611 3134 top_pod.go:266] Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 16h42m20.766604134s
error: Metrics not available for pod default/nginx-7bfff5fd9f-rbklh, age: 16h42m20.766604134s
On v0.10.2 in a VirtualBox VM:
$ k3s -v
k3s version v0.10.2 (8833bfd9)
$ kubectl top no
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k3s-1 235m 5% 1323Mi 19%
$ kubectl top po -A
W1114 09:28:44.143327 20025 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 6m1.143318282s
error: Metrics not available for pod default/php-apache-79544c9bd9-lqvp7, age: 6m1.143318282s
On v1.0.0-rc3, without installing kube-metrics:
$ k3s -v
k3s version v1.0.0-rc3 (4a267279)
$ kubectl top no
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k3s-1 295m 7% 1422Mi 21%
$ kubectl top po -A
W1114 09:39:29.892289 31589 top_pod.go:259] Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 2m18.892281016s
error: Metrics not available for pod default/php-apache-79544c9bd9-s5x2z, age: 2m18.892281016s
What OS are you using? Does `curl https://raw.githubusercontent.com/rancher/k3s/master/contrib/util/check-config.sh | sh -` give any helpful info about missing kernel modules?
On the VM: Ubuntu 18.04.1. On the host (used for the first example): 18.04.3.
In the VM
On the host
I've just tested this on a simple bare-metal install; the metrics server is able to fetch the pod information, and the horizontal pod autoscaler works fine using the same methodology as I did with k3s.
$ kubectl top no
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ayanami-clone 339m 8% 1412Mi 21%
$ kubectl top po
NAME CPU(cores) MEMORY(bytes)
php-apache-79544c9bd9-n8lqt 1m 9Mi
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/50% 1 10 1 103m
Doing the load generation from https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#increase-load
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
OK!OK!OK!OK!OK!OK!OK!OK!OK!OK
. . .
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 250%/50% 1 10 5 107m
$ kubectl top po -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
default load-generator-5fb4fb465b-fjxsr 6m 1Mi
default php-apache-79544c9bd9-jwwq7 94m 12Mi
default php-apache-79544c9bd9-n8lqt 104m 12Mi
default php-apache-79544c9bd9-tpcdq 95m 12Mi
default php-apache-79544c9bd9-vnpcx 107m 12Mi
default php-apache-79544c9bd9-z89d9 94m 12Mi
kube-system coredns-5644d7b6d9-bc946 7m 7Mi
kube-system coredns-5644d7b6d9-jthlm 7m 7Mi
kube-system etcd-ayanami-clone 19m 25Mi
kube-system kube-apiserver-ayanami-clone 44m 264Mi
kube-system kube-controller-manager-ayanami-clone 17m 38Mi
kube-system kube-proxy-9wfvb 1m 11Mi
kube-system kube-scheduler-ayanami-clone 2m 12Mi
kube-system metrics-server-7557fbfb7d-r4zbh 2m 12Mi
kube-system weave-net-s4wzt 1m 57Mi
$ kubectl top po
NAME CPU(cores) MEMORY(bytes)
load-generator-5fb4fb465b-fjxsr 6m 1Mi
php-apache-79544c9bd9-jwwq7 94m 12Mi
php-apache-79544c9bd9-n8lqt 104m 12Mi
php-apache-79544c9bd9-tpcdq 95m 12Mi
php-apache-79544c9bd9-vnpcx 107m 12Mi
php-apache-79544c9bd9-z89d9 94m 12Mi
Would you mind trying k3s v1.0.0-rc4? That upgrades to k8s 1.16.3 which appears to have some fixes for metrics-server.
Same issue
$ k3s -v
k3s version v1.0.0-rc4 (fe4b9caf)
$ kubectl top no
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ayanami 836m 10% 4704Mi 40%
$ kubectl top po
W1115 14:58:50.238592 5960 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-z8wsv, age: 5m19.238587172s
error: Metrics not available for pod default/php-apache-79544c9bd9-z8wsv, age: 5m19.238587172s
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache <unknown>/50% 1 10 1 5m16s
I would like to chime in -- I've installed a k3s cluster with three x86 agent nodes and a raspberry pi as the master. The agents I installed with:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="agent --node-label=istio=enabled --token-file=/home/cfox/kube/token.txt --server https://knodemaster.localdomain:6443" sh -
When I try 'top node' I get this:
$ kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
(top pod returns an identical error)
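When `top` fails with ServiceUnavailable like this, the usual first check (the same one done further down in this thread) is whether the `metrics.k8s.io` APIService reports Available, and what the metrics-server pod itself is logging. A minimal sketch; the deployment name and `kube-system` namespace assume the stock k3s packaging:

```shell
# Check whether the aggregated metrics API is registered and Available;
# a False/FailedDiscoveryCheck status explains the ServiceUnavailable error.
kubectl get apiservice v1beta1.metrics.k8s.io

# Then look at what metrics-server itself is complaining about
# (assumes the stock kube-system deployment name used by k3s):
kubectl -n kube-system logs deploy/metrics-server
```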
Some more troubleshooting
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
php-apache-79544c9bd9-7bgm9 1/1 Running 0 108m
$ kubectl top po --v=10
I1205 16:42:40.640845 12509 loader.go:375] Config loaded from file: /etc/rancher/k3s/k3s.yaml
I1205 16:42:40.684173 12509 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" 'https://127.0.0.1:6443/api?timeout=32s'
I1205 16:42:40.700495 12509 round_trippers.go:443] GET https://127.0.0.1:6443/api?timeout=32s 200 OK in 16 milliseconds
I1205 16:42:40.700546 12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.700557 12509 round_trippers.go:452] Cache-Control: no-cache, private
I1205 16:42:40.700567 12509 round_trippers.go:452] Content-Type: application/json
I1205 16:42:40.700575 12509 round_trippers.go:452] Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.700583 12509 round_trippers.go:452] Content-Length: 135
I1205 16:42:40.700670 12509 request.go:968] Response Body: {"kind":"APIVersions","versions":["v1"],"serverAddressByClientCIDRs":[{"clientCIDR":"0.0.0.0/0","serverAddress":"172.29.0.218:6443"}]}
I1205 16:42:40.700993 12509 round_trippers.go:423] curl -k -v -XGET -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" 'https://127.0.0.1:6443/apis?timeout=32s'
I1205 16:42:40.701604 12509 round_trippers.go:443] GET https://127.0.0.1:6443/apis?timeout=32s 200 OK in 0 milliseconds
I1205 16:42:40.701636 12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.701643 12509 round_trippers.go:452] Content-Type: application/json
I1205 16:42:40.701649 12509 round_trippers.go:452] Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.701655 12509 round_trippers.go:452] Cache-Control: no-cache, private
I1205 16:42:40.701781 12509 request.go:968] Response Body: {"kind":"APIGroupList","apiVersion":"v1","groups":[{"name":"apiregistration.k8s.io","versions":[{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"},{"groupVersion":"apiregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiregistration.k8s.io/v1","version":"v1"}},{"name":"extensions","versions":[{"groupVersion":"extensions/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"extensions/v1beta1","version":"v1beta1"}},{"name":"apps","versions":[{"groupVersion":"apps/v1","version":"v1"}],"preferredVersion":{"groupVersion":"apps/v1","version":"v1"}},{"name":"events.k8s.io","versions":[{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"events.k8s.io/v1beta1","version":"v1beta1"}},{"name":"authentication.k8s.io","versions":[{"groupVersion":"authentication.k8s.io/v1","version":"v1"},{"groupVersion":"authentication.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authentication.k8s.io/v1","version":"v1"}},{"name":"authorization.k8s.io","versions":[{"groupVersion":"authorization.k8s.io/v1","version":"v1"},{"groupVersion":"authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"authorization.k8s.io/v1","version":"v1"}},{"name":"autoscaling","versions":[{"groupVersion":"autoscaling/v1","version":"v1"},{"groupVersion":"autoscaling/v2beta1","version":"v2beta1"},{"groupVersion":"autoscaling/v2beta2","version":"v2beta2"}],"preferredVersion":{"groupVersion":"autoscaling/v1","version":"v1"}},{"name":"batch","versions":[{"groupVersion":"batch/v1","version":"v1"},{"groupVersion":"batch/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"batch/v1","version":"v1"}},{"name":"certificates.k8s.io","versions":[{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"certificates.k8s.io/v1beta1","version":"v1beta1"
}},{"name":"networking.k8s.io","versions":[{"groupVersion":"networking.k8s.io/v1","version":"v1"},{"groupVersion":"networking.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}},{"name":"policy","versions":[{"groupVersion":"policy/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"policy/v1beta1","version":"v1beta1"}},{"name":"rbac.authorization.k8s.io","versions":[{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"},{"groupVersion":"rbac.authorization.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"rbac.authorization.k8s.io/v1","version":"v1"}},{"name":"storage.k8s.io","versions":[{"groupVersion":"storage.k8s.io/v1","version":"v1"},{"groupVersion":"storage.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"storage.k8s.io/v1","version":"v1"}},{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"},{"groupVersion":"admissionregistration.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}},{"name":"apiextensions.k8s.io","versions":[{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"},{"groupVersion":"apiextensions.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"apiextensions.k8s.io/v1","version":"v1"}},{"name":"scheduling.k8s.io","versions":[{"groupVersion":"scheduling.k8s.io/v1","version":"v1"},{"groupVersion":"scheduling.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"scheduling.k8s.io/v1","version":"v1"}},{"name":"coordination.k8s.io","versions":[{"groupVersion":"coordination.k8s.io/v1","version":"v1"},{"groupVersion":"coordination.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"coordination.k8s.io/v1","version":"v1"}},{"name":"node.k8s.io","versions":[{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}],"preferre
dVersion":{"groupVersion":"node.k8s.io/v1beta1","version":"v1beta1"}},{"name":"helm.cattle.io","versions":[{"groupVersion":"helm.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"helm.cattle.io/v1","version":"v1"}},{"name":"k3s.cattle.io","versions":[{"groupVersion":"k3s.cattle.io/v1","version":"v1"}],"preferredVersion":{"groupVersion":"k3s.cattle.io/v1","version":"v1"}},{"name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}]}
I1205 16:42:40.702285 12509 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" 'https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods'
I1205 16:42:40.703922 12509 round_trippers.go:443] GET https://127.0.0.1:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods 200 OK in 1 milliseconds
I1205 16:42:40.703940 12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.703948 12509 round_trippers.go:452] Cache-Control: no-cache, private
I1205 16:42:40.703955 12509 round_trippers.go:452] Content-Length: 154
I1205 16:42:40.703962 12509 round_trippers.go:452] Content-Type: application/json
I1205 16:42:40.703969 12509 round_trippers.go:452] Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.704007 12509 request.go:968] Response Body: {"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"},"items":[]}
I1205 16:42:40.704803 12509 round_trippers.go:423] curl -k -v -XGET -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods'
I1205 16:42:40.707003 12509 round_trippers.go:443] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I1205 16:42:40.707024 12509 round_trippers.go:449] Response Headers:
I1205 16:42:40.707031 12509 round_trippers.go:452] Content-Type: application/json
I1205 16:42:40.707037 12509 round_trippers.go:452] Date: Fri, 06 Dec 2019 00:42:40 GMT
I1205 16:42:40.707043 12509 round_trippers.go:452] Cache-Control: no-cache, private
I1205 16:42:40.707105 12509 request.go:968] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/default/pods","resourceVersion":"5355"},"items":[{"metadata":{"name":"php-apache-79544c9bd9-7bgm9","generateName":"php-apache-79544c9bd9-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/php-apache-79544c9bd9-7bgm9","uid":"6c7e23da-1c50-440b-837a-9c176c0a8532","resourceVersion":"451","creationTimestamp":"2019-12-05T22:54:37Z","labels":{"pod-template-hash":"79544c9bd9","run":"php-apache"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"php-apache-79544c9bd9","uid":"4048c362-6ef0-4088-bc65-426a4c1806c9","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-j9gjl","secret":{"secretName":"default-token-j9gjl","defaultMode":420}}],"containers":[{"name":"php-apache","image":"k8s.gcr.io/hpa-example","ports":[{"containerPort":80,"protocol":"TCP"}],"resources":{"limits":{"cpu":"500m"},"requests":{"cpu":"200m"}},"volumeMounts":[{"name":"default-token-j9gjl","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"ayanami","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:37Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:52Z"},{"ty
pe":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:52Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2019-12-05T22:54:37Z"}],"hostIP":"172.29.0.218","podIP":"10.42.0.6","podIPs":[{"ip":"10.42.0.6"}],"startTime":"2019-12-05T22:54:37Z","containerStatuses":[{"name":"php-apache","state":{"running":{"startedAt":"2019-12-05T22:54:51Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/hpa-example:latest","imageID":"docker-pullable://k8s.gcr.io/hpa-example@sha256:581697a37f0e136db86d6b30392f0db40ce99c8248a7044c770012f4e8491544","containerID":"docker://8fb06b7db1dd94d78f0dd1bc6768210d41ab901e64fdcce012d35095f3c72301","started":true}],"qosClass":"Burstable"}}]}
W1205 16:42:40.717320 12509 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 1h48m3.717310451s
F1205 16:42:40.717355 12509 helpers.go:114] error: Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 1h48m3.717310451s
So it does a curl -k -v . . . 'https://127.0.0.1:6443/api/v1/namespaces/default/pods'
and the response body contains the pod's resource figures, yet kubectl still says there are no metrics. Parsing the JSON in that response body gives you those values:
$ curl -k -v -XGET -H "Authorization: Basic YWRtaW46ODUzYTIxOTZlNThmOWJiYmU3YWZiYWQ3YWU3Y2YwYjM=" -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.1 (linux/amd64) kubernetes/d647ddb" 'https://127.0.0.1:6443/api/v1/namespaces/default/pods' 2>/dev/null | jq .items[].spec.containers[].resources
{
"limits": {
"cpu": "500m"
},
"requests": {
"cpu": "200m"
}
}
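Worth noting which request actually carries the metrics: the earlier call to `/apis/metrics.k8s.io/v1beta1/namespaces/default/pods` in the verbose output came back with an empty `items` list, and that empty list is exactly what produces the "Metrics not available" warning; the jq query above is reading the configured requests/limits out of the pod spec, not live usage. A small illustrative check of the same condition:

```shell
# The PodMetricsList that kubectl top consumes, reproduced (abbreviated)
# from the verbose output above: items is empty, so there is nothing to show.
resp='{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","items":[]}'
echo "$resp" | jq '.items | length'   # prints 0 -> "Metrics not available"
```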
The only real difference I noticed with these requests between the kubeadm install and k3s is that k3s uses `v1` plus Basic authorization, whereas the kubeadm install uses `v1beta1` without an auth header.
Working kubeadm install:
Uses curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.16.3 (linux/amd64) kubernetes/b3cbbae" 'https://172.29.0.90:6443/apis/metrics.k8s.io/v1beta1/namespaces/default/pods'
Is this an issue with the kubectl version? Does `k3s kubectl` work?
$ k3s kubectl top po
W1205 17:14:51.706941 30517 top_pod.go:266] Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 2h20m14.706931617s
error: Metrics not available for pod default/php-apache-79544c9bd9-7bgm9, age: 2h20m14.706931617s
$ k3s kubectl get po
NAME READY STATUS RESTARTS AGE
php-apache-79544c9bd9-7bgm9 1/1 Running 0 140m
I've got the same problem (#1149). Everything is OK, but metrics cannot get a pod's CPU and memory values. Version: k3s version v1.0.0 (18bd921)
@erikwilson I just found out that the issue happens (for me at least) only when running in docker mode: `export INSTALL_K3S_EXEC="--docker"`. Otherwise it works.
Does that give you any pointers?
My docker version is 19.03.1.
The problem is that I must use Docker mode... :(
Same issue:
pi@mastercube:~ $ k3s -v
k3s version v1.0.0 (18bd921c)
pi@mastercube:~ $ kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
Also some logs that might help
Dec 11 12:23:42 mastercube k3s[653]: W1211 12:23:42.226540 653 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
Dec 11 12:23:44 mastercube k3s[653]: time="2019-12-11T12:23:44.786528541+01:00" level=error msg="failed to process config: failed to process /var/lib/rancher/k3s/server/manifests/coredns.yaml: failed to create kube-system/kube-dns /v1, Kind=Service for kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIP: Invalid value: \"10.43.0.10\": provided IP is already allocated"
Dec 11 12:23:46 mastercube k3s[653]: E1211 12:23:46.720120 653 available_controller.go:416] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: 401
pi@mastercube:~ $ kubectl get service --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d18h
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d18h
kube-system metrics-server ClusterIP 10.43.171.209 <none> 443/TCP 128m
kube-system traefik LoadBalancer 10.43.61.120 192.168.0.180 80:30901/TCP,443:31663/TCP,8080:31151/TCP 5d18h
pi@mastercube:~ $ kubectl get apiservices | grep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server False (FailedDiscoveryCheck) 130m
Status:
Conditions:
Last Transition Time: 2019-12-11T09:19:13Z
Message: failing or missing response from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: bad status from https://10.43.171.209:443/apis/metrics.k8s.io/v1beta1: 401
Reason: FailedDiscoveryCheck
Status: False
Type: Available
Events: <none>
pi@mastercube:~ $ kubectl logs -n kube-system metrics-server-6d684c7b5-thg2b
ect with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:15.292780 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231467 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231586 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.231787 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.239085 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
E1211 13:55:18.244691 1 authentication.go:65] Unable to authenticate the request due to an error: [x509: subject with cn=kubernetes-proxy is not in the allowed list, x509: certificate signed by unknown authority]
Well, after lots of time the solution was adding the cn `kubernetes-proxy` to the allowed list, as described here: https://github.com/kubernetes-sigs/metrics-server/issues/292
pi@mastercube:~ $ kubectl top po
NAME CPU(cores) MEMORY(bytes)
nfs-client-provisioner-6cf568d56b-nsnnj 6m 5Mi
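For anyone hitting the same 401 / "not in the allowed list" errors: the fix in that metrics-server issue boils down to making the front-proxy client CN acceptable to metrics-server, which reads the allowed names from the `extension-apiserver-authentication` ConfigMap. A hedged sketch of how one might inspect and adjust it (exact contents vary by install):

```shell
# See which client CNs the aggregation layer currently allows:
kubectl -n kube-system get configmap extension-apiserver-authentication \
  -o jsonpath='{.data.requestheader-allowed-names}'

# Edit the ConfigMap so the list includes "kubernetes-proxy", then restart
# metrics-server so it re-reads the authentication config:
kubectl -n kube-system edit configmap extension-apiserver-authentication
kubectl -n kube-system rollout restart deploy/metrics-server
```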
Same issue on k3s version v1.0.0 in --docker mode: `kubectl top node` is OK, but `kubectl top pod` gives error: Metrics not available for pod
do you run the k3s server in docker mode or not?
Same issue with k3s --docker mode. Any progress on this subject in the last 2 months? Thx.
@pierdobauce it's hard to say who is to blame here, but I'm thinking it's more on metrics-server than k3s. There's a bunch of issues that were closed recently (not resolved) on their GH issue tracker.
I think there should be a note in the docs that metrics-server is currently not supported when using Docker CRI or something.
For those looking, this may help you: https://github.com/kubernetes-sigs/metrics-server/issues/349
Is this still an issue for you in v1.18.2 @wglambert? Looks like it was fixed as part of https://github.com/rancher/k3s/issues/1554
It seems to be working fine, following the docs here: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler
The `php-apache` deployment is able to return integer metrics, but trying it on a lone nginx pod doesn't work?
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx Deployment/nginx <unknown>/50% 1 10 1 5m58s
php-apache Deployment/php-apache 0%/50% 1 10 1 2m54s
`journalctl` logs for that nginx HPA:
failed to get cpu utilization: missing request for cpu
type: 'Warning' reason: 'FailedGetResourceMetric' missing request for cpu
type: 'Warning' reason: 'FailedComputeMetricsReplicas' invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
But a `kubectl top pod` displays numbers for the CPU and MEM:
$ k3s -v
k3s version v1.18.2+k3s1 (698e444a)
$ kubectl top po
NAME CPU(cores) MEMORY(bytes)
nginx-f89759699-k7tzh 0m 2Mi
php-apache 1m 9Mi
But it definitely works; this issue might just be an oddity with how that nginx was deployed?
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
nginx Deployment/nginx <unknown>/50% 1 10 1 14m
php-apache Deployment/php-apache 251%/50% 1 10 6 11m
It looks like you didn't specify a cpu resource request for that pod. 'top pods' only cares about what it's using, while the HPA wants to compare utilization to requests to figure out whether it needs to scale up or down.
I've gone through and added cpu and memory requests to all my pods. Even if you're not using limits or autoscaling, the scheduler works better if it knows what to expect. The QoS stuff also requires requests (and limits, if you want to use the guaranteed class).
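As a concrete illustration of that point, here is one way to retrofit a cpu request onto an existing deployment so the HPA can compute utilization as usage/request. The deployment name and values are just examples, not taken from this thread's manifests:

```shell
# Hypothetical example: add a cpu request (and limits) to the first
# container of a deployment named "nginx"; the HPA needs the request
# to turn raw usage into a utilization percentage.
kubectl patch deploy nginx --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/resources",
   "value": {"requests": {"cpu": "100m", "memory": "64Mi"},
             "limits":   {"cpu": "300m"}}}
]'
```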
ping - any update on this topic? Still following. I'm on v1.17.4 and having the same issue:
top_pod.go:274] Metrics not available for pod x
With nodes everything works fine; I can get the usage without changing anything.
@edenreich If your issue is with `kubectl top pods` when using Docker, I believe this was fixed in https://github.com/rancher/k3s/pull/1627 - you might try updating to the latest stable. This issue is about the HPA, which does not sound like what you're running into. Hard to tell with limited information, though.
@brandond that was exactly what I meant - awesome, good to know. I'm going to try this out; I needed this to work in order to use the HPA based on these metrics.
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
Version:
k3s install
```console
$ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.0.0-rc1 INSTALL_K3S_EXEC="server" sh -s - --docker --kube-apiserver-arg=enable-admission-plugins=LimitRanger
[INFO] Using v1.0.0-rc1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0-rc1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0-rc1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO] Skipping /usr/local/bin/crictl symlink to k3s, command exists in PATH at /usr/bin/crictl
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
$ export KUBECONFIG=$(locate k3s.yaml)
$ sudo chmod 777 $KUBECONFIG
```

Describe the bug
Pods won't display metrics with kube-metrics installed from https://github.com/kubernetes-sigs/metrics-server. Horizontal Pod Autoscaling also doesn't resolve metrics; however,
`kubectl top nodes` resolves fine.

To Reproduce
Install and configure kube-metrics
```console
$ git clone https://github.com/kubernetes-incubator/metrics-server
Cloning into 'metrics-server'...
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 11349 (delta 0), reused 1 (delta 0), pack-reused 11345
Receiving objects: 100% (11349/11349), 12.18 MiB | 7.15 MiB/s, done.
Resolving deltas: 100% (5912/5912), done.
$ kubectl apply -f metrics-server/deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ kubectl edit deploy -n kube-system metrics-server
deployment.apps/metrics-server edited
$ kubectl get deploy -n kube-system metrics-server -o json | jq .spec.template.spec.containers[].args
[
  "--cert-dir=/tmp",
  "--secure-port=4443",
  "--kubelet-insecure-tls=true",
  "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"
]
```

Verify functionality
```console
$ kubectl get apiservices
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        3m50s
v1.admissionregistration.k8s.io        Local                        True        3m50s
v1beta1.admissionregistration.k8s.io   Local                        True        3m50s
v1beta1.apiextensions.k8s.io           Local                        True        3m50s
v1.apiextensions.k8s.io                Local                        True        3m50s
v1.apps                                Local                        True        3m50s
v1beta1.authentication.k8s.io          Local                        True        3m50s
v1.authentication.k8s.io               Local                        True        3m50s
v1.authorization.k8s.io                Local                        True        3m50s
v2beta1.autoscaling                    Local                        True        3m50s
v1beta1.authorization.k8s.io           Local                        True        3m50s
v2beta2.autoscaling                    Local                        True        3m50s
v1.batch                               Local                        True        3m50s
v1.autoscaling                         Local                        True        3m50s
v1beta1.batch                          Local                        True        3m50s
v1beta1.certificates.k8s.io            Local                        True        3m50s
v1.coordination.k8s.io                 Local                        True        3m50s
v1beta1.coordination.k8s.io            Local                        True        3m50s
v1beta1.events.k8s.io                  Local                        True        3m50s
v1.networking.k8s.io                   Local                        True        3m50s
v1beta1.extensions                     Local                        True        3m50s
v1beta1.networking.k8s.io              Local                        True        3m50s
v1beta1.policy                         Local                        True        3m50s
v1.rbac.authorization.k8s.io           Local                        True        3m50s
v1beta1.node.k8s.io                    Local                        True        3m50s
v1beta1.scheduling.k8s.io              Local                        True        3m50s
v1beta1.rbac.authorization.k8s.io      Local                        True        3m50s
v1.scheduling.k8s.io                   Local                        True        3m50s
v1.storage.k8s.io                      Local                        True        3m50s
v1beta1.storage.k8s.io                 Local                        True        3m50s
v1.k3s.cattle.io                       Local                        True        3m24s
v1.helm.cattle.io                      Local                        True        3m24s
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        93s
$ kubectl get po -n kube-system
NAME                                      READY   STATUS      RESTARTS   AGE
local-path-provisioner-58fb86bdfd-6czp5   1/1     Running     0          3m46s
coredns-d798c9dd-x782x                    1/1     Running     0          3m46s
helm-install-traefik-fsj6q                0/1     Completed   0          3m46s
traefik-65bccdc4bd-pcmbb                  1/1     Running     0          2m20s
svclb-traefik-fbtj7                       3/3     Running     0          2m20s
metrics-server-b5655b66c-gjt75            1/1     Running     0          76s
$ kubectl top no
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ayanami   765m         9%     9449Mi          80%
```

Using https://www.digitalocean.com/community/tutorials/how-to-autoscale-your-workloads-on-digitalocean-kubernetes as a reference
Create a deployment with resource limits
```console
$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl edit deploy nginx
deployment.apps/nginx edited
$ kubectl get deploy nginx -o json | jq .spec.template.spec.containers[].resources
{
  "limits": {
    "cpu": "300m"
  },
  "requests": {
    "cpu": "100m",
    "memory": "250Mi"
  }
}
```

Expected behavior
For pod metrics to be displayed; the node metrics work fine.
Actual behavior
Some error snippets:
Create an HPA and test pod metrics
```console
$ kubectl autoscale deploy nginx --min=1 --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginx autoscaled
$ kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment/nginx
```

Also following https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#create-horizontal-pod-autoscaler
Walkthrough autoscale
```console
$ kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=200m --limits=cpu=500m --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/php-apache created
deployment.apps/php-apache created
$ kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
$ kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
nginx        Deployment/nginx
```