ohardy closed this issue 4 years ago.
For the time being, Kubebox retrieves resources usage metrics from the Kubelet embedded cAdvisor, because of kubernetes/kubernetes#56297.
It seems that in your case the cAdvisor port is not accessible, or the cAdvisor endpoint is unavailable for some reason. What Kubernetes version and setup are you using?
As the Kubelet cAdvisor port is about to be deprecated (kubernetes/kubernetes#56523), and proxying the Kubelet detailed stats endpoint requires the cluster admin role when RBAC is enabled, I'm currently working on migrating Kubebox to https://github.com/kubernetes-incubator/metrics-server, for which I'm prototyping support for a short-term historical metrics API.
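For reference, the Metrics API served by metrics-server can be queried directly through the API server. The discovery paths below are the standard `metrics.k8s.io` group paths, not anything specific to Kubebox:

```shell
# Standard read paths of the metrics.k8s.io API group served by metrics-server
nodes_path="/apis/metrics.k8s.io/v1beta1/nodes"
pods_path="/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"
printf '%s\n' "${nodes_path}" "${pods_path}"

# Against a live cluster, these can be queried with:
#   kubectl get --raw "${nodes_path}"
#   kubectl top pods
```

If the raw queries fail, metrics-server is not registered or not serving, which is a separate problem from Kubebox itself.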
Hi,
I use the latest version with metrics-server, so yes, that's expected ;)
I will wait for your new release :)
Let me re-open this so that we can track improvements.
As per the discussion in https://github.com/kubernetes-incubator/metrics-server/pull/62, an historical metrics API should be designed to standardize access to that kind of data.
kubernetes/kubernetes#56297 has been fixed, so the charts should be available with the next Kubebox release containing a5f144ff280b1948be7fb3fe7d2f119c288b3100.
I'll leave this issue open to monitor progress toward better integration with the long-term Kubernetes monitoring pipeline.
I'm still seeing this with 0.3.2 when running locally or inside the cluster.
I'm running Kubernetes 1.11.3 via Rancher RKE. The metrics-server is running:
metrics-server-97bc649d5-ncl8b 1/1 Running 0 41m
Kubebox is running in its own namespace (`utilities`), and the `default` serviceAccount in that namespace has been added to the `cluster-admin` role:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: 2018-11-14T10:40:57Z
  name: utilities
  resourceVersion: "26317512"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/utilities
  uid: c50e5370-e7f9-11e8-b128-eea99a463970
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: utilities
```
According to the docs, this should be all we need to access the metrics server, but it still reports that the metrics are unavailable.
Is there a way to dig further into what `kubebox` is doing, or why it believes that the metrics are not available? Nothing useful is being logged by `kubebox` or `metrics-server`.
@oskapt The current version of Kubebox gets the resource usage data from the detailed stats endpoint on the Kubelet, which exposes data from cAdvisor. Accessing that endpoint generally requires cluster admin permissions.
The exact endpoint called is: https://github.com/astefanutti/kubebox/blob/4a80d34e9b627e959bd56bf552f077af45938f99/lib/client.js#L220
So the caller has to have permission to proxy the node Kubelet. You could try something like:
$ kubectl get --raw /api/v1/nodes/${node}/proxy/stats/${namespace}/${pod}/${uid}/${container}
Otherwise you can check the master API or kubelet logs.
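To make that concrete, here is the path Kubebox proxies through the API server, built from placeholder identifiers (the node, pod, uid, and container values below are invented and must be replaced with your own; the commented commands need a live cluster):

```shell
# Placeholder pod identity — substitute your own pod's values
node=worker-1
namespace=utilities
pod=kubebox-0
uid=12345678-aaaa-bbbb-cccc-1234567890ab
container=kubebox

# The proxied Kubelet stats path that Kubebox requests through the API server
path="/api/v1/nodes/${node}/proxy/stats/${namespace}/${pod}/${uid}/${container}"
echo "${path}"

# Against a live cluster:
#   kubectl get --raw "${path}"
# To check the permission itself (the nodes/proxy sub-resource), e.g. as the
# service account Kubebox runs under:
#   kubectl auth can-i get nodes/proxy --as=system:serviceaccount:utilities:default
```

If `kubectl auth can-i` answers `no`, the RBAC binding is the thing to fix before debugging anything in Kubebox.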
I plan to refactor the retrieval of resource usage metrics to get the data from the metrics API. However, this requires work to design a historical API first: the metrics-server implements the metrics API, which only exposes the latest data points.
I apologize - I misunderstood your reply on Sep 26 to say that the latest release supported the metrics-server.
After looking into it further today, it appears that I'm hitting this issue, which was opened by you in Nov 2017. It looks like that has been fixed in v1.12, and I'm still on v1.11.3.
I'll revisit this after an upgrade to 1.12.
OK, I can confirm that with 1.12 this is working as expected for CPU and memory stats. It still reports that network information is unavailable, and when I query the endpoint directly, I see that it's all 0 for every field. This has nothing to do with Kubebox, so I'll go digging again and report back with anything I find. If you have any suggestions on where to look, that would be helpful.
Thanks again for making this; it's quite useful for gathering data quickly.
@oskapt thanks for the feedback. Network monitoring is problematic and I haven't found a robust way to retrieve the data. I should probably disable the tab in the short term until that's sorted out.
Hi, the newer version of Kubebox, 0.3.2, is showing "Resources usage metrics unavailable". It worked with the 0.3.1 Windows binary.
@vlinx It may be caused by kubernetes/kubernetes#56297 which has been fixed in Kubernetes 1.12. Which Kubernetes version do you use?
This is the version I am using right now - kubernetes with KOPS on AWS.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.8", GitCommit:"7eab6a49736cc7b01869a15f9f05dc5b49efb9fc", GitTreeState:"clean", BuildDate:"2018-09-14T15:54:20Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
@vlinx version 1.10.8 doesn't contain the fix for kubernetes/kubernetes#56297. I removed the work-around for it in Kubebox 0.3.1, as kubernetes/kubernetes#62544 was considered for backport to versions 1.9 through 1.11. So until you can upgrade your Kubernetes server, one option is to use Kubebox 0.3.0.
Starting with the upcoming 0.8.0 version, Kubebox will rely on cAdvisor, deployed as a DaemonSet, to retrieve the resource usage metrics, as documented in 91a2a265e5f98102f4efd23810c3122d7f849a41. This will provide a richer and more portable source of container metrics.
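For a quick sanity check of such a deployment: each cAdvisor pod serves both a read-only REST API and a Prometheus endpoint on its web port (8080 by default). The namespace and label in the commented commands are assumptions for illustration, not Kubebox requirements:

```shell
# Endpoints served by each cAdvisor pod (port 8080 is cAdvisor's default)
api_path="/api/v1.3/subcontainers"
prom_path="/metrics"
printf '%s\n' "${api_path}" "${prom_path}"

# Spot-check against a live cluster (namespace/label are assumptions):
#   POD=$(kubectl -n cadvisor get pods -l app=cadvisor -o jsonpath='{.items[0].metadata.name}')
#   kubectl -n cadvisor port-forward "$POD" 8080:8080 &
#   curl -s "http://localhost:8080${prom_path}" | head
```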
Hi, I got this error when I try to view metrics.
The error returned from the API server with the latest version of Kubernetes:
Everything else works.
Do you know why ?