google / cadvisor

Analyzes resource usage and performance characteristics of running containers.

"container" and "image" labels missing for Kata containers #2651

Open ghost opened 4 years ago

ghost commented 4 years ago

We are running kubelet in standalone mode and scraping metrics from the /metrics/cadvisor API. We have noticed that for Kata containers the container and image labels are missing, except for one pod!! Attached below is part of the metrics output from cAdvisor.

container_cpu_usage_seconds_total{container="POD",cpu="total",endpoint="https-metrics",id="/kubepods/pod81712c4f-6027-4c04-8485-b8641fc80625/crio-a1b0c54963b1cb8eb75f088bde1ba5b0c09d4ef24c0b7a6d10cff10dd94e0727",instance="10.11.76.105:10250",job="kubelet",name="k8s_POD_mytestrunc-d947dc49-z695j_ns-team-aem_81712c4f-6027-4c04-8485-b8641fc80625_0",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="mytestrunc-d947dc49-z695j",prometheus="monitoring/k8s-e",service="kubelet"}    0.349990968
container_cpu_usage_seconds_total{container="bash",cpu="total",endpoint="https-metrics",id="/kubepods/pod81712c4f-6027-4c04-8485-b8641fc80625/crio-0fe3548c789696d576b12275334c61165827a47cc3b27a2de6621574350fd50b",image="docker.io/jcrowthe/nets:latest",instance="10.11.76.105:10250",job="kubelet",name="k8s_bash_mytestrunc-d947dc49-z695j_ns-team-aem_81712c4f-6027-4c04-8485-b8641fc80625_0",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="mytestrunc-d947dc49-z695j",prometheus="monitoring/k8s-e",service="kubelet"}   0.366889982
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/burstable/pod307b8fd4-4b48-4fc6-ac96-81e4f05474c6",instance="10.11.76.14:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000001",pod="tudorkata-aem-golden-publish-549cfc8bfb-9rkd6",prometheus="monitoring/k8s-e",service="kubelet"} 3775.761268429
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/burstable/pod5942b744-0bcd-4f54-b4eb-21052f0ed24c",instance="10.11.76.14:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000001",pod="tudorkata-aem-author-56f95444d9-wbdmk",prometheus="monitoring/k8s-e",service="kubelet"} 30718.671735542
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/burstable/podc8dc557a-0461-4619-ae0e-6ff0708751fc",instance="10.11.76.6:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000000",pod="tudorkata-aem-publish-d64fd569d-68nhq",prometheus="monitoring/k8s-e",service="kubelet"}  4482.130998142
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/pod15d5e3ec-617f-4140-9708-c4280c5cd6c6",instance="10.11.76.105:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="mytest300m-d794566c5-vm28g",prometheus="monitoring/k8s-e",service="kubelet"} 285.760756734
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/pod4b20472c-99f4-41b9-a793-48c6530ec823",instance="10.11.76.105:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="bash-kata-cfd44c8d7-kckr2",prometheus="monitoring/k8s-e",service="kubelet"}  3147.638311245
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/pod81712c4f-6027-4c04-8485-b8641fc80625",instance="10.11.76.105:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="mytestrunc-d947dc49-z695j",prometheus="monitoring/k8s-e",service="kubelet"}  0.842201924
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/pod969c8239-4c5c-475f-b82e-16f05e17b9d7",instance="10.11.76.14:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000001",pod="mytest-5cf468d58c-wnrxn",prometheus="monitoring/k8s-e",service="kubelet"} 3227.464075355
container_cpu_usage_seconds_total{cpu="total",endpoint="https-metrics",id="/kubepods/podfe5b9da2-ad88-4ca2-b51e-90367b405642",instance="10.11.76.105:10250",job="kubelet",namespace="ns-team-aem",node="vmss-agent-kata0-sgttw000005",pod="bashkata-57795659d8-t4l59",prometheus="monitoring/k8s-e",service="kubelet"}  3186.125770263

As can be seen, the first two metrics from a Kata container have the container and image labels, but the rest of the metrics do not. Could someone help explain what's causing this inconsistency? Also, please note that this issue does not occur for non-Kata containers!!
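A minimal Go sketch for confirming which series are affected, assuming the /metrics/cadvisor output has been saved to a local text file first (the file name below is made up); it uses the Prometheus expfmt parser to list the container_cpu_usage_seconds_total series that lack the container or image label:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/prometheus/common/expfmt"
)

func main() {
	// Hypothetical dump of the kubelet's /metrics/cadvisor response.
	f, err := os.Open("cadvisor-metrics.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var parser expfmt.TextParser
	families, err := parser.TextToMetricFamilies(f)
	if err != nil {
		log.Fatal(err)
	}

	fam, ok := families["container_cpu_usage_seconds_total"]
	if !ok {
		log.Fatal("container_cpu_usage_seconds_total not found in dump")
	}
	for _, m := range fam.GetMetric() {
		labels := map[string]string{}
		for _, lp := range m.GetLabel() {
			labels[lp.GetName()] = lp.GetValue()
		}
		// Series without a runtime integration carry only cgroup-level
		// labels, i.e. no container/image.
		if labels["container"] == "" || labels["image"] == "" {
			fmt.Printf("pod=%s id=%s (container=%q image=%q)\n",
				labels["pod"], labels["id"], labels["container"], labels["image"])
		}
	}
}
```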

Version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
dashpole commented 4 years ago

I'm surprised you are getting labels from any kata containers...

cAdvisor requires an in-tree integration for each container runtime. Obviously, this is not optimal, as we would prefer that runtimes be abstracted away by the container runtime interface...

We don't have an in-tree integration for KATA, so we can only get cgroup metrics, and no metadata.

We would need to implement kata here: https://github.com/google/cadvisor/tree/master/container
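To make the gap concrete, here is a rough, hypothetical Go sketch of what a runtime-specific integration does; the type and function names below are simplified stand-ins, not cAdvisor's real interfaces in the container package. A factory claims the cgroups it recognizes as Kata-managed and resolves their metadata from the runtime; anything unclaimed falls through to raw cgroup handling, which can report usage but not the container or image labels.

```go
package main

import (
	"fmt"
	"strings"
)

// containerMetadata carries the two labels the issue reports as missing.
type containerMetadata struct {
	Container string // e.g. "bash"
	Image     string // e.g. "docker.io/jcrowthe/nets:latest"
}

// kataHandler would ask the Kata runtime (e.g. via CRI or its shim) which
// container and image back a given cgroup. Here it returns placeholders.
type kataHandler struct {
	cgroupPath string
}

func (h *kataHandler) metadata() containerMetadata {
	// Real code would resolve this from the runtime; these values are fake.
	return containerMetadata{Container: "bash", Image: "docker.io/jcrowthe/nets:latest"}
}

// canHandle mimics the check a runtime-specific factory performs: claim
// cgroups it recognizes, leave everything else to other factories.
// The "kata" substring test is only a placeholder heuristic.
func canHandle(cgroupPath string) bool {
	return strings.Contains(cgroupPath, "kata")
}

func main() {
	for _, path := range []string{
		"/kubepods/pod4b20472c/kata-0fe3548c", // pretend Kata container
		"/kubepods/pod4b20472c",               // plain pod cgroup, unclaimed
	} {
		if canHandle(path) {
			h := &kataHandler{cgroupPath: path}
			md := h.metadata()
			fmt.Printf("%s -> container=%q image=%q\n", path, md.Container, md.Image)
		} else {
			// Today's behaviour for Kata: raw cgroup stats, no metadata labels.
			fmt.Printf("%s -> cgroup stats only, no container/image labels\n", path)
		}
	}
}
```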

ghost commented 4 years ago

Thanks @dashpole for the detailed explanation!!

I'm surprised you are getting labels from any kata containers...

I was also surprised when I first saw this; I'm not sure why this one pod (mytestrunc-d947dc49-z695j) inconsistently has the container and image labels while the others don't.

We would need to implement kata here: https://github.com/google/cadvisor/tree/master/container

Out of curiosity, is there a timeline for when this implementation will be rolled out? If so, could you share the target dates?

dashpole commented 4 years ago

We align our releases with kubernetes releases. 1.19 was just released, so if it is merged in the next few months, it would be released around the beginning of December. I don't personally have plans to implement it.

ghost commented 2 years ago

Hello @dashpole, could you please share the timeline status for this issue? Thanks.

dashpole commented 2 years ago

I am no longer working on this project. The timeline would be based on when someone decides to contribute the feature. It may also be moot given https://github.com/kubernetes/enhancements/issues/2371