robusta-dev / robusta

Kubernetes observability and automation, with an awesome Prometheus integration
https://home.robusta.dev/

Drops image label in container_memory_working_set_bytes metric #1435

Open · wrbbz opened this issue 4 months ago

wrbbz commented 4 months ago

According to the documentation, container_memory_working_set_bytes has only three labels by default.

The image label was therefore removed from the query, since it may be absent.
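
For illustration, here is a minimal before/after sketch of the kind of change described above. The queries and the pod="my-pod" selector are hypothetical, not the exact expressions used in Robusta:

# before (hypothetical): assumes the image label is always present
sum(container_memory_working_set_bytes{pod="my-pod", image!=""}) by (container)

# after (hypothetical): filter on container instead, since image may be absent
sum(container_memory_working_set_bytes{pod="my-pod", container!=""}) by (container)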

arikalon1 commented 4 months ago

hey @wrbbz

We filter on that label to prevent the metric from being incorrectly doubled (see the Stack Overflow question below): https://stackoverflow.com/questions/69281327/why-container-memory-usage-is-doubled-in-cadvisor-metrics
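
For context, a hedged PromQL illustration of that doubling (the namespace="demo" selector is hypothetical, and the exact set of series depends on the cAdvisor version): per the linked question, cAdvisor exposes a pod-level aggregate series with empty container and image labels alongside the per-container series, so an unfiltered sum counts the memory roughly twice.

# unfiltered: the pod-level aggregate series (container="", image="") is summed
# together with the per-container series, roughly doubling the result
sum(container_memory_working_set_bytes{namespace="demo"}) by (pod)

# filtered on a non-empty image (or container) label: only per-container series remain
sum(container_memory_working_set_bytes{namespace="demo", image!=""}) by (pod)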

Are you scraping this from cAdvisor? Which version? What is your Prometheus distribution (kube-prometheus-stack or something else)? And do you have any metric_relabel_configs?

wrbbz commented 4 months ago

Yeah

I'm using Prometheus installed in a k8s cluster, version 2.45.2. The metric has these labels: container_memory_working_set_bytes{container="", instance="", job="", namespace="", node="", pod="", scrape_endpoint="cadvisor", tier=""}

Also, there are relabeling rules (a rough metric_relabel_configs sketch follows the list):

* Renamed labels:
  * pod_name -> pod
  * container_name -> container
* Dropped labels (id, image, name)
* Dropped metrics without the pod, container, or namespace label
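
A minimal sketch of how such rules might look as Prometheus metric_relabel_configs; this is an assumption for illustration only, and the actual scrape config in this cluster may differ:

metric_relabel_configs:
  # rename pod_name -> pod and container_name -> container (default action is
  # replace, and the default replacement is the first capture group)
  - source_labels: [pod_name]
    regex: '(.+)'
    target_label: pod
  - source_labels: [container_name]
    regex: '(.+)'
    target_label: container
  # drop the id, image and name labels
  - regex: '(id|image|name)'
    action: labeldrop
  # keep only series where pod, container and namespace are all non-empty
  - source_labels: [pod, container, namespace]
    regex: '.+;.+;.+'
    action: keep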

However, according to the Kubernetes metrics specification, container_memory_working_set_bytes has only three labels, and image is not among them.

Also, according to the answer on Stack Overflow, there would not be any doubling, because metrics without the container label are excluded:

sum(container_memory_working_set_bytes{namespace="$namespace", pod=~"$pod", container!=""}) by (pod, job)

and

sum(container_memory_working_set_bytes{namespace="$namespace", pod=~"$pod", container=~"$container"}) by (container, pod, job)

Part of the answer:

There are multiple ways to fix the problem. For example, you can exclude metrics without container name by using container!="" label filter.

wrbbz commented 4 months ago

@arikalon1, what do you think?