prometheus-operator / kube-prometheus

Use Prometheus to monitor Kubernetes and applications running on Kubernetes
https://prometheus-operator.dev/
Apache License 2.0

Incorrect Container Memory Consumption Graph Behavior When Pod is Restarted #2522

Open vladmalynych opened 2 months ago

vladmalynych commented 2 months ago

Problem:

The Grafana dashboards defined in grafana-dashboardDefinitions.yaml include graphs for memory consumption per pod. The memory consumption query currently used is:

https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/grafana-dashboardDefinitions.yaml#L8300

                  "targets": [
                      {
                          "datasource": {
                              "type": "prometheus",
                              "uid": "${datasource}"
                          },
                          "expr": "sum(container_memory_working_set_bytes{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\", container!=\"\", image!=\"\"}) by (container)",
                          "legendFormat": "__auto"
                      },
                      {
                          "datasource": {
                              "type": "prometheus",
                              "uid": "${datasource}"
                          },
                          "expr": "sum(\n    kube_pod_container_resource_requests{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\", resource=\"memory\"}\n)\n",
                          "legendFormat": "requests"
                      },
                      {
                          "datasource": {
                              "type": "prometheus",
                              "uid": "${datasource}"
                          },
                          "expr": "sum(\n    kube_pod_container_resource_limits{job=\"kube-state-metrics\", cluster=\"$cluster\", namespace=\"$namespace\", pod=\"$pod\", resource=\"memory\"}\n)\n",
                          "legendFormat": "limits"
                      }
                  ],
                  "title": "Memory Usage (WSS)",
                  "type": "timeseries"
              },

When a pod is restarted, the current query briefly sums memory usage from both the old and the new container instance. This can lead to temporary spikes in the displayed memory consumption. As a result, the dashboard may show memory usage that exceeds the container's memory limit, even though the actual memory consumption is within the limit.
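One way around this (just a sketch, not necessarily the right fix for the mixin; the selectors simply mirror the panel query above) would be to take the per-container maximum instead of the sum, so an overlapping series left over from the terminated container cannot be added on top of the new one:

    # Current panel query: all series with the same container name are summed,
    # so the old and the new container instance are counted together during the overlap.
    sum(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", cluster="$cluster", namespace="$namespace", pod="$pod", container!="", image!=""}) by (container)

    # Possible alternative (illustrative only): take the maximum per container name instead,
    # so a lingering series from the restarted container cannot inflate the value.
    max(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", cluster="$cluster", namespace="$namespace", pod="$pod", container!="", image!=""}) by (container)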

[Screenshots attached: Screenshot 2024-09-17 at 14.20.49, Screenshot 2024-09-17 at 14.22.55 (1)]

Steps to Reproduce:

froblesmartin commented 3 weeks ago

Hi! I also ran into this issue, but I am thinking it may not be a problem with the dashboard itself, but with the metric or the scraping configuration, no? 🤔 In my case I still see the previous container run for about 4:30 minutes (comparing when the new run started with when the metrics from the previous one disappear).
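A quick way to check this (illustrative query; the namespace and pod values are placeholders) is to look at the raw series without any aggregation. If, during the overlap, two series with the same container label but different id (cgroup path) values show up, the extra one belongs to the terminated container:

    # Instant query, e.g. in the Prometheus UI or Grafana Explore; replace the placeholders.
    container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", namespace="<namespace>", pod="<pod>", container!="", image!=""}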

I would expect this metric to show only what the official documentation describes, the

Current working set of the container in bytes

but, as seen, it also includes memory from a container that is no longer running.

Could it be a misconfiguration in the metrics scraper?

And the problem is that, for a dashboard showing a single pod, it is workable to show the different containers separately, but what about a dashboard showing the total memory usage in the cluster? If you still sum the overlapping instances of the same container, you will be displaying an inflated number. 🤔
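Just to illustrate what I mean (a sketch, not a proposed fix; the selectors are assumptions based on the single-pod query above): a cluster-wide total would have to deduplicate the overlapping instances of the same container first, for example by taking the max per (namespace, pod, container) before summing:

    # Naive cluster-wide total: overlapping old/new instances of a container are both counted.
    sum(container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", container!="", image!=""})

    # Deduplicated variant (illustrative only): collapse overlapping instances of the same
    # container first, then sum across the cluster.
    sum(
        max by (namespace, pod, container) (
            container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", container!="", image!=""}
        )
    )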