Open johnswarbrick-napier opened 4 months ago
You'd need to use `metricRelabelings` in the kubelet ServiceMonitor to filter time series by their `namespace` label. See https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/running-exporters.md#metric-relabeling for an example.
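A minimal sketch of what that could look like on the kubelet ServiceMonitor endpoint, assuming the namespaces to keep are `team-a` and `team-b` (placeholder names) and the endpoint port is `https-metrics`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: kubelet
  namespace: monitoring
spec:
  endpoints:
    - port: https-metrics
      scheme: https
      metricRelabelings:
        # Keep only series whose `namespace` label matches the allow-list.
        - sourceLabels: [namespace]
          regex: team-a|team-b
          action: keep
```

Note that `action: keep` also drops series that carry no `namespace` label at all (e.g. node-level kubelet series), so depending on what you want to retain, an `action: drop` rule matching the unwanted namespaces may be the better choice.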
This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.
Is there an existing issue for this?
What happened?
I am deploying Prometheus Operator into a shared Kubernetes cluster with a large number of namespaces.
However I only want to discover resources and receive alerts for a small number of explicitly listed namespaces.
I have configured the Prometheus Operator to only discover resources in a given namespace list:
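For context, with the kube-prometheus-stack chart this kind of restriction is typically expressed roughly as follows (a sketch only; the namespace names are placeholders and the exact values keys depend on the chart version):

```yaml
prometheusOperator:
  namespaces:
    releaseNamespace: true
    additional:
      - team-a
      - team-b
```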
This works fine for limiting the discovery of ServiceMonitors.
However, I am receiving Prometheus defaultRules alerts for other namespaces that are not included in the namespace list, for example CPUThrottlingHigh, which uses this default Prometheus rule:

```promql
sum by (cluster, container, pod, namespace) (increase(container_cpu_cfs_throttled_periods_total{container!=""}[5m]))
  /
sum by (cluster, container, pod, namespace) (increase(container_cpu_cfs_periods_total[5m]))
  > (25 / 100)
```
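To confirm which namespaces these throttling series come from, an ad-hoc query along these lines (run in the Prometheus expression browser) groups the raw counter behind that rule by namespace:

```promql
count by (namespace) (container_cpu_cfs_throttled_periods_total{container!=""})
```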
The defaultRules alerts being fired all seem to relate to metrics that Prometheus scrapes from the kubelet.
I think the problem is that the kubelet scrape configuration is created and managed by the Prometheus Operator, but the metrics scraped from the kubelet are not filtered to the explicit list of namespaces that I provided to the Prometheus Operator.
How can I restrict the kubelet metrics so that only series from the specific namespaces listed in my Prometheus Operator configuration are scraped or stored?
Prometheus Operator Version
Kubernetes Version
Kubernetes Cluster Type
AKS
How did you deploy Prometheus-Operator?
helm chart: prometheus-community/kube-prometheus-stack
Manifests
No response
prometheus-operator log output
Anything else?
No response