rca0 opened 5 years ago
If this is just a warning that is printed within the first few minutes after restarting Prometheus, then this is expected and nothing to worry about.
Unfortunately the log message is constant... should I be worried about this?
These are the only warnings I get after startup:
caller=manager.go:389 component="rule manager" group=alertmanager.rules msg="Evaluating rule failed" rule="alert: AlertmanagerConfigInconsistent\nexpr: count_values by(service) (\"config_hash\", alertmanager_config_hash{job=\"prometheus-operator-alertmanager\"})\n / on(service) group_left() label_replace(prometheus_operator_spec_replicas{controller=\"alertmanager\",job=\"prometheus-operator-operator\"},\n \"service\", \"alertmanager-$1\", \"name\", \"(.*)\") != 1\nfor: 5m\nlabels:\n severity: critical\nannotations:\n message: The configuration of the instances of the Alertmanager cluster `{{$labels.service}}`\n are out of sync.\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{container_name!=\"\",image!=\"\",job=\"kubelet\"}[5m]))\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{container_name!=\"\",image!=\"\",job=\"kubelet\"})\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n label_replace(kube_pod_labels{job=\"kube-state-metrics\"}, \"pod_name\", \"$1\", \"pod\",\n \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
caller=manager.go:389 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"})\n * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
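For context on the error itself: with group_left, the right-hand side of the join is the "one" side and must contain exactly one series per matching label set; the error fires when it doesn't. A quick diagnostic for the k8s.rules failures above, to check whether duplicated kube_pod_labels series are the culprit (my own sketch, not taken from the rule files):

# Lists every (namespace, pod_name) pair for which the right-hand side
# of the failing joins has more than one series.
count by (namespace, pod_name) (
  label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "pod", "(.*)")
) > 1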
Seeing a similar regular warning after upgrading to prometheus-operator release 1.7.0, Prometheus v2.5.0.
level=warn ts=2019-01-16T15:05:48.970506555Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_cpu_usage_seconds_total:sum_rate\nexpr: sum by(namespace, label_name) (sum by(namespace, pod_name) (rate(container_cpu_usage_seconds_total{container_name!=\"\",image!=\"\",job=\"kubelet\"}[5m]))\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.974490062Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:container_memory_usage_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(pod_name, namespace) (container_memory_usage_bytes{container_name!=\"\",image!=\"\",job=\"kubelet\"})\n * on(namespace, pod_name) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.976632336Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_memory_bytes:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_memory_bytes{job=\"kube-state-metrics\"})\n * on(namespace, pod) group_left(label_name) label_replace(kube_pod_labels{job=\"kube-state-metrics\"},\n \"pod_name\", \"$1\", \"pod\", \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
level=warn ts=2019-01-16T15:05:48.97984908Z caller=manager.go:408 component="rule manager" group=k8s.rules msg="Evaluating rule failed" rule="record: namespace_name:kube_pod_container_resource_requests_cpu_cores:sum\nexpr: sum by(namespace, label_name) (sum by(namespace, pod) (kube_pod_container_resource_requests_cpu_cores{job=\"kube-state-metrics\"}\n and on(pod) kube_pod_status_scheduled{condition=\"true\"}) * on(namespace, pod) group_left(label_name)\n label_replace(kube_pod_labels{job=\"kube-state-metrics\"}, \"pod_name\", \"$1\", \"pod\",\n \"(.*)\"))\n" err="many-to-many matching not allowed: matching labels must be unique on one side"
Did you find a solution, @faheem-cliqz?
We are seeing the exact same issue in our cluster with the following recording rules provided by kubernetes-mixin (using kube-prometheus).
Versions:
{
  "name": "kubernetes-mixin",
  "source": {
    "git": {
      "remote": "https://github.com/kubernetes-monitoring/kubernetes-mixin",
      "subdir": ""
    }
  },
  "version": "ccb787a44f2ebdecbb346d57490fa7e49981b323"
},
k8s logs -l "prometheus=kube-prometheus" -c prometheus | grep "Evaluating rule failed" | gcut -d' ' -f1,2,3,4,5,6,7,8,9 --complement | sort -u | cut -d":" -f3
node_cpu_saturation_load1
node_memory_utilisation
container_cpu_usage_seconds_total
container_memory_usage_bytes
kube_pod_container_resource_requests_cpu_cores
kube_pod_container_resource_requests_memory_bytes
node_cpu_utilisation
node_disk_saturation
node_disk_utilisation
node_memory_bytes_available
node_memory_bytes_total
node_memory_swap_io_bytes
node_net_saturation
node_net_utilisation
node_num_cpu
All of the above rules have the error:
"many-to-many matching not allowed: matching labels must be unique on one side"
Raw Log
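If the duplicates can't easily be removed at the source, one workaround is to collapse the joined side to a single series per pod before matching. A sketch for the container CPU rule, not the official mixin fix, and assuming each pod carries a single label_name value (the original rule assumes this too):

sum by (namespace, label_name) (
  sum by (namespace, pod_name) (
    rate(container_cpu_usage_seconds_total{container_name!="",image!="",job="kubelet"}[5m])
  )
  * on (namespace, pod_name) group_left (label_name)
    # max by (...) collapses series that differ only in labels such as
    # instance, making the "one" side unique again.
    max by (namespace, pod_name, label_name) (
      label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "pod", "(.*)")
    )
)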
I am also experiencing this error on some clusters.
Has anyone found a way to pinpoint which pods are causing the error? Increasing the Prometheus log level to debug doesn't seem to help.
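Prometheus doesn't log which series collided, but they can usually be listed directly with a query. A sketch that names the pods with more than one kube_pod_labels series (assuming the standard kube-state-metrics metric and job label):

# Any result here is a pod whose label series is duplicated,
# i.e. a pod that makes the joins many-to-many.
count by (namespace, pod) (kube_pod_labels{job="kube-state-metrics"}) > 1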
FWIW, my issue was due to Prometheus discovering redundant services in another namespace.
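That matches what the duplicates usually look like: the same exporter scraped through more than one discovered endpoint. Something like the following shows where kube-state-metrics is being scraped from (again a sketch; the service label assumes the usual kube-prometheus relabeling):

# More than one resulting series means multiple endpoints are scraped
# for the same job, which duplicates every kube_pod_labels series.
count by (namespace, service, instance) (up{job="kube-state-metrics"})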
Any updates on this issue?
I'm getting the same issue when deploying "kube-prometheus-stack" version 48.3.1 using Helm on Google Kubernetes Engine (GKE).
I got this warning in Prometheus v2.3.2. I changed the expressions of kube_pod_container_resource_requests_memory_bytes, kube_pod_container_resource_requests_cpu_cores, and node_num_cpu, using ignoring instead of on. I know that the ignoring operator removes the labels listed inside the brackets from the matching. The warning is solved now, but I'm not sure that it's actually working; I'm still testing these alerts. Can someone validate this for me?
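A sketch of what such a change could look like, reconstructed by hand rather than copied from the comment above; treat the exact label list inside ignoring as a guess, since kube_pod_labels may carry other label_* labels that would also need to be listed:

sum by (namespace, label_name) (
  sum by (namespace, pod) (
    kube_pod_container_resource_requests_memory_bytes{job="kube-state-metrics"}
  )
  # ignoring(...) matches on every label EXCEPT the listed ones, so the
  # left side's {namespace, pod} must line up with the right side's.
  * ignoring (job, instance, pod_name, label_name) group_left (label_name)
    label_replace(kube_pod_labels{job="kube-state-metrics"}, "pod_name", "$1", "pod", "(.*)")
)

One caveat, which matches the doubt above: ignoring only changes which labels are compared, so the warning can also disappear because the two sides no longer match at all and the rule silently records nothing. Comparing the rule's output before and after the change is worth doing before trusting it.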