helm / charts

⚠️ (OBSOLETE) Curated applications for Kubernetes

[stable/prometheus-operator] Error in k8s.rules mixin_pod_workload #19318

Closed: esteban1983cl closed this issue 4 years ago

esteban1983cl commented 4 years ago

Describe the bug: The Prometheus query behind the k8s.rules mixin_pod_workload recording rule returns an error.

Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}

Which chart: stable/prometheus-operator

What happened: The Grafana dashboard "Kubernetes / Compute Resources / Workload" is broken. When I open it, no deployment workload information is shown.

What you expected to happen: The dashboard should show the deployment workload information.

How to reproduce it (as minimally and precisely as possible): Open the Grafana dashboard mentioned above. My prometheus-operator deployment runs multiple replicas for all of its components (see the pod listing below and the values sketch that follows it).

NAME                                                      READY   STATUS      RESTARTS   AGE
alertmanager-prometheus-operator-alertmanager-0           2/2     Running     0          40m
alertmanager-prometheus-operator-alertmanager-1           2/2     Running     0          40m
cost-analyzer-checks-1575340800-ptm29                     0/1     Completed   0          20m
cost-analyzer-checks-1575341400-xwfxj                     0/1     Completed   0          10m
cost-analyzer-checks-1575342000-wjbtq                     0/1     Completed   0          36s
kube-ops-view-6575764f8-5hldk                             1/1     Running     0          10h
kube-ops-view-6575764f8-pfxqx                             1/1     Running     0          4h19m
kube-ops-view-6575764f8-snt6x                             1/1     Running     0          12h
kubecost-cost-analyzer-7ccfd56c7b-drxcm                   3/3     Running     0          14h
prometheus-blackbox-exporter-76d55d5769-6frz7             2/2     Running     0          3h19m
prometheus-blackbox-exporter-76d55d5769-vb4gh             2/2     Running     0          10h
prometheus-cloudwatch-exporter-576b57775-8wqxj            1/1     Running     0          20h
prometheus-cloudwatch-exporter-576b57775-l66wp            1/1     Running     0          3h19m
prometheus-msteams-fbbd7cfd5-64pl2                        1/1     Running     0          3h19m
prometheus-operator-grafana-69c5fb68d5-csbsv              2/2     Running     0          40m
prometheus-operator-grafana-69c5fb68d5-g5ml8              2/2     Running     0          40m
prometheus-operator-grafana-69c5fb68d5-tqgpw              2/2     Running     0          40m
prometheus-operator-kube-state-metrics-764f64bc74-dxgkw   1/1     Running     0          40m
prometheus-operator-kube-state-metrics-764f64bc74-l9j22   1/1     Running     0          40m
prometheus-operator-operator-85bfdb9dc-h229v              1/1     Running     0          40m
prometheus-operator-prometheus-node-exporter-44zww        1/1     Running     0          40m
prometheus-operator-prometheus-node-exporter-4gpwv        1/1     Running     0          39m
prometheus-operator-prometheus-node-exporter-cbdnv        1/1     Running     0          38m
prometheus-operator-prometheus-node-exporter-cfsqz        1/1     Running     0          40m
prometheus-operator-prometheus-node-exporter-dnbsz        1/1     Running     0          39m
prometheus-operator-prometheus-node-exporter-hlb5z        1/1     Running     0          39m
prometheus-operator-prometheus-node-exporter-lc6zv        1/1     Running     0          38m
prometheus-operator-prometheus-node-exporter-mhrbz        1/1     Running     0          20m
prometheus-operator-prometheus-node-exporter-mmn4w        1/1     Running     0          39m
prometheus-operator-prometheus-node-exporter-nt2mr        1/1     Running     0          38m
prometheus-operator-prometheus-node-exporter-r755l        1/1     Running     0          38m
prometheus-operator-prometheus-node-exporter-sk9wj        1/1     Running     0          38m
prometheus-operator-prometheus-node-exporter-vdrkj        1/1     Running     0          39m
prometheus-prometheus-operator-prometheus-0               3/3     Running     0          20m
prometheus-prometheus-operator-prometheus-1               3/3     Running     0          30m
prometheus-statsd-58c5487c57-bcdnx                        1/1     Running     0          10h
prometheus-statsd-58c5487c57-p2w7f                        1/1     Running     0          3h19m
prometheus-statsd-58c5487c57-vdppx                        1/1     Running     0          21h
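
Note that kube-state-metrics itself runs with two replicas (the ...-dxgkw and ...-l9j22 pods above); both export the same kube_replicaset_owner series, differing only in the exporter's pod/instance target labels. A minimal sketch of how that HA setup is typically expressed in the chart values, assuming the kube-state-metrics subchart exposes a replicas setting (the key names here are an assumption, not copied from my actual values):

# values.yaml (hypothetical excerpt)
kube-state-metrics:
  replicas: 2   # two exporter pods => every kube-state-metrics series exists twice, once per pod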

Anything else we need to know:

In Prometheus I found the expression that is returning the error:

sum(
  label_replace(
    label_replace(
      kube_pod_owner{job="kube-state-metrics", owner_kind="ReplicaSet"},
      "replicaset", "$1", "owner_name", "(.*)"
    ) * on(replicaset, namespace) group_left(owner_name) kube_replicaset_owner{job="kube-state-metrics"},
    "workload", "$1", "owner_name", "(.*)"
  )
) by (namespace, workload, pod)

Error message

Error executing query: found duplicate series for the match group {namespace="airflow", replicaset="airflow-flower-64b496579c"} on the right hand-side of the operation: [{__name__="kube_replicaset_owner", endpoint="http", instance="xxx.xxx.xxx.xxx:8080", job="kube-state-metrics", namespace="airflow", owner_is_controller="true", owner_kind="Deployment", owner_name="airflow-flower", pod="prometheus-operator-kube-state-metrics-764f64bc74-l9j22", replicaset="airflow-flower-64b496579c", service="prometheus-operator-kube-state-metrics"}, {__name__="kube_replicaset_owner", endpoint="http", instance="xxx.xxx.xxx.xxx:8080", job="kube-state-metrics", namespace="airflow", owner_is_controller="true", owner_kind="Deployment", owner_name="airflow-flower", pod="prometheus-operator-kube-state-metrics-764f64bc74-dxgkw", replicaset="airflow-flower-64b496579c", service="prometheus-operator-kube-state-metrics"}];many-to-many matching not allowed: matching labels must be unique on one side
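
The two conflicting series differ only in the pod and instance labels of the exporter, i.e. there is one copy of kube_replicaset_owner per kube-state-metrics replica. A quick way to confirm that the right-hand side of the join is not unique per (replicaset, namespace) is a diagnostic query like the following (something to run in the Prometheus UI, not part of the chart):

# returns a result for every (namespace, replicaset) that is exported more than once
count by (namespace, replicaset) (
  kube_replicaset_owner{job="kube-state-metrics"}
) > 1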

The affected rule is defined here: https://github.com/helm/charts/blob/43f8f95b6d6ead8732390ca33cca9cbffb802f11/stable/prometheus-operator/templates/prometheus/rules/k8s.rules.yaml#L47
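
A possible fix would be to deduplicate kube_replicaset_owner before the join so that only one series per (replicaset, namespace) survives. A sketch of what the rule could look like, along the lines of what the upstream kubernetes-mixin project later adopted (treat the exact aggregation as an assumption, not the patched rule verbatim):

sum(
  label_replace(
    label_replace(
      kube_pod_owner{job="kube-state-metrics", owner_kind="ReplicaSet"},
      "replicaset", "$1", "owner_name", "(.*)"
    ) * on(replicaset, namespace) group_left(owner_name)
      # keep exactly one kube_replicaset_owner series per (replicaset, namespace)
      topk by(replicaset, namespace) (
        1, max by(replicaset, namespace, owner_name) (
          kube_replicaset_owner{job="kube-state-metrics"}
        )
      ),
    "workload", "$1", "owner_name", "(.*)"
  )
) by (namespace, workload, pod)

The max/topk aggregation drops the exporter's pod and instance labels, so the duplicate series collapse into one and the many-to-many error should go away.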

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 4 years ago

This issue is being automatically closed due to inactivity.

jeanlucmongrain commented 4 years ago

Closing the issue just because of inactivity doesn't fix the issue.