kubernetes-sigs / prometheus-adapter

An implementation of the custom.metrics.k8s.io API using Prometheus
Apache License 2.0

No Namespace Queries not working #530

Closed aroelo closed 1 year ago

aroelo commented 2 years ago

I'm trying to use an external metric to scale a workload using the HPA, but the external metric is in a different namespace than the workload.

I have set up the external rule as described in the docs here: https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/externalmetrics.md#namespacing

In my case like this:

    - seriesQuery: 'queueSize{queue=~"my_queue_.*"}'
      resources:
        namespaced: false
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>, queue=~"my_queue_.*"}) by (<<.GroupBy>>)'

However, when I list the metrics, the metric is still reported as namespaced. This is the output of kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1":

    {
      "name": "queueSize",
      "singularName": "",
      "namespaced": true,
      "kind": "ExternalMetricValueList",
      "verbs": [
        "get"
      ]
    } 

I'm using the latest release (v0.10.0) of the Prometheus adapter.

Could anyone help me figure out why this keeps using the namespace in the query? Am I missing something?
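For reference, here is a minimal sketch of the kind of HPA I mean (all names, namespaces, and the queue label value are placeholders, not taken from my actual setup):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-worker            # placeholder workload name
      namespace: workload-ns     # the workload's namespace, not the namespace the metric comes from
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-worker
      minReplicas: 1
      maxReplicas: 10
      metrics:
        - type: External
          external:
            metric:
              name: queueSize
              selector:
                matchLabels:
                  queue: my_queue_1   # placeholder; matchLabels cannot express the regex used in the rule
            target:
              type: Value
              value: "1"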

shubham-bansal96 commented 2 years ago

I am facing the same issue. Below is my configuration for the Prometheus adapter.

    rules:
      default: false
      external:
      - seriesQuery: '{__name__="http_requests_total",path!="",job="router"}'
        metricsQuery: '(sum (rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by(path))'
        resources:
          namespaced: false

Output

    kubectl get --raw /apis/external.metrics.k8s.io/v1beta1

    {
      "kind": "APIResourceList",
      "apiVersion": "v1",
      "groupVersion": "external.metrics.k8s.io/v1beta1",
      "resources": [
        {
          "name": "http_requests_total",
          "singularName": "",
          "namespaced": true,
          "kind": "ExternalMetricValueList",
          "verbs": [
            "get"
          ]
        }
      ]
    }

The series looks like this: (screenshot from 2022-09-13 omitted)
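As a sanity check, the metric can also be queried directly through the external metrics API; the namespaces below (default and some-other-namespace) are only placeholders, and the point is to see whether the returned value changes with the namespace segment of the URL:

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/default/http_requests_total" | jq
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/some-other-namespace/http_requests_total" | jq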

Joibel commented 2 years ago

I have the same issue with the resource being reported as namespaced. However, I am successfully querying that metric from an HPA in a different namespace.

    - type: External
      external:
        metric:
          name: queueSize
          selector:
            matchLabels:
              somelabel: somevalue
        target:
          type: Value
          value: 1

So I think it is only the metric's listing that is misleading.

Joibel commented 2 years ago

Actually, in my case at least, I can issue the query to prom adapter with any namespace (even a non-existent one), and it'll return the correct value for that metric.

jhwbarlow commented 1 year ago

From my interpretation of the docs here, this is because k8s requires the resource to be namespaced, so it cannot report "namespaced": false.

However, that does not mean the namespace is included in the query labels - the namespace label is excluded if you set the following on the external rule.

      resources:
        namespaced: false

Because of this, the Prometheus Adapter will return the same external metrics no matter what namespace you specify (it just ignores the namespace, as @Joibel says).

For example, the following two commands return the same metrics, even though the namespaces foo and bar do not even exist:

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/foo/confluent_kafka_server_consumer_lag_offsets" | jq
    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/bar/confluent_kafka_server_consumer_lag_offsets" | jq
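To make that concrete, here is a rough sketch (not actual adapter output) of the PromQL that the http_requests_total rule posted above would expand to, assuming the HPA selector somelabel=somevalue from @Joibel's example and the adapter's default namespace label name:

    # Default (namespaced: true): <<.LabelMatchers>> includes a namespace matcher
    # taken from the namespace in the request URL:
    #   (sum (rate(http_requests_total{somelabel="somevalue",namespace="foo"}[2m])) by(path))
    #
    # With namespaced: false the namespace matcher is omitted, so every namespace
    # in the URL produces the same query (and therefore the same result):
    #   (sum (rate(http_requests_total{somelabel="somevalue"}[2m])) by(path))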

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes-sigs/prometheus-adapter/issues/530#issuecomment-1510419373):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.