kubernetes-sigs / prometheus-adapter

An implementation of the custom.metrics.k8s.io API using Prometheus

How to resolve failing or missing response / Address is not allowed from custom kubernetes api service? #543

Closed · Johannes1509 closed 1 year ago

Johannes1509 commented 1 year ago

What happened?: The APIService "v1beta1.custom.metrics.k8s.io" never becomes available; its Available condition stays False with the error: failing or missing response from https://192.168.2.20:6443/apis/custom.metrics.k8s.io/v1beta1: Get "https://192.168.2.20:6443/apis/custom.metrics.k8s.io/v1beta1": Address is not allowed

What did you expect to happen?: The APIService connects to the prometheus-adapter service automatically and is marked as Available.

Please provide the prometheus-adapter config:

prometheus-adapter config:

```yaml
apiVersion: v1
data:
  config.yaml: |
    rules:
    - seriesQuery: 'container_threads{namespace!="",pod!=""}'
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        as: "container_threads"
      metricsQuery: 'sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)'
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: adapter-test
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2022-11-29T11:00:35Z"
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: adapter-test
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.2
  name: adapter-test-prometheus-adapter
  namespace: monitoring
  resourceVersion: "644830626"
  uid: 7f7ed83b-854e-4429-9755-827a712fb0d9
```
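For reference, once the APIService is Available, the metric exposed by this rule should be reachable through the aggregated API; the namespace below is only a placeholder:

```shell
# Placeholder query: list container_threads for all pods in the "monitoring"
# namespace via the custom metrics API (fails while the APIService is down)
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/container_threads"
```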

Anything else we need to know?: The config of the APIService looks like this:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    meta.helm.sh/release-name: adapter-test
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2022-11-29T11:00:36Z"
  labels:
    app.kubernetes.io/component: metrics
    app.kubernetes.io/instance: adapter-test
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: prometheus-adapter
    app.kubernetes.io/part-of: prometheus-adapter
    app.kubernetes.io/version: v0.10.0
    helm.sh/chart: prometheus-adapter-3.4.2
  name: v1beta1.custom.metrics.k8s.io
  resourceVersion: "644865077"
  uid: 50e1d07b-5db8-49b0-92d3-af1ec581a096
spec:
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: adapter-test-prometheus-adapter
    namespace: monitoring
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2022-11-29T11:00:36Z"
    message: 'failing or missing response from https://192.168.2.20:6443/apis/custom.metrics.k8s.io/v1beta1: Get "https://192.168.2.20:6443/apis/custom.metrics.k8s.io/v1beta1": Address is not allowed'
    reason: FailedDiscoveryCheck
    status: "False"
    type: Available
```
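For reference, the failing condition can also be read directly with standard kubectl commands (nothing here is specific to this setup):

```shell
# Show the full APIService object, including the status conditions
kubectl get apiservice v1beta1.custom.metrics.k8s.io -o yaml

# Or print only the Available condition's message
kubectl get apiservice v1beta1.custom.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'
```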

Important: On the other hand, I can reach the service "https://192.168.2.20:6443/apis/custom.metrics.k8s.io/v1beta1" via curl from a PC with kubectl access without any problems. For the command

```shell
curl -k --header "Authorization: Bearer <<MYTOKEN>>" "https://172.20.44.186:443/apis/custom.metrics.k8s.io/v1beta1"
```

I get the answer:

```json
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": [
    {
      "name": "namespaces/container_threads",
      "singularName": "",
      "namespaced": false,
      "kind": "MetricValueList",
      "verbs": ["get"]
    },
    {
      "name": "pods/container_threads",
      "singularName": "",
      "namespaced": true,
      "kind": "MetricValueList",
      "verbs": ["get"]
    }
  ]
}
```

Environment:

Does anybody have an idea how to fix this? Which prometheus-adapter logs, and from which time window, should I attach in this case?

olivierlemasle commented 1 year ago

@Johannes1509 You should probably use hostNetwork mode:

Cf. https://github.com/prometheus-community/helm-charts/blob/fd13752dc791672a34298b35517756ae3ffe3c33/charts/prometheus-adapter/values.yaml#L186-L195
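For reference, a minimal sketch of that override, assuming the release and namespace from this issue and the hostNetwork key shown in the linked values.yaml (verify the key against your chart version):

```yaml
# values override: run prometheus-adapter in the host network namespace so the
# kube-apiserver can reach it when pod overlay addresses are not routable from
# the control-plane node
hostNetwork:
  enabled: true
```

Applied, for example, with:

```shell
# "prometheus-community" repo alias and chart name are the usual defaults;
# adjust if your setup differs
helm upgrade adapter-test prometheus-community/prometheus-adapter \
  -n monitoring --reuse-values --set hostNetwork.enabled=true
```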

Johannes1509 commented 1 year ago

This fixed the issue, thanks!