kubernetes-sigs / prometheus-adapter

An implementation of the custom.metrics.k8s.io API using Prometheus
Apache License 2.0

[Help] apiservices v1.custom.metrics.k8s.io for prometheus-adapter is not available #694

Open cscowx opened 1 month ago

cscowx commented 1 month ago

The software environment is as follows:

Kubernetes version: 1.32.3
Installation method: binary deployment
Host OS: Ubuntu 22.04
CNI and version: Calico v3.29.3
CRI and version: containerd://2.0.4
haproxy/VIP: 192.168.110.208 (k8s-lb-vip.k8s.cluster)

1. Deploy metrics-server (working normally)

metrics-server installed fine, and kubectl top node returns output as expected.

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml -O metrics-server-ha.yaml

kubectl apply -f metrics-server-ha.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
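As a quick sanity check (assuming the defaults from the HA manifest above), the resource-metrics APIService and node metrics can be verified with something like:

# the metrics-server APIService should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io
# node CPU/memory usage should be returned
kubectl top nodes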

2. Deploy prometheus-adapter

wget -P /download/ https://github.com/prometheus-community/helm-charts/releases/download/prometheus-adapter-4.14.1/prometheus-adapter-4.14.1.tgz

cd /download
tar -zxvf prometheus-adapter-4.14.1.tgz

cd /download/prometheus-adapter

helm template myrelease /download/prometheus-adapter --output-dir /download/prometheus-adapter/output
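If Prometheus is not running at the chart's default address, the target URL can be overridden at templating time. The values below are only illustrative (they assume a Prometheus service named prometheus in the monitoring namespace), not the configuration actually used here:

helm template myrelease /download/prometheus-adapter \
  --set prometheus.url=http://prometheus.monitoring.svc \
  --set prometheus.port=9090 \
  --output-dir /download/prometheus-adapter/output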

ls /download/prometheus-adapter/output/prometheus-adapter/templates

cluster-role-binding-auth-delegator.yaml
configmap.yaml
custom-metrics-cluster-role.yaml
service.yaml
cluster-role-binding-resource-reader.yaml
custom-metrics-apiservice.yaml
deployment.yaml
serviceaccount.yaml
cluster-role-resource-reader.yaml
custom-metrics-cluster-role-binding-hpa.yaml
role-binding-auth-reader.yaml

I modified the deployment.yaml file as follows:

args:

PS: All other .yaml files are the defaults and have not been changed.
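For reference, the args block of the prometheus-adapter Deployment usually looks something like the sketch below; the values are illustrative (in particular, --prometheus-url must point at the real Prometheus service) and are not necessarily the exact change made here:

args:
  - --cert-dir=/tmp/cert
  - --secure-port=6443
  - --prometheus-url=http://prometheus.monitoring.svc:9090
  - --metrics-relist-interval=1m
  - --v=4
  - --config=/etc/adapter/config.yaml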

kubectl create -f /download/prometheus-adapter/output/prometheus-adapter/templates/

The APIService reports a 404 error:
Message: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404

kubectl describe apiservices v1.custom.metrics.k8s.io

Name:         v1.custom.metrics.k8s.io
Namespace:
Labels:       app.kubernetes.io/component=metrics
              app.kubernetes.io/instance=myrelease
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=prometheus-adapter
              app.kubernetes.io/part-of=prometheus-adapter
              app.kubernetes.io/version=v0.12.0
              helm.sh/chart=prometheus-adapter-4.14.1
Annotations:
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2025-04-10T06:54:15Z
  Resource Version:    1104493
  UID:                 82a33cfb-118f-436b-8a6c-da87a55c0ca7
Spec:
  Group:                     custom.metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:       myrelease-prometheus-adapter
    Namespace:  monitoring
    Port:       443
  Version:           v1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2025-04-10T06:54:15Z
    Message:               failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:

journalctl -f -u kube-apiserver

Apr 10 16:20:34 k8s-master-01 kube-apiserver[1389104]: E0410 16:20:34.252015 1389104 remote_available_controller.go:448] "Unhandled Error" err="v1.custom.metrics.k8s.io failed with: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404" logger="UnhandledError"
Apr 10 16:20:35 k8s-master-01 kube-apiserver[1389104]: I0410 16:20:35.834092 1389104 apf_controller.go:493] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=1 seatDemandAvg=0.0065918790569620325 seatDemandStdev=0.08092234665072694 seatDemandSmoothed=0.10054696971198927 fairFrac=2.330357142857143 currentCL=1 concurrencyDenominator=1 backstop=false
Apr 10 16:20:42 k8s-master-01 kube-apiserver[1389104]: E0410 16:20:42.752875 1389104 remote_available_controller.go:448] "Unhandled Error" err="v1.custom.metrics.k8s.io failed with: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404" logger="UnhandledError"
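For reference, what the adapter itself is serving can also be checked directly; the resource names below assume the myrelease release in the monitoring namespace shown above:

# adapter logs may show why discovery fails
kubectl -n monitoring logs deploy/myrelease-prometheus-adapter
# query the aggregated custom metrics API through the kube-apiserver
kubectl get --raw /apis/custom.metrics.k8s.io/v1
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1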

What is the reason? Please help me take a look. Thank you.

k8s-ci-robot commented 1 month ago

This issue is currently awaiting triage.

If prometheus-adapter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
cscowx commented 1 month ago

My kube-apiserver.conf configuration is as follows:

root@k8s-master-01:/opt/kubernetes/cfg# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--v=2 \
  --etcd-servers=https://etcd-01.etcd.cluster:2379,https://etcd-02.etcd.cluster:2379,https://etcd-03.etcd.cluster:2379 \
  --api-audiences=api \
  --etcd-cafile=/ssl/demoCA/newcerts/etcd-ca.crt \
  --bind-address=192.168.110.210 \
  --secure-port=6443 \
  --advertise-address=192.168.110.210 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.96.0.0/12 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
  --anonymous-auth=false \
  --authorization-mode=RBAC,Node \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-32767 \
  --kubelet-client-certificate=/ssl/server/certs/kube-apiserver.crt \
  --kubelet-client-key=/ssl/server/certs/kube-apiserver.key \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --tls-cert-file=/ssl/server/certs/kube-apiserver.crt \
  --tls-private-key-file=/ssl/server/certs/kube-apiserver.key \
  --client-ca-file=/ssl/demoCA/newcerts/k8s-ca.crt \
  --service-account-key-file=/ssl/server/certs/kube-apiserver.crt \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/ssl/server/certs/kube-apiserver.key \
  --enable-bootstrap-token-auth=true \
  --requestheader-client-ca-file=/ssl/demoCA/newcerts/k8s-ca.crt \
  --proxy-client-cert-file=/ssl/server/certs/kube-apiserver.crt \
  --proxy-client-key-file=/ssl/server/certs/kube-apiserver.key \
  --requestheader-allowed-names=aggregator \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
  --audit-log-mode=batch \
  --audit-log-maxage=365 \
  --audit-log-maxbackup=365 \
  --audit-log-maxsize=50 \
  --audit-log-path=/opt/kubernetes/logs/audit.log \
  --audit-log-batch-buffer-size=20000 \
  --audit-log-batch-max-size=100 \
  --audit-log-batch-max-wait=5s \
  --audit-log-batch-throttle-enable \
  --audit-log-batch-throttle-qps=10 \
  --audit-log-batch-throttle-burst=15 \
  --max-mutating-requests-inflight=500 \
  --max-requests-inflight=800"
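Since the aggregation layer relies on the requestheader-* flags above, the configuration the kube-apiserver publishes for extension API servers can also be inspected (this is a standard ConfigMap maintained by the kube-apiserver):

kubectl -n kube-system get configmap extension-apiserver-authentication -o yaml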