cscowx opened this issue 1 month ago
This issue is currently awaiting triage.
If prometheus-adapter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
My kube-apiserver.conf configuration is as follows:
root@k8s-master-01:/opt/kubernetes/cfg# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--v=2 \
  --etcd-servers=https://etcd-01.etcd.cluster:2379,https://etcd-02.etcd.cluster:2379,https://etcd-03.etcd.cluster:2379 \
  --api-audiences=api \
  --etcd-cafile=/ssl/demoCA/newcerts/etcd-ca.crt \
  --bind-address=192.168.110.210 \
  --secure-port=6443 \
  --advertise-address=192.168.110.210 \
  --allow-privileged=true \
  --service-cluster-ip-range=10.96.0.0/12 \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \
  --anonymous-auth=false \
  --authorization-mode=RBAC,Node \
  --token-auth-file=/opt/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-32767 \
  --kubelet-client-certificate=/ssl/server/certs/kube-apiserver.crt \
  --kubelet-client-key=/ssl/server/certs/kube-apiserver.key \
  --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \
  --tls-cert-file=/ssl/server/certs/kube-apiserver.crt \
  --tls-private-key-file=/ssl/server/certs/kube-apiserver.key \
  --client-ca-file=/ssl/demoCA/newcerts/k8s-ca.crt \
  --service-account-key-file=/ssl/server/certs/kube-apiserver.crt \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/ssl/server/certs/kube-apiserver.key \
  --enable-bootstrap-token-auth=true \
  --requestheader-client-ca-file=/ssl/demoCA/newcerts/k8s-ca.crt \
  --proxy-client-cert-file=/ssl/server/certs/kube-apiserver.crt \
  --proxy-client-key-file=/ssl/server/certs/kube-apiserver.key \
  --requestheader-allowed-names=aggregator \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
  --audit-log-mode=batch \
  --audit-log-maxage=365 \
  --audit-log-maxbackup=365 \
  --audit-log-maxsize=50 \
  --audit-log-path=/opt/kubernetes/logs/audit.log \
  --audit-log-batch-buffer-size=20000 \
  --audit-log-batch-max-size=100 \
  --audit-log-batch-max-wait=5s \
  --audit-log-batch-throttle-enable \
  --audit-log-batch-throttle-qps=10 \
  --audit-log-batch-throttle-burst=15 \
  --max-mutating-requests-inflight=500 \
  --max-requests-inflight=800"
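For completeness, here is a quick way to confirm the aggregation-layer flags above were picked up; this is just a sketch, and the ConfigMap name is the standard one kube-apiserver publishes for extension API servers:

```sh
# kube-apiserver publishes the client-ca and requestheader-* settings into this
# ConfigMap, which aggregated API servers (metrics-server, prometheus-adapter) read.
kubectl -n kube-system get configmap extension-apiserver-authentication -o yaml \
  | grep -E 'client-ca-file|requestheader' | head
```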
The software environment is as follows:
Kubernetes version: 1.32.3
Installation method: binary deployment
Host OS: Ubuntu 22.04
CNI and version: Calico v3.29.3
CRI and version: containerd://2.0.4
haproxy/VIP: 192.168.110.208 (k8s-lb-vip.k8s.cluster)
1. Deploy metrics-server (working normally)
metrics-server installed fine, and `kubectl top node` returns data as expected (a quick check is included after the manifest below).
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability-1.21+.yaml -O metrics-server-ha.yaml
kubectl apply -f metrics-server-ha.yaml
The contents of metrics-server-ha.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
      volumes:
        - name: ca-certificates
          hostPath:
            path: /ssl
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
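To show that the resource-metrics pipeline is healthy before adding the adapter, these are the standard checks (nothing here is specific to my setup):

```sh
# The metrics.k8s.io APIService should be Available=True and `kubectl top` should return data.
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top node

# Discovery through the aggregator also works:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | head -c 300
```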
2. Deploy prometheus-adapter
wget -P /download/ https://github.com/prometheus-community/helm-charts/releases/download/prometheus-adapter-4.14.1/prometheus-adapter-4.14.1.tgz
cd /download
tar -zxvf prometheus-adapter-4.14.1.tgz
cd /download/prometheus-adapter
helm template myrelease /download/prometheus-adapter --output-dir /download/prometheus-adapter/output
ls /download/prometheus-adapter/output/prometheus-adapter/templates
cluster-role-binding-auth-delegator.yaml
cluster-role-binding-resource-reader.yaml
cluster-role-resource-reader.yaml
configmap.yaml
custom-metrics-apiservice.yaml
custom-metrics-cluster-role-binding-hpa.yaml
custom-metrics-cluster-role.yaml
deployment.yaml
role-binding-auth-reader.yaml
service.yaml
serviceaccount.yaml
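For reference, a render that pins the target namespace and the Prometheus endpoint explicitly would look roughly like this; `prometheus.url` and `prometheus.port` are standard values of this chart, and the namespace/URL below are placeholders for illustration rather than my exact values:

```sh
helm template myrelease /download/prometheus-adapter \
  --namespace monitoring \
  --set prometheus.url=http://prometheus.monitoring.svc \
  --set prometheus.port=9090 \
  --output-dir /download/prometheus-adapter/output
```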
I modified the deployment.yaml file as follows:
args:
**PS: All the other .yaml files are the chart defaults and have not been changed.**
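For context, the args block the chart renders by default looks roughly like the following; these are the adapter's standard flags, and the values shown are illustrative chart defaults, not the exact modification I made:

```yaml
# Illustrative defaults only, not my actual values.
args:
  - /adapter
  - --secure-port=6443
  - --cert-dir=/tmp/cert
  - --prometheus-url=http://prometheus.monitoring.svc:9090
  - --metrics-relist-interval=1m
  - --config=/etc/adapter/config.yaml
```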
kubectl create -f /download/prometheus-adapter/output/prometheus-adapter/templates/
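Right after creating the rendered templates, these are the first checks I run; resource names assume the chart defaults for release myrelease:

```sh
kubectl -n monitoring get pods,svc -l app.kubernetes.io/name=prometheus-adapter
kubectl get apiservice v1.custom.metrics.k8s.io
```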
The APIService shows a 404 error. Message: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404
kubectl describe apiservices v1.custom.metrics.k8s.io
Name:         v1.custom.metrics.k8s.io
Namespace:
Labels:       app.kubernetes.io/component=metrics
              app.kubernetes.io/instance=myrelease
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=prometheus-adapter
              app.kubernetes.io/part-of=prometheus-adapter
              app.kubernetes.io/version=v0.12.0
              helm.sh/chart=prometheus-adapter-4.14.1
Annotations:
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2025-04-10T06:54:15Z
  Resource Version:    1104493
  UID:                 82a33cfb-118f-436b-8a6c-da87a55c0ca7
Spec:
  Group:                     custom.metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:            myrelease-prometheus-adapter
    Namespace:       monitoring
    Port:            443
  Version:           v1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2025-04-10T06:54:15Z
    Message:               failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:
journalctl -f -u kube-apiserver
Apr 10 16:20:34 k8s-master-01 kube-apiserver[1389104]: E0410 16:20:34.252015 1389104 remote_available_controller.go:448] "Unhandled Error" err="v1.custom.metrics.k8s.io failed with: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404" logger="UnhandledError"
Apr 10 16:20:35 k8s-master-01 kube-apiserver[1389104]: I0410 16:20:35.834092 1389104 apf_controller.go:493] "Update CurrentCL" plName="exempt" seatDemandHighWatermark=1 seatDemandAvg=0.0065918790569620325 seatDemandStdev=0.08092234665072694 seatDemandSmoothed=0.10054696971198927 fairFrac=2.330357142857143 currentCL=1 concurrencyDenominator=1 backstop=false
Apr 10 16:20:42 k8s-master-01 kube-apiserver[1389104]: E0410 16:20:42.752875 1389104 remote_available_controller.go:448] "Unhandled Error" err="v1.custom.metrics.k8s.io failed with: failing or missing response from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: bad status from https://10.244.134.39:6443/apis/custom.metrics.k8s.io/v1: 404" logger="UnhandledError"
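Since the aggregator is probing the adapter pod directly, these are additional checks I can run from the cluster; resource names assume the chart defaults for release myrelease:

```sh
# Reproduce the aggregator's discovery probe with proper credentials:
kubectl get --raw /apis/custom.metrics.k8s.io/v1

# Check the adapter's own logs and that its Service has endpoints:
kubectl -n monitoring logs deploy/myrelease-prometheus-adapter --tail=50
kubectl -n monitoring get endpoints myrelease-prometheus-adapter

# List every APIService registered for the custom metrics group:
kubectl get apiservices | grep custom.metrics
```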
What could the cause be? Please help me take a look. Thank you.