Describe the bug
We have installed the kube-prometheus-stack Helm chart on our Azure East and West region clusters, and the configurations are identical. However, the "Kubernetes API server" charts only show data for the East cluster; on the West cluster they show no data at all.
Below are the errors we found in the "prometheus" container logs on the West cluster, where the charts are not working:
ts=2023-06-15T05:29:15.423Z caller=klog.go:116 level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.25.1/tools/cache/reflector.go:169: Failed to watch *v1.Pod: Get \"https://XXX.XXXXXXX:443/api/v1/namespaces/prom/pods?allowWatchBookmarks=true&resourceVersion=126990483&timeout=9m18s&timeoutSeconds=558&watch=true\"
Basically, this one shows "Failed to watch *v1.Pod".
Below are the errors we found in the "prometheus" container logs on the East cluster, where the charts are working:
level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.25.1/tools/cache/reflector.go:169: Failed to watch v1.Pod: Get \"https://XXXXX:443/api/v1/namespaces/default/pods?allowWatchBookmarks=true&resourceVersion=360103230&timeout=5m25s&timeoutSeconds=325&watch=true\": context canceled"
ts=2023-05-17T15:15:31.122Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-05-17T15:15:31.122Z caller=kubernetes.go:326 level=info component="discovery manager scrape" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-05-17T15:15:31.122Z caller=klog.go:124 level=error component=k8s_client_runtime func=Errorf msg="Unexpected error when reading response body: context canceled"
ts=2023-05-17T15:15:31.122Z caller=kubernetes.go:326 level=info component="discovery manager notify" discovery=kubernetes msg="Using pod service account via in-cluster config"
ts=2023-05-17T15:15:31.123Z caller=klog.go:116 level=error component=k8s_client_runtime func=ErrorDepth msg="pkg/mod/k8s.io/client-go@v0.25.1/tools/cache/reflector.go:169: Failed to watch v1.Service: Get \"https://XXXXX:443/api/v1/namespaces/prom/services?allowWatchBookmarks=true&resourceVersion=360097750&timeout=8m15s&timeoutSeconds=495&watch=true\": context canceled"
ts=2023-05-17T15:15:31.182Z caller=main.go:1221
Basically, this one shows "Failed to watch v1.Pod" and "Failed to watch v1.Service".
So why are the charts not getting populated? Please share some insights.
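One way to narrow this down on the failing West cluster is to check the scrape target itself rather than the Grafana panels. A diagnostic sketch (the `prom` namespace comes from the log URLs above; `prometheus-operated` is the headless service the Prometheus Operator creates):

```shell
# The chart's apiserver ServiceMonitor scrapes the default/kubernetes
# Endpoints object; confirm it exists and has addresses on this cluster.
kubectl get endpoints kubernetes -n default

# Inspect Prometheus's own view of its targets without going through
# Grafana: port-forward the Prometheus service, then query the targets API
# and look for the apiserver job and its health/lastError fields.
kubectl port-forward -n prom svc/prometheus-operated 9090:9090 &
curl -s 'http://localhost:9090/api/v1/targets?state=active' | grep apiserver
```

If the apiserver job is missing or reports a scrape error there, the problem is in discovery or connectivity on the West cluster rather than in the dashboards.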
What's your helm version?
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.19.2"}
What's your kubectl version?
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.4", GitCommit:"872a965c6c6526caa949f0c6ac028ef7aff3fb78", GitTreeState:"clean", BuildDate:"2022-11-09T13:28:30Z", GoVersion:"go1.19.3", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"8ca0b02ea721e1631a58bd9c59073608b8b632a5", GitTreeState:"clean", BuildDate:"2023-05-19T03:29:42Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Which chart?
kube-prometheus-stack
What's the chart version?
42.3.0
What happened?
The kubernetes-api-server charts are not getting populated.
What you expected to happen?
The kubernetes-api-server charts should show data in Grafana.
How to reproduce it?
NA
Enter the changed values of values.yaml?
No changes
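Since no values were changed, the chart's default API-server scrape settings apply. For reference, a sketch of the relevant section of values.yaml (field names as used by the kube-prometheus-stack chart; exact defaults may differ for chart version 42.3.0, so check the chart's own values.yaml):

```yaml
# Approximate default kubeApiServer section in kube-prometheus-stack values.yaml
kubeApiServer:
  enabled: true
  tlsConfig:
    serverName: kubernetes
    insecureSkipVerify: false
  serviceMonitor:
    # Selects the default/kubernetes Service exposing the API server
    selector:
      matchLabels:
        component: apiserver
        provider: kubernetes
```

If these defaults are in effect on both clusters, any East/West difference must come from the clusters themselves (networking, RBAC, or the apiserver Service labels) rather than from the chart values.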
Enter the command that you execute and failing/misfunctioning.
No command to mention here.
Anything else we need to know?
NA