Hi @andrey-gava, try to add

kube-state-metrics:
  releaseLabel: true

to your chart values.
It's already defined inside the default values and set to true. https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/values.yaml#L1322
Can confirm the same issue here, also on 30.0.1.
My values.yaml:

prometheus:
  prometheusSpec:
    retention: 3d
    replicas: 1
    podAntiAffinity: "hard"
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    resources:
      limits:
        cpu: "1"
        memory: 2Gi
      requests:
        cpu: 100m
        memory: 1Gi
nodeExporter:
  enabled: true
kubeStateMetrics:
  enabled: true
kube-state-metrics:
  releaseLabel: true
  selfMonitor:
    enabled: false
I'm facing the same issue. I've added the mentioned option:
diff --git a/deploy/releases/releases.yaml b/deploy/releases/releases.yaml
index 8a1d217..e4d490d 100644
--- a/deploy/releases/releases.yaml
+++ b/deploy/releases/releases.yaml
@@ -163,6 +163,7 @@ releases:
         kube-state-metrics:
           podSecurityPolicy:
             enabled: true
+          releaseLabel: true
         prometheus:
           prometheusSpec:
             podMonitorSelector:
But the deployment output didn't reflect the change, and I'm still missing the metrics.
I'm using the version 30.1.0 of this chart.
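One way to check whether the override actually reached the release (a sketch; the release name and namespace are placeholders, adjust to your install):

helm get values <release-name> -n <namespace>          # user-supplied values only
helm get values <release-name> -n <namespace> --all    # merged with chart defaults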
Hi friends, I'm facing a similar situation (missing default datasource in Grafana).
I found it's not a chart bug: the Grafana datasource container scrapes data via SSL, so editing the SSL setting solves the problem. #1532
My Grafana datasource container logs (screenshot):
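In case it helps others, a minimal sketch of what "editing the SSL setting" can look like in the chart values, assuming the problem is a self-signed certificate on the scraped endpoint (grafana.additionalDataSources is a regular chart value; the datasource name and URL here are placeholders):

grafana:
  additionalDataSources:
    - name: Prometheus-TLS
      type: prometheus
      url: https://prometheus-operated:9090
      jsonData:
        tlsSkipVerify: true   # assumption: self-signed cert; prefer a proper CA cert in production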
In my case I found what the problem was. When I disabled defaultRules, I thought they were only for Alertmanager, but they also create new metric names from nested queries. Now that I have set it back to true, everything works.

defaultRules:
  create: true
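For context on why this matters: many of the bundled dashboards query pre-computed (recorded) series rather than raw cAdvisor metrics. A CPU panel query has roughly this shape (a sketch; the exact expressions live in the chart's dashboard JSON):

sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate{namespace="$namespace"}) by (pod)

With defaultRules.create: false, the rule that records that series never exists, so the panel stays empty even though the raw container_cpu_usage_seconds_total data is still being scraped.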
I'm not sure if people are reporting about the same version.
Using the Helm chart, version 30.1.0, CPU metrics do not appear in Grafana by default.
This is on a single k3s node, k8s v1.22.
Adding kube-state-metrics.releaseLabel: true or defaultRules.create: true to your values.yaml makes no difference; those are set by default anyway.
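One way to confirm those defaults really are in effect on the cluster (a sketch; the monitoring namespace is an assumption, adjust to your install):

# kube-state-metrics pods should carry the release label the ServiceMonitor selects on
kubectl get pods -n monitoring -l app.kubernetes.io/name=kube-state-metrics --show-labels

# the default recording/alerting rules should be rendered as PrometheusRule objects
kubectl get prometheusrules -n monitoring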
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Any news on this? I'm facing the same issue.
This is still the case:
K8s Version: v1.22.6+k3s1 Chart Version: 34.1.1
I also have the same problem. Strangely, the upgrade from 23.1.1 to 34.8.0 went smoothly, but from 30.1.0 to 34.8.0 apparently not, and now I'm missing metrics in Grafana and Lens. What's more, removing and installing the chart again reproduces the same problem regardless of the version (removing together with the CRDs, as stated in the installation manual). It looks like something remains on k8s after removal and is not upgraded by the Helm installation.
K8s Version: v1.21.5-eks-bc4871b
EDIT: I found that after uninstalling the Helm chart, it leaves a service object related to the kubelet, service/prometheus-stack-kube-prom-kubelet, in the kube-system namespace.
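If you want to remove that leftover by hand, a sketch based on the object named above (verify the exact name in your cluster first, since it is derived from the release name):

kubectl delete service prometheus-stack-kube-prom-kubelet -n kube-system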
Help
I'm using chart version kube-prometheus-stack-35.2.0. The Prometheus queries show node data like CPU and memory, but none of the container metrics like container_memory_usage_bytes are showing up. Does anyone know if there is a setting in the Helm values.yaml that would prevent these metrics from showing up, or if this is a bug?
Update: I went back to the prometheus-community/prometheus Helm chart 15.10.1, installed that one, and I can see all the container metrics from Prometheus/Thanos in Grafana, so I'm not sure what setting in the kube-prometheus-stack values would make the container metrics not show up.
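One thing worth checking (a sketch of the relevant kube-prometheus-stack values as I understand them; verify against the chart's values.yaml): the container_* metrics come from the kubelet's cAdvisor endpoint, so the kubelet ServiceMonitor has to be enabled with cAdvisor scraping on.

kubelet:
  enabled: true
  serviceMonitor:
    cAdvisor: true   # container_* series are scraped from the kubelet's /metrics/cadvisor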
This issue is still relevant and yes, we agreed, it needs input from the Prometheus people. :wink:
I have the same issue, none of the proposed workarounds fix it.
Same issue here; networking charts disappeared for me.
I created a ticket for networking charts #2362
I'm having a similar issue, but for me it seems to be related to specific nodes. We have several clusters in multiple environments, set up via Kubespray. kube-prometheus-stack worked fine some time ago. Now CPU metrics are missing only for pods on specific nodes. I removed the whole stack, the CRDs, and the service in the kube-system namespace, reinstalled different versions, and removed all customized values. Still, the same nodes have no metrics for any pods running on them. For example, in one cluster with 3 workers, only worker 2 shows metrics for its pods; workers 1 and 3 never have metrics.
Any suggestions?
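A debugging sketch for a per-node gap like this (the node name is a placeholder): hit the kubelet's cAdvisor endpoint on a failing node through the API server and compare with a working node. If the endpoint answers but Prometheus has no data, check the kubelet targets for that node on Prometheus's /targets page.

kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/cadvisor | head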
What Grafana version and what operating system are you using? 9.2
What are you trying to achieve? Upgrading kube-prometheus-stack from 10.0.2 to the latest version.
How are you trying to achieve it? Upgrading step by step from kube-prometheus-stack 10.0.2 to 41.9.1 via the Helm chart, following the official documentation (kube-prometheus-stack 41.9.1 · prometheus/prometheus-community).
What happened? During the upgrade process from version 21.x to 22.x, there is an instruction to delete the kube-state-metrics deployment:

kubectl delete deployments.apps -l app.kubernetes.io/instance=prometheus-operator,app.kubernetes.io/name=kube-state-metrics --cascade=orphan

What did you expect to happen? I lost all my metrics data. Even though the PVC is mapped to Prometheus after the upgrade to 22.x, I am not able to see the historical data. I expected all my historical data to be retained after upgrading to the latest version.
Can you copy/paste the configuration(s) that you are having problems with? All the older data is lost.
Did you receive any errors in the Grafana UI or in related logs? If so, please tell us exactly what they were. The Grafana dashboards view also changed after the upgrade.
Did you follow any online instructions? If so, what is the URL? I followed the upgrade process at kube-prometheus-stack 41.9.1 · prometheus/prometheus-community.
Now I am planning to perform a similar upgrade in my production environment, but the historical data is very critical and I cannot lose it. How can I upgrade without losing my historical data? Or did I miss updating a parameter in the values file somewhere? Please advise, thank you in advance!
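Not a full answer, but a sketch of one way to protect the data before attempting the upgrade, assuming you can enable Prometheus's admin API (exposed in the chart via prometheus.prometheusSpec.enableAdminAPI, which sets --web.enable-admin-api): take a TSDB snapshot and copy it off the volume first.

# port-forward to the Prometheus pod, then:
curl -XPOST http://localhost:9090/api/v1/admin/tsdb/snapshot
# the snapshot is written under the data directory's snapshots/ folder on the PVC;
# copy it out with kubectl cp before upgrading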
This issue is being automatically closed due to inactivity.
I have the same issue, chart version 44.3.0.
I have the same issue too, with chart kube-prometheus-stack-51.0.3.
Has anyone solved this issue? What's wrong with this kube-prom-stack Helm chart? Regardless of the version, it doesn't show any data in the Grafana default dashboards.
Same here with chart 51.10, clean install on AKS running 1.24.9.
We want to remove all these alert rules.
Based on this article https://grafana.com/docs/grafana-cloud/monitor-infrastructure/kubernetes-monitoring/configuration/configure-infrastructure-manually/helm-operator-migration/import_rules/#disable-local-prometheus-rules-evaluation I tried disabling evaluation of local Prometheus rules.
However, setting defaultRules.create: false causes CPU metrics to be lost. In my case there is no other problem, i.e. I do have memory metrics.
It looks like recording rules are needed (https://grafana.com/docs/grafana/latest/alerting/fundamentals/alert-rules/recording-rules/). If you set defaultRules.create: false, these recording rules are not created and you lose the metrics; see the sketch below.
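For illustration, a simplified sketch of the kind of recording rule the chart creates (the real rules in the chart's PrometheusRule manifests are more involved, e.g. they join against kube_pod_info; treat the name "cpu-recording-rules" and the expression here as approximations):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-recording-rules   # hypothetical name
spec:
  groups:
    - name: k8s.rules
      rules:
        - record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
          expr: |
            sum by (namespace, pod, container) (
              irate(container_cpu_usage_seconds_total{job="kubelet", image!=""}[5m])
            )

The dashboards query the recorded series on the record: line, which is why turning the default rules off makes the CPU panels go blank.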
I have the same issue too, chart 56.01, k8s v1.26.5.
Facing the same issue: I can't see any metrics in dashboard ID 315, although everything is up and running.
Kubectl version: client 1.22, server 1.28
Helm info:
NAME                                         CHART VERSION   APP VERSION
prometheus-community/kube-prometheus-stack   61.7.0          v0.75.2
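Dashboard 315 is, as far as I know, built mostly on cAdvisor series, so a quick sanity check (a sketch) is to run these directly in the Prometheus UI; if the first returns nothing, the kubelet/cAdvisor targets aren't being scraped at all:

container_cpu_usage_seconds_total
up{job="kubelet"}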
Describe the bug (a clear and concise description of what the bug is):
Fresh install of the chart. Some Grafana dashboards are missing values, for example CPU quota, CPU usage, etc. Several dashboards are completely empty (Compute Resources Workload, Networking Namespace Workload). I looked in Prometheus and there are no metrics whose names start with "nodenamespace". I tried uninstalling and installing version 30.0.0, but things are the same.
The screenshots are from my external Grafana; in the bundled Grafana everything is the same.
What's your helm version?
3.5.1
What's your kubectl version?
v1.19
Which chart?
kube-prometheus-stack
What's the chart version?
30.0.1
Enter the changed values of values.yaml?
Enter the command that you execute that is failing/misfunctioning.
helm upgrade --install -n monitoring kube-prometheus-stack ./kube-prometheus-stack-30.0.1.tgz -f custom-values.yaml --atomic