onedr0p / home-ops

Wife approved HomeOps driven by Kubernetes and GitOps using Flux
https://onedr0p.github.io/home-ops/
Do What The F*ck You Want To Public License

fix(container): update kube-prometheus-stack (65.1.0 → 65.1.1) #8195

Closed: bot-ross[bot] closed this 2 days ago

bot-ross[bot] commented 3 days ago

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| kube-prometheus-stack (source) | patch | 65.1.0 -> 65.1.1 |

Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

### [`v65.1.1`](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-65.1.0...kube-prometheus-stack-65.1.1)

[Compare Source](https://redirect.github.com/prometheus-community/helm-charts/compare/kube-prometheus-stack-65.1.0...kube-prometheus-stack-65.1.1)

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.



This PR has been generated by Renovate Bot.

bot-ross[bot] commented 3 days ago
```diff
--- kubernetes/main/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack
+++ kubernetes/main/apps/observability/kube-prometheus-stack/app Kustomization: flux-system/kube-prometheus-stack HelmRelease: observability/kube-prometheus-stack
@@ -13,13 +13,13 @@
     spec:
       chart: kube-prometheus-stack
       sourceRef:
         kind: HelmRepository
         name: prometheus-community
         namespace: flux-system
-      version: 65.1.0
+      version: 65.1.1
   dependsOn:
   - name: prometheus-operator-crds
     namespace: observability
   - name: openebs
     namespace: openebs-system
   install:
```
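Once this change is merged, Flux reconciles the HelmRelease and Helm performs the chart upgrade automatically. Not part of this PR, but one way to nudge and verify the rollout by hand (a sketch, assuming the flux CLI and access to this cluster):

```shell
# Refresh the chart index from the prometheus-community HelmRepository
flux reconcile source helm prometheus-community -n flux-system

# Trigger reconciliation of the HelmRelease so the 65.1.1 upgrade applies now
flux reconcile helmrelease kube-prometheus-stack -n observability

# Confirm the release reports Ready at the new chart version
flux get helmrelease kube-prometheus-stack -n observability
```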
bot-ross[bot] commented 3 days ago
```diff
--- HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-etcd
+++ HelmRelease: observability/kube-prometheus-stack PrometheusRule: observability/kube-prometheus-stack-etcd
@@ -19,29 +19,29 @@
       annotations:
         description: 'etcd cluster "{{ $labels.job }}": members are down ({{ $value
           }}).'
         summary: etcd cluster members are down.
       expr: |-
         max without (endpoint) (
-          sum without (instance) (up{job=~".*etcd.*"} == bool 0)
+          sum without (instance, pod) (up{job=~".*etcd.*"} == bool 0)
         or
           count without (To) (
-            sum without (instance) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01
+            sum without (instance, pod) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01
           )
         )
         > 0
       for: 10m
       labels:
         severity: critical
     - alert: etcdInsufficientMembers
       annotations:
         description: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value
           }}).'
         summary: etcd cluster has insufficient number of members.
-      expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"})
-        without (instance) + 1) / 2)
+      expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance, pod) < ((count(up{job=~".*etcd.*"})
+        without (instance, pod) + 1) / 2)
       for: 3m
       labels:
         severity: critical
     - alert: etcdNoLeader
       annotations:
         description: 'etcd cluster "{{ $labels.job }}": member {{ $labels.instance
@@ -55,13 +55,13 @@
       annotations:
         description: 'etcd cluster "{{ $labels.job }}": {{ $value }} leader changes
           within the last 15 minutes. Frequent elections may be a sign of insufficient
           resources, high network latency, or disruptions by other components and
           should be investigated.'
         summary: etcd cluster has high number of leader changes.
-      expr: increase((max without (instance) (etcd_server_leader_changes_seen_total{job=~".*etcd.*"})
+      expr: increase((max without (instance, pod) (etcd_server_leader_changes_seen_total{job=~".*etcd.*"})
         or 0*absent(etcd_server_leader_changes_seen_total{job=~".*etcd.*"}))[15m:1m])
         >= 4
       for: 5m
       labels:
         severity: warning
     - alert: etcdHighNumberOfFailedGRPCRequests
```
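The only rule change in this patch release is adding `pod` to the PromQL `without()` clauses of the etcd alerts. In PromQL, `without` drops the listed labels before aggregating, so series that differ only in those labels are merged. When an etcd pod is recreated, its metrics come back under a new `pod` label; aggregating only `without (instance)` can briefly leave two series for the same member and skew the member counts these alerts rely on. A before/after sketch of the pattern (illustrative, not copied from the chart):

```promql
# Before: one result series per pod; a restarted etcd member contributes
# a new series while the old per-pod series lingers until it goes stale.
sum without (instance) (up{job=~".*etcd.*"} == bool 0)

# After: instance and pod are both dropped, so all scrapes of the same
# etcd member collapse into a single series regardless of pod name.
sum without (instance, pod) (up{job=~".*etcd.*"} == bool 0)
```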