Open · AurimasNav opened 3 weeks ago

As per the picture (attached in the original issue), pvc, pod and controllerrevision are being deleted and re-created in an infinite loop.

argocd version: 2.12.2
k3s version: v1.30.4+k3s1

Attached configs: helm via kustomization.yaml, application.yaml
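For readers without the attachments, the deploy method named above (helm rendered through kustomize) typically looks like the minimal sketch below; the chart name, repo URL, and the 0.25.5 version come from this thread, while the namespace and release name are illustrative assumptions, not taken from the linked repo:

```yaml
# kustomization.yaml — minimal sketch of pulling the chart via kustomize's
# helm support; render with `kustomize build --enable-helm`
# namespace and releaseName are assumptions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: victoria-metrics-k8s-stack
    repo: https://victoriametrics.github.io/helm-charts/
    version: 0.25.5
    releaseName: victoria-metrics-k8s-stack
    namespace: monitoring
```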
hey @AurimasNav ControllerRevision for which service is it?
it is named vmalertmanager-victoria-metrics-k8s-stack-c9f494cf7, so I guess it is for vmalertmanager?
I should have also mentioned that I am using the default values for the install.
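To pin down which service a ControllerRevision belongs to, its ownerReferences can be read directly; a quick sketch (the name is the one quoted above, the namespace is an assumption):

```sh
# print the kind and name of the object that owns this ControllerRevision
# (namespace is an assumption; adjust to your install)
kubectl get controllerrevision vmalertmanager-victoria-metrics-k8s-stack-c9f494cf7 \
  -n monitoring \
  -o jsonpath='{.metadata.ownerReferences[0].kind} {.metadata.ownerReferences[0].name}'
```

For a vmalertmanager revision this should report the owning StatefulSet.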
looks like it's an operator issue, a fix is expected to be released today
I have patched the operator deployment with image 0.47.2; it seems to have fixed the pod and the controllerrevision, but the pvc is still affected.
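A patch along these lines is one way to do what's described above; the deployment and container names and the namespace are assumptions (verify with `kubectl get deploy -n <namespace>`), and `victoriametrics/operator` is the operator's public image:

```sh
# bump the operator image in place; deployment/container names and namespace
# are assumptions — verify them before running
kubectl set image deployment/vm-operator \
  vm-operator=victoriametrics/operator:v0.47.2 \
  -n monitoring
```

Note that a helm-managed install will revert this on the next sync unless the image tag is also overridden in values.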
alertmanager with default configuration in k8s stack uses emptydir storage. does this happen when you recreate alertmanager?
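Whether a given install really uses emptydir can be confirmed from the statefulset spec; the statefulset name below is inferred by stripping the hash suffix from the controllerrevision name earlier in the thread and may differ:

```sh
# list the volumes of the alertmanager statefulset; an emptyDir entry means
# no PVC is expected for it (name and namespace are assumptions)
kubectl get statefulset vmalertmanager-victoria-metrics-k8s-stack \
  -n monitoring -o jsonpath='{.spec.template.spec.volumes}'
```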
Looks like the issue still exists for kubernetes clusters < 1.27; there will be a patch release soon. Also, I was not able to reproduce any problem with the PVC.
> alertmanager with default configuration in k8s stack uses emptydir storage. does this happen when you recreate alertmanager?
I have completely deleted my application and redeployed it with the latest version 0.25.5; the pvc is still being created (by vm-operator?) and argocd tries to delete it as it is not in source.
these are the exact files I am using to deploy this:
- argocd application: https://github.com/AurimasNav/k8s-at-home/tree/main/argocd-apps/base/apps/victoria-metrics-k8s-stack
- helm via kustomize: https://github.com/AurimasNav/k8s-at-home/tree/main/gitops/victoria-metrics-k8s-stack
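Until a fixed operator is out, a generic Argo CD-side workaround (not from this thread) is to annotate the live, operator-created pvc so argocd neither prunes it nor flags it as extraneous; the pvc name and namespace are assumptions, and the operator may recreate the pvc without the annotations, in which case they need re-applying:

```sh
# find the real pvc name first: kubectl get pvc -n monitoring
# both annotations are standard Argo CD resource annotations
kubectl annotate pvc vmsingle-victoria-metrics-k8s-stack \
  -n monitoring \
  argocd.argoproj.io/sync-options=Prune=false \
  argocd.argoproj.io/compare-options=IgnoreExtraneous
```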
hey @AurimasNav, I was able to reproduce this issue and transferred it to the operator repo, as it's related to how the operator manages the pvc, which is managed by VMSingle.
The fix will be included in the next release.