Yeah, I would like to know that as well, please.
So I found the solution for the Prometheus stack StatefulSet. You can either enable it in the values file under prometheus.prometheusSpec.storageSpec or provide an external config file. For instance, my config file looks like this:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
You can then reference this config file when installing or upgrading your Helm chart like this:
helm install -f prometheus-custom-values.yaml kube-prometheus-stack kube-prometheus-stack -n monitoring
Now I still have to figure out how to enable a volume for the Alertmanager.
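As an aside, a quick way to confirm the claim from the snippet above was actually created is to list the claims in the release namespace (a sketch; the namespace comes from the install command and the claim name varies with the release name):

kubectl get pvc -n monitoring
# Expect a Bound 50Gi claim for the Prometheus StatefulSet, named something like
# prometheus-<release>-prometheus-db-prometheus-<release>-prometheus-0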
Here is the config for the Alertmanager volume:
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
Thank you! I think this ticket can be marked as solved.
Did this create a PVC for you? I can't find any in my cluster after applying the prometheusSpec...
Hi, I also don't see any PVC created:
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: ceph-block
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Ti
and there is just one storageSpec: under the proper prometheusSpec.
Thank you.
Hi, when trying it this way I get an error: failed to provision volume with StorageClass: could not create volume in EC2: UnauthorizedOperation: You are not authorized to perform this operation. When I create a PVC directly, I don't have this error, though?
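That UnauthorizedOperation comes from the AWS credentials of whatever is provisioning the EBS volume, not from the chart itself. If you are on EKS with the EBS CSI add-on, one thing worth checking is which IAM role the CSI controller runs as; the role needs EC2 volume permissions (for example the managed AmazonEBSCSIDriverPolicy). The service account name below is the add-on's default and is an assumption about your setup:

# Print the IAM role bound to the EBS CSI controller's service account.
kubectl -n kube-system get sa ebs-csi-controller-sa \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'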
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Same here, no PVC created after adding the volumeClaimTemplate spec.
Same from my side, I specify the following:
prometheusSpec:
  storageSpec:
    volumeClaimTemplate:
      spec:
        storageClassName: default
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
and there is no PVC created after that.
Try this:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: foo
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 30Gi
In my case, it works. If you miss the prometheus layer above prometheusSpec, the Helm chart's template will not create the PVC (the full path is prometheus.prometheusSpec.storageSpec.volumeClaimTemplate).
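One way to verify the nesting before touching the cluster is to render the chart locally and check that the volumeClaimTemplate shows up in the generated Prometheus resource (a sketch; the release name and values file are placeholders):

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -f my-values.yaml | grep -A 8 'volumeClaimTemplate'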
Hmm, strange. I have it like this and it does not work. Do you think there could be conflicting statements in the [Other configurations]?
prometheus:
  enabled: true
  ................................ [Other configurations from values.yaml]
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: ceph-block
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Ti
          selector:
            matchLabels:
              app: prometheus
Facing the same issue here (trying to use persistent volumes for prometheus / alertmanager)
I had the same issue. Found these two issues #563 and #655 and am now good.
I'm using the kube-prometheus-stack-45.29.0 Helm chart, and below is the relevant part of my values:
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: cstor-csi-disk
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
Adding a metadata name under volumeClaimTemplate: was needed for me because of the name-too-long issue/bug:
volumeClaimTemplate:
  metadata:
    name: data
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
I have a suspicion: I saw that this solution worked only when installing the chart; on an upgrade it was ignored. I guess the Prometheus operator cannot handle migrating from one storage (emptyDir is the default, I guess) to another and therefore ignores the change, because otherwise the data would just be lost.
I do not know if there is a flag or something to force this change, but that could be the solution?
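If the operator really does refuse to mutate an existing StatefulSet, a workaround worth trying (an assumption based on general StatefulSet behaviour, not something confirmed in this thread) is to delete the StatefulSet while orphaning its pods, so the operator recreates it with the new storage spec; the object name below is illustrative:

# Orphan the pods, then let the operator recreate the StatefulSet
# with the volumeClaimTemplate from the updated values.
kubectl -n monitoring delete statefulset \
  prometheus-kube-prometheus-stack-prometheus --cascade=orphan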
Facing the same issue.
Why not just add a parameter persistent: true in alertManager/pushgateway and promserver to simplify all this part?
> Why not just add a parameter persistent: true in alertManager/pushgateway and promserver to simplify all this part?
Because storage is a complex topic and there's no one-size-fits-all solution (for example, storage classes and disk sizes).
This definitely looks like a bug. I tried installing 55.4.1 and the Prometheus PVC would not get created no matter what I tried. I started successively trying lower releases (jumping several at a time), and it finally worked when I tried 48.5.0. So the bug was introduced somewhere between the two versions.
Thank you for the hint. I tried version 55.7.1, but no PVCs were created, whereas with version 48.5.0 it worked.
Hello, I am also getting an error.
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: gp2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: longhorn-2
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
The pods and the StatefulSet are in a Pending state; no PV, and the PVC is Pending.
> Try this: [the prometheus.prometheusSpec.storageSpec example above] In my case, it works. If you miss the prometheus layer above prometheusSpec, the Helm chart's template will not create the PVC (prometheus.prometheusSpec.storageSpec.volumeClaimTemplate).
I tried your YAML but the PVC says Pending... it didn't work.
This is still an issue in the latest version, 56.16.0.
To be honest, I don't see the issue! The comments above tell you how to add storage; with that, it's done.
If the PVCs are pending, then the issue belongs to your infrastructure, which is not part of kube-prometheus-stack.
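For anyone stuck at Pending, the provisioner usually records the reason in the claim's events (a sketch; the namespace and claim name are placeholders):

# The Events section at the bottom typically names the provisioner error.
kubectl -n monitoring describe pvc <pending-claim-name>
kubectl -n monitoring get events --sort-by=.lastTimestamp | tail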
Creating the PV with a label:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
  labels:
    volumeIdentifier: prometheus-server
spec:
  ...
and using a selector in the values.yaml worked for me:
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "csi-cephfs-sc"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi
          selector:
            matchLabels:
              volumeIdentifier: prometheus-server
It was not necessary with 48.5.0, but now it works fine for me.
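For anyone who wants a complete manifest to pair with the selector above, here is a minimal sketch of a statically provisioned PV. The capacity, backend, and storage class are illustrative assumptions, not values confirmed in this thread:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server
  labels:
    volumeIdentifier: prometheus-server   # matched by the selector in values.yaml
spec:
  capacity:
    storage: 100Gi                        # illustrative; match the claim's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: csi-cephfs-sc         # must equal the claim's storageClassName to bind
  hostPath:
    path: /data/prometheus                # placeholder backend; substitute your own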
I upgraded the chart to 58.1.3 and the CRDs accordingly. It is working.
storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: xxxxx
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Gi
  ## Using tmpfs volume
  ##
  # emptyDir:          <-------------- comment out
  #   medium: Memory
For me, everything else was correct; I had to comment out the emptyDir right below it...
I have the same issue with chart 61.7.1. If I create a PV manually, it is not used and the DB is not persistent. It actually remains an emptyDir in the definition...
Hi guys, you need to install the ebs-csi addon before adding a PVC for Prometheus.
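For readers on EKS, the add-on referred to here is the AWS EBS CSI driver. A minimal sketch of installing it as a managed add-on (the cluster name is a placeholder, and the driver additionally needs an IAM role with EBS permissions, as noted earlier in the thread):

aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver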
> You need to install the ebs-csi addon before adding a PVC for Prometheus.
Hi @marcoliew,
I am not sure I understand. My K8s cluster is hosted on private infrastructure, and I am using local-storage PVs. Why would I need an EBS CSI addon for it to work?
Thanks.
> I have a suspicion: I saw that this solution worked only when installing the chart; on an upgrade it was ignored. [...]
I upgraded from v61 to v65 and successfully added volumes for both Prometheus and Alertmanager for the first time, just with volumeClaimTemplate.
> matchLabels: volumeIdentifier: prometheus-server
This solution works. Tested. Create the storage class, then the PV, and use them in the values.yaml file.
Same here. I'm trying to mount a specific volumeMount on my Prometheus: /prometheus/snapshots. I'm following this:
helm upgrade my-kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring -f values.yaml
My values.yaml file:
prometheus:
  prometheusSpec:
    additionalVolumeMounts:
      - name: prometheus-snapshots-volume
        mountPath: /prometheus/snapshots
    additionalVolumes:
      - name: prometheus-snapshots-volume
        persistentVolumeClaim:
          claimName: prometheus-snapshots-pvc
It's not mounting:
kubectl describe pod prometheus-my-kube-prometheus-stack-prometheus-0 -n monitoring
Volumes:
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-my-kube-prometheus-stack-prometheus
    Optional:    false
  tls-assets:
    Type:                Projected (a volume that contains injected data from multiple sources)
    SecretName:          prometheus-my-kube-prometheus-stack-prometheus-tls-assets-0
    SecretOptionalName:  <nil>
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  prometheus-my-kube-prometheus-stack-prometheus-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-my-kube-prometheus-stack-prometheus-rulefiles-0
    Optional:  false
  web-config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-my-kube-prometheus-stack-prometheus-web-config
    Optional:    false
  prometheus-my-kube-prometheus-stack-prometheus-db:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-gjcdp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
Any ideas? Any help will be appreciated!
Thank you!
> Same here. I'm trying to mount a specific volumeMount on my Prometheus: /prometheus/snapshots. [...] It's not mounting:
The fields have different names, volumes and volumeMounts, i.e.:
prometheus:
  prometheusSpec:
    volumeMounts:
      - name: prometheus-snapshots-volume
        mountPath: /prometheus/snapshots
    volumes:
      - name: prometheus-snapshots-volume
        persistentVolumeClaim:
          claimName: prometheus-snapshots-pvc
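Note that this mounts an existing claim, so prometheus-snapshots-pvc has to exist beforehand. A minimal sketch of such a claim, with an assumed namespace, size, and storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-snapshots-pvc
  namespace: monitoring        # must match the Prometheus pod's namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi            # illustrative size
  # storageClassName: <your-class>   # omit to use the cluster default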
@zeritti it worked man, I hadn't noticed that, thank you so much!
Hey fellows, I would like to use persistent volumes instead of the default emptyDir config. Does anybody know how to do that? I would really appreciate an example; I'm getting confused with the PV creation and also the PVC.