It is possible for a Prometheus to use an arbitrary PVC with:

```yaml
volumes:
- name: prometheus-prometheus-db
  persistentVolumeClaim:
    claimName: prometheus-db-prometheus-0
```

Edit: the prometheus-operator adds the name of the Prometheus object as a suffix to the claimName, so this does not work.
It is not possible to overwrite the `--storage.tsdb.path` flag, which would be needed.
The prometheus-operator automatically generates a volume and volumeMount for the Prometheus DB, and it is not possible to change this. The `volumes` and `volumeMounts` fields only allow configuration of additional volumes/volumeMounts:
```
volumeMounts <[]Object>
  VolumeMounts allows configuration of additional VolumeMounts on the output
  StatefulSet definition. VolumeMounts specified will be appended to other
  VolumeMounts in the prometheus container, that are generated as a result of
  StorageSpec objects.

volumes <[]Object>
  Volumes allows configuration of additional volumes on the output
  StatefulSet definition. Volumes specified will be appended to other volumes
  that are generated as a result of StorageSpec objects.
```
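For illustration, this is roughly what those fields look like in the Prometheus spec (the `extra-config` names are hypothetical); the entries are only appended next to the generated `prometheus-db` volume and cannot replace it:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # hypothetical additional volume; appended to the generated volumes
  volumes:
  - name: extra-config
    configMap:
      name: extra-config
  # appended next to the generated /prometheus mount in the prometheus container
  volumeMounts:
  - name: extra-config
    mountPath: /etc/prometheus/extra
```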
Idea:
It is possible to mount the same PVC that the old Prometheus is using by setting the `storage` field and "creating" a PVC that already exists:

```yaml
storage:
  volumeClaimTemplate:
    apiVersion: v1
    metadata:
      labels:
        app: prometheus
        role: monitoring
      name: prometheus-db-prometheus-0
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: default
      volumeMode: Filesystem
```
This works, however the data is in the wrong location, so an init container can be used to move the data from `/var/prometheus/data` to `/prometheus`.

Edit: this does not work because the prometheus-operator adds a suffix to the PVC name (StatefulSet PVCs are named `<template-name>-<statefulset-name>-<ordinal>`), so it is not the same as the one that Gardener creates.
Migration works!
Steps:
1. Set the PV's `persistentVolumeReclaimPolicy=Retain` (see the sketch after the storage spec below).
2. Configure the `storage` field of the Prometheus object:
```yaml
storage:
  volumeClaimTemplate:
    apiVersion: v1
    metadata:
      labels:
        app: prometheus
        role: monitoring
      name: prometheus-db
    spec:
      volumeName: <existing-pv>
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: default
      volumeMode: Filesystem
```
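A sketch of step 1, assuming the PV in question is the one referenced as `<existing-pv>` above; only the reclaim policy field matters here:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <existing-pv>
spec:
  # keep the volume (and its data) when the old PVC is deleted
  persistentVolumeReclaimPolicy: Retain
```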
Prometheus will require an init container to migrate the data. The "old" Prometheus had its data stored under `subPath=prometheus-`; the new Prometheus will store it under `subPath=prometheus-db`. So if we mount the volume in a container with no subPath, the structure will look like this:

```
/prometheus-/<old-prometheus-data>
/prometheus-db/<new-prometheus-data>
```

We must verify that there is no data in `/prometheus-db` (this means migration has not happened yet); otherwise we skip. If there is no data, then we move the contents of `/prometheus-` to `/prometheus-db`. This can be defined in an initContainer; a sketch of the placeholder command follows the snippet below.
```yaml
initContainers:
- name: alpine
  image: alpine
  command:
  - sh
  - -c
  - <verify /prometheus/prometheus-db is empty and mv data from /prometheus/prometheus- /prometheus/prometheus-db>
  volumeMounts:
  - mountPath: /prometheus
    name: prometheus-db
```
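A minimal sketch of what that placeholder command could look like; the paths follow the layout above, and the emptiness check via `ls -A` is an assumption about what "verify" should do:

```yaml
initContainers:
- name: migrate-data   # hypothetical name; any image with a POSIX shell works
  image: alpine
  command:
  - sh
  - -c
  - |
    # migrate only if old data exists and the new directory is absent or empty
    if [ -d /prometheus/prometheus- ] && [ -z "$(ls -A /prometheus/prometheus-db 2>/dev/null)" ]; then
      mkdir -p /prometheus/prometheus-db
      mv /prometheus/prometheus-/* /prometheus/prometheus-db/
    fi
  volumeMounts:
  - mountPath: /prometheus
    name: prometheus-db
```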
What would you like to be added:
Investigate if it is possible for a Prometheus managed by the prometheus-operator to "reuse" a PVC that previously belonged to a Prometheus not managed by the operator.

Why is this needed:
Migration to the prometheus-operator.