Closed · MassimoVlacancich closed 3 months ago
Hi team, could I seek your help on the above please? Happy to provide more details if required :)
Hi all, just chasing again, we are keen to rely on Gemini :)
Hi all, chasing again, would appreciate some help on this one :)
What happened?
After installing Gemini on my cluster and following the provided instructions to back up a volume, I was unable to restore an older snapshot.
What did you expect to happen?
We let Gemini create a first snapshot. We then wrote some data by hand at the mount point of the volume being backed up (details below). We let Gemini create a second snapshot. We then followed the commands below to restore the first snapshot, in which the file should not be present.
But despite this, when navigating to the mount point within the postgres pod that mounts the volume claim being backed up, we still see the file. In short, the restore doesn't seem to be working as expected; the same applies to data written through the DB itself, which ends up in the pgdata directory under the mount point.
How can we reproduce this?
We are using k8s 1.25 and installed the latest version of Gemini with v2 CRDs (FYI, I don't think the CRD for v1beta1 exists at https://raw.githubusercontent.com/FairwindsOps/gemini/main/pkg/types/snapshotgroup/v1beta1/crd-with-beta1.yaml; we instead installed the one at https://raw.githubusercontent.com/FairwindsOps/gemini/main/pkg/types/snapshotgroup/v1/crd-with-beta1.yaml, i.e. /v1 rather than /v1beta1).

We are using a PVC to provide a mount point where our postgres-db can write its data; below is the config (simplified for convenience):
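Roughly, the claim looks like this (the claim name and requested size here are placeholders rather than our exact values):

```yaml
# Simplified PVC for the postgres data directory (name and size are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi
```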
The persistent volume is only provisioned dynamically by GCP once the volume claim manifest is applied and the DB mounts it:
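The DB workload mounts the claim at its data directory, roughly along these lines (workload name, labels, and image are placeholders for this write-up):

```yaml
# Abridged Deployment: the claim is mounted at postgres' data directory
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: pgdata
          persistentVolumeClaim:
            claimName: postgres-pvc
```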
We have specifically adjusted this to use the standard-rwo storage class (ReadWriteOnce) so as to ensure that the data on the volume isn't being modified by multiple nodes at the same time when a snapshot is being taken.

Moreover, we defined our snapshot class as follows before adding the snapshot config:
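The snapshot class targets GKE's PD CSI driver, roughly as follows (the class name is a placeholder):

```yaml
# VolumeSnapshotClass for GKE's PD CSI driver (class name is a placeholder)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: gce-pd-snapshot-class
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
```

and the SnapshotGroup we added for the claim follows the shape from the Gemini README (group name and schedule below are placeholders):

```yaml
# SnapshotGroup telling Gemini to back up the postgres PVC on a schedule
apiVersion: gemini.fairwinds.com/v1
kind: SnapshotGroup
metadata:
  name: postgres-backups
spec:
  persistentVolumeClaim:
    claimName: postgres-pvc
  schedule:
    - every: 10 minutes
      keep: 3
```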
The above works as expected once in place, and we see the snapshots being created and in a ready state.

As detailed above, we then wrote some data by hand (added a file in the mount location /var/lib/postgresql/data) between snapshots 1 and 2 (1711982369 does not have the file, while 1711982669 does). We then ran the following commands to restore the first snapshot, which does not have the file:
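These follow the restore procedure from the Gemini README; the SnapshotGroup name is the same placeholder used above:

```bash
# List the snapshots Gemini has taken and note the timestamp of the first one
kubectl get volumesnapshot

# Annotate the SnapshotGroup so Gemini restores the snapshot taken before the file existed
kubectl annotate snapshotgroup/postgres-backups --overwrite \
  "gemini.fairwinds.com/restore=1711982369"
```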
But despite this, when navigating to /var/lib/postgresql/data within the postgres pod that mounts the volume claim being backed up, we still see the file. What I also find interesting is that the PVC still shows its age as 14d; I would expect it to be a brand new one reinstated from the snapshot.

When investigating the logs for the gemini-controller pod, I don't see any specific errors after the restart post-annotation, nor do I see anything that points to the swap being successful.

We would appreciate your input in resolving this; maybe it has to do with our cluster setup, or maybe a config issue with PVs. I've requested access to the Slack channel, waiting on approval :)
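For completeness, this is roughly how we checked the state after annotating (PVC name and namespace are placeholders):

```bash
# The claim still reports its original age rather than a freshly restored one
kubectl get pvc postgres-pvc

# Controller logs after the restore annotation show no errors or swap activity
kubectl logs deploy/gemini-controller -n gemini --since=1h
```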
Thanks in advance, Massimo
Version
Version 2.0 - Kubernetes 1.25
Additional context
No response