Open Hashdhi opened 8 months ago
We have the same issue. This bug is very critical, and not just for scaling up or down. When we need to redeploy the Postgres cluster after a node failure, we can't reschedule the pod to another node with the existing PVC because it keeps failing with a `PGData already exists` error.
I'm facing exactly this as I'm evaluating CloudNativePG. It looks like a showstopper for now 😕, and it can be easily replicated.
@gbartolini This really is an issue: why is the same cluster object, when restored, no longer able to consume its PVCs?
It looks like CNPG sets some sort of flag on a cluster object to indicate whether it needs to run initdb or not.
This is unwanted behavior, at least when it's not overridable, as it makes infrastructure as code a mess.
Another example: we need to reinstall/move some PVCs, and on that particular platform it's easier to just reinstall the Helm chart and move the old PVC data to the new PVCs. That works fine with literally every piece of software, except CNPG.
What would easily solve all of these issues is an option like `initdb.useExisting: true`. It would skip the initdb steps if an existing PGDATA folder is found and instead try to use the database in that folder. This should work without any negative consequences, and it could even default to `false` to ensure it doesn't cause any issues for existing users.
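To illustrate the proposal, a Cluster manifest using the suggested flag might look like the sketch below. Note that `useExisting` does not exist in CNPG today; it is the hypothetical addition being requested, and the rest of the spec is an assumed minimal example:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  bootstrap:
    initdb:
      # Proposed (non-existent) flag: if a PGDATA directory is already
      # present on the PVC, reuse it instead of failing initdb.
      useExisting: true
  storage:
    size: 1Gi
```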
Is there any progress on this issue?
@gbartolini or @Hashdhi, has development on this feature started, or is it on hold? @dperetti, did you manage to work around this issue while waiting for a permanent solution?
Hi all, I have the same issue. Is there really no way to avoid initdb when we provision a new cluster with an existing PV and existing PGDATA?
Is there an existing issue already for this bug?
I have read the troubleshooting guide
I am running a supported version of CloudNativePG
Contact Details
selvarajchennappan@gmail.com
Version
1.22.0
What version of Kubernetes are you using?
1.28
What is your Kubernetes environment?
Self-managed: kind (evaluation)
How did you install the operator?
YAML manifest
What happened?
How do we launch a cluster on existing PVCs? We have a scenario requiring scale down and scale up: initially we had 3 replicas, scaled down to 1, then scaled back up by setting `instances: 3` and applying the YAML. initdb failed at scale-up with:

{"level":"info","ts":"2024-02-03T06:08:14Z","msg":"PGData already exists, can't overwrite","logging_pod":"srims-prod-1-initdb"}
Error: PGData directories already exist
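For context, the scale down/up described above amounts to editing `instances` in the Cluster spec and re-applying it. The sketch below is an assumed minimal manifest; the cluster name is inferred from the `srims-prod-1-initdb` pod in the log, and the storage size is a placeholder:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: srims-prod-1   # inferred from the logging_pod name above
spec:
  # Was 3, edited to 1 to scale down, then set back to 3 and re-applied.
  # The scale-up is what triggered the failing initdb against existing PGDATA.
  instances: 3
  storage:
    size: 1Gi   # placeholder; the actual size is not stated in the report
```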
Cluster resource
Relevant log output
No response
Code of Conduct