Open · sahanasreenath opened 12 hours ago
Hi, the issue may not be directly related to the Bitnami container image/Helm chart, but rather to how the application is being utilized, configured in your specific environment, or tied to a particular scenario that is not easy to reproduce on our side.
If you think that's not the case and want to contribute a solution, we'd like to invite you to create a pull request. The Bitnami team is excited to review your submission and offer feedback. You can find the contributing guidelines here.
Your contribution will greatly benefit the community. Please feel free to contact us if you have any questions or need assistance.
If you have any questions about the application itself, customizing its content, or its technology and infrastructure usage, we highly recommend that you refer to the forums and user guides provided by the project responsible for the application or technology.
With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.
Name and Version
bitnami/postgresql-repmgr:12.20.0-debian-12-r29
What architecture are you using?
None
What steps will reproduce the bug?
In GKE, I have Pgpool with 1 replica and a PostgreSQL StatefulSet with 3 replicas.
Here are my Helm chart values:
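The full values file is not reproduced here; as a minimal sketch of just the topology described above (key names assume the bitnami/postgresql-ha chart, which ships this image):

```yaml
# Minimal sketch only; the full values file is not shown.
# Key names assume the bitnami/postgresql-ha chart layout.
postgresql:
  replicaCount: 3   # three-node PostgreSQL/repmgr StatefulSet
pgpool:
  replicaCount: 1   # single Pgpool pod fronting the cluster
```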
When a `helm upgrade` happens, new pods come up and the old pods go away once the new ones are healthy. Here, the `helm upgrade` just fails with the error "The new node is not healthy".
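For context, the upgrade and the follow-up inspection were along these lines (release name `my-release` and namespace `db` are placeholders; the exact invocation is not shown above):

```console
# Placeholder release/namespace; the exact command is not included in the report.
helm upgrade my-release bitnami/postgresql-ha -n db -f values.yaml

# Inspect the pod that never became healthy: readiness-probe events and logs.
kubectl get pods -n db
kubectl describe pod my-release-postgresql-ha-postgresql-0 -n db
kubectl logs my-release-postgresql-ha-postgresql-0 -n db
```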
It took more than an hour for the non-running node to vanish; a new node then spun up, which was healthy.
Postgres was up within 12 minutes of the `helm upgrade`, but Pgpool was unable to identify the primary.
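One way to surface the mismatch is to compare which node Pgpool reports as primary with what repmgr reports (pod names are placeholders; `SHOW POOL_NODES` is standard Pgpool-II, and the repmgr config path is the usual one in the Bitnami image):

```console
# Ask Pgpool which backends it sees and their roles (auth flags omitted).
kubectl exec -n db deploy/my-release-postgresql-ha-pgpool -- \
  psql -h localhost -U postgres -c "SHOW POOL_NODES;"

# Ask repmgr which node it considers the primary.
kubectl exec -n db my-release-postgresql-ha-postgresql-0 -- \
  repmgr -f /opt/bitnami/repmgr/conf/repmgr.conf cluster show
```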
Expected:
I expect that during the `helm upgrade`, if a new node gets spun up, it picks up the running primary node, which it currently fails to do. The old Pgpool pod is running and healthy and able to connect to the Postgres primary, whereas the new one can't.
What do you see instead?
Added in the above description