Closed: FarhanSajid1 closed this 4 years ago
@FarhanSajid1 If you're encountering an issue, please open a dedicated issue with a way to reproduce it (or better, a test case) before opening a PR, so we'll be able to understand it and find a fix. I don't think there's a need to add a flag to fix it.
@sgotti We've recently had some users fully stop (all at once) a cluster of 3 nodes. On startup the cluster sometimes comes up and goes healthy, but then the old master is re-elected and this setting prevents it from becoming the cluster master (or at least blocks our Postgres connections to it). A subsequent kick fixes it and the config is re-rendered.
We'll get the issue opened up.
@sgotti Related issue: https://github.com/sorintlab/stolon/issues/792
Closing since this isn't really needed.
Changes:
Reasons: Currently in our setup there is synchronous replication, which means that when the cluster is brought down stolon doesn't regenerate the custom `pgHBA` files from the `postgresql.json` file. This can lead to missing auth entries in the `pg_hba.conf` file, and to a situation where the cluster never comes up healthy. Example:

`no pg_hba.conf entry for host "127.0.0.1", user "ns1", database "ns1", SSL off`
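For context, here is a minimal sketch of the kind of entry that ends up missing; the host/user/database values are assumed from the error message above, not taken from our real spec:

```
# Assumed values (ns1 / 127.0.0.1) come from the error message above.
# pg_hba.conf line PostgreSQL would need in order to accept that connection:
host ns1 ns1 127.0.0.1/32 md5

# In stolon these entries normally come from the cluster spec's pgHBA list,
# e.g. applied with a spec patch (cluster name/store options omitted):
#   stolonctl update --patch '{ "pgHBA": [ "host ns1 ns1 127.0.0.1/32 md5" ] }'
```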
There should be a flag to toggle regeneration of this file in this situation on/off, but please let us know if that doesn't make sense, or if there's a flaw in that logic that would result in unwanted behavior.
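Purely to illustrate the proposal (the option name below is hypothetical and does not exist in stolon), the idea is a boolean toggled like other cluster-spec options:

```
# Hypothetical option name, shown only to illustrate the proposed flag.
stolonctl update --patch '{ "forcePgHBARegeneration": true }'
```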