@Winslett opened this issue 9 years ago
Hi @Winslett, any plans to implement this any time soon?
@tvb I'm undecided about how this should work.
My core issue with moving this way is solving the "what if the etcd cluster goes away?" problem. I need to create another issue for that problem, and probably reference this one. If we relied on etcd state for leader/follower in `haproxy_status.sh`, and etcd had a maintenance window, crashed, or had a network partition, then the Postgres cluster would go down. With the current behavior, etcd going away would cause `governor.py` to throw a `urllib` error, which would stop PostgreSQL.

In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current Primary if possible, but not fail over. @jberkus and I chatted about this scenario. If etcd is inaccessible to the leader (network partition, etcd outage, or maintenance), the leader governor should expect a majority of follower governors to provide heartbeats to it. If follower heartbeats do not provide enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree.
[update: created the issue at https://github.com/compose/governor/issues/7]
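For illustration only, a minimal sketch of that follower-heartbeat quorum idea, assuming a hypothetical `heartbeats` mapping of follower name to last-seen timestamp (none of these names come from the actual governor code):

```python
import time

HEARTBEAT_TIMEOUT = 10  # seconds; an assumed TTL, not a governor setting

def leader_has_quorum(heartbeats, cluster_size):
    """Return True if a majority of cluster members are accounted for.

    heartbeats: dict of follower_name -> last heartbeat time (epoch seconds)
    cluster_size: total number of members, including the leader itself
    """
    now = time.time()
    live_followers = sum(1 for ts in heartbeats.values()
                         if now - ts < HEARTBEAT_TIMEOUT)
    # The leader counts as one vote for itself; require a strict majority.
    return (live_followers + 1) > cluster_size / 2

# While etcd is unreachable, the leader would run this on every loop
# iteration and demote itself to read-only as soon as it returns False.
```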
In the interim of solving that problem…

The more I think about this, the more I think `governor.py` should handle the state responses to HAProxy itself, removing the `haproxy_status.sh` files and moving the HTTP port configuration into the `postgres*.yml` files.

For people who know Python better than I do: is there a sensible way to run `governor.py` with both a looping runner and an HTTP listener?
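One way to do that with nothing but the standard library is to run the governor loop in a background thread and serve the HTTP check from the main thread. A rough sketch, where `run_cycle()` is a stand-in for the existing loop body and the listen port is arbitrary:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_cycle():
    """Placeholder for one iteration of the existing governor loop."""
    pass

def governor_loop(interval=10):
    while True:
        run_cycle()
        time.sleep(interval)

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Replace this stub with the real leader/follower decision.
        is_leader = True
        self.send_response(200 if is_leader else 503)
        self.end_headers()

if __name__ == "__main__":
    threading.Thread(target=governor_loop, daemon=True).start()
    # HAProxy would point its health check at this port.
    HTTPServer(("0.0.0.0", 8008), StatusHandler).serve_forever()
```

Whether the loop or the listener owns the main thread matters less than the fact that both share the same in-process view of cluster state.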
"In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current Primary if possible, but not fail over."

This is tricky, as there would be no way for the primary to check its status.
" If etcd is unaccessible by the leader (network partition, etcd outage, or maintenance), a leader governor should expect a majority of follower governors to provide heartbeats to the leader. If follower heartbeats are not providing enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree."
My thinking was this:
The last reason is a good reason, IMHO, for HAProxy to be doing direct checks against each node as well as etcd, via this logic:
    Is etcd responding?
        Is node marked leader in etcd?
            Is node responding?
                enable node
            else:
                disable node
        else:
            disable node
    else:
        Is node responding?
            leave node enabled/disabled
        else:
            disable node
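Sketched in Python, that tree might look like the following; `etcd_get_leader`, `node_name`, `postgres_responding`, and `currently_enabled` are illustrative placeholders, not names from this repo:

```python
def check_node(etcd_get_leader, node_name, postgres_responding, currently_enabled):
    """Return True to enable the node in HAProxy, False to disable it.

    etcd_get_leader: callable returning the leader name, raising if etcd is down
    node_name: this node's member name
    postgres_responding: callable returning True if Postgres answers (pg_isready)
    currently_enabled: the node's present HAProxy state, used when etcd is down
    """
    try:
        leader = etcd_get_leader()
    except Exception:
        # etcd is not responding: leave the node in its current state,
        # unless Postgres itself has stopped responding.
        return currently_enabled if postgres_responding() else False

    if leader == node_name:
        # Node is marked leader in etcd: enable it only if it also responds.
        return postgres_responding()
    # etcd is up but this node is not the leader: disable it.
    return False
```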
One problem with the above logic is that it never allows load-balancing connections to the read replica. However, that seems to be a limitation of any HAProxy-based design if we want automated connection switching, due to the inability to add new backends to HAProxy without restarting it. FYI, I plan to use Kubernetes networking to handle the load-balancing case instead.
One thing I don't understand is why we need an HTTP daemon for HAProxy auth at all. Isn't there some way it can check the Postgres port directly? I'm pretty sure HAProxy has something for this; what we really want is a check based on `pg_isready`. This is a serious issue if you want to run Postgres in containers, because we really don't want a container listening on two ports.
Also, if we can do the check via the postgres port, then we can implement whatever logic we want on the backend, including checks against etcd and internal postgres status.
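As one building block for that, a `pg_isready`-based check is easy to script, and HAProxy's external-check mechanism could run something like it instead of polling an HTTP daemon. A hedged sketch (host, port, and timeout are arbitrary assumptions):

```python
#!/usr/bin/env python3
import subprocess
import sys

def postgres_responding(host="127.0.0.1", port=5432, timeout=2):
    """Return True if pg_isready reports the server is accepting connections."""
    result = subprocess.run(
        ["pg_isready", "-h", host, "-p", str(port), "-t", str(timeout)],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    # pg_isready exits 0 when the server is accepting connections.
    return result.returncode == 0

if __name__ == "__main__":
    # Exit status usable by an external check: 0 = healthy, 1 = unhealthy.
    sys.exit(0 if postgres_responding() else 1)
```

Note that this only answers "is Postgres up?"; the leader/follower logic would still have to come from etcd or from querying Postgres itself, per the point above.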
Parenthetically: at first, the idea of implementing a custom worker for Postgres that handles the leader-election portion of Raft is appealing. However, this does not work with binary replication, because without etcd we have nowhere to store status information. And if we're using etcd anyway, we might as well rely on it as the source of truth. Therefore: let's keep governor/etcd.
@jberkus "One problem with the above logic is that this doesn't support ever load-balancing connections to the read replica. However, that seems to be a limitation with any HAProxy-based design if we want automated connection switching, due to an inability to add new backends to HAproxy without restarting. FYI, I plan to instead use Kubernetes networking to handle the load-balancing case"
You can add new backends (i.e., modify the HAProxy config) with zero downtime by reloading HAProxy with a little help from iptables. We're using this with great success: https://medium.com/@Drew_Stokes/actual-zero-downtime-with-haproxy-18318578fde6
Still seems like a heavy-duty work-around to do something which Kubernetes does as a built-in feature.
Given that etcd is the proper location for leader/follower state, `haproxy_status.sh` should respond after checking the leader information in etcd instead of checking for leadership in PostgreSQL. This will reduce the chance of writing data to a PostgreSQL that has lost its lock on the leader key, but has not failed over.
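For illustration, a minimal sketch of that check against the etcd v2 HTTP API; the key path, port, and member-name lookup are placeholder assumptions, since the real governor may lay out its keys differently:

```python
import json
import socket
import urllib.request

# Assumed key path and port; adjust to the cluster's actual layout.
ETCD_LEADER_URL = "http://127.0.0.1:4001/v2/keys/service/mycluster/leader"
MY_NAME = socket.gethostname()  # assumes member names are hostnames

def am_i_leader():
    """Return True if etcd says this node currently holds the leader key."""
    try:
        with urllib.request.urlopen(ETCD_LEADER_URL, timeout=2) as resp:
            leader = json.loads(resp.read())["node"]["value"]
    except Exception:
        # If etcd is unreachable, report "not leader" so HAProxy stops
        # routing writes here; whether to instead hold the last known
        # state is the open question tracked in issue #7 above.
        return False
    return leader == MY_NAME

# haproxy_status.sh (or its replacement) would return HTTP 200 when
# am_i_leader() is True and 503 otherwise.
```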