compose / governor

Runners to orchestrate a high-availability PostgreSQL
MIT License

haproxy_status.sh should get leader status from etcd #2

Open · Winslett opened this issue 9 years ago

Winslett commented 9 years ago

Given that etcd is the proper location for leader/follower state, haproxy_status.sh should respond after checking leader information from etcd instead of checking for leadership in PostgreSQL.

This will reduce the chance of writing data to a PostgreSQL instance that has lost its lock on the leader key but has not yet failed over.
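Something along these lines is what the check could do; the etcd URL, key path, and member name below are assumptions standing in for values that would come from the postgres*.yml config:

```python
#!/usr/bin/env python
"""Hypothetical replacement for haproxy_status.sh: answer the health check
by asking etcd who the leader is instead of asking PostgreSQL."""
import json
import sys
import urllib.request

# Assumed values: etcd's default client port, an example leader key path, and
# this node's member name as it would appear in its postgres*.yml.
ETCD_LEADER_URL = "http://127.0.0.1:2379/v2/keys/service/governor/leader"
MY_MEMBER_NAME = "postgresql0"

try:
    with urllib.request.urlopen(ETCD_LEADER_URL, timeout=2) as resp:
        leader = json.load(resp)["node"]["value"]
except Exception:
    # etcd unreachable or the key is missing: report unhealthy for now
    sys.exit(1)

# Exit 0 (healthy) only when this node currently holds the leader key.
sys.exit(0 if leader == MY_MEMBER_NAME else 1)
```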

tvb commented 9 years ago

Hi @Winslett any plans to implement this any time soon?

Winslett commented 9 years ago

@tvb I'm undecided about how this should work.

My core issue with moving this way is solving the "what if the etcd cluster goes away?" problem. I need to create another issue for that problem and will probably reference this one. If we relied on etcd state for leader/follower in haproxy_status.sh and etcd had a maintenance window, crashed, or had a network partition, then the Postgres cluster would go down. With the current behavior, etcd going away would cause governor.py to throw a urllib error, which would stop PostgreSQL.

In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current primary if possible, but not fail over. @jberkus and I chatted about this scenario. If etcd is inaccessible to the leader (network partition, etcd outage, or maintenance), the leader governor should expect a majority of follower governors to provide heartbeats to it. If follower heartbeats do not provide enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree.

[update: created the issue at https://github.com/compose/governor/issues/7]
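To make the proposal concrete, here is a rough sketch of that read-only fallback; the heartbeat bookkeeping, the window, and the function names are hypothetical, not anything governor does today:

```python
import time

HEARTBEAT_WINDOW = 30  # seconds a follower heartbeat stays valid; assumed value

def leader_should_accept_writes(follower_heartbeats, cluster_size, etcd_reachable):
    """Proposed rule: with etcd unreachable, keep accepting writes only while
    a majority of the cluster (leader plus fresh follower heartbeats) agrees."""
    if etcd_reachable:
        return True
    now = time.time()
    recent = [t for t in follower_heartbeats.values() if now - t < HEARTBEAT_WINDOW]
    votes = 1 + len(recent)  # the leader counts itself plus each fresh heartbeat
    return votes > cluster_size // 2

# Example: in a 3-node cluster whose followers last pinged 5 and 40 seconds ago,
# the leader keeps 2 of 3 votes and stays writable; losing both heartbeats means
# going read-only and waiting for etcd to come back.
```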

In the interim, until that problem is solved…

The more I think about this, the more I think governor.py should handle the state responses to HAProxy itself, removing the haproxy_status.sh files and moving the HTTP port configuration into the postgres*.yml files.

For people who know Python better than I do: is there a sensible way to run governor.py with both a looping runner and an HTTP listener?
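One common pattern, purely as a sketch: run the HTTP responder in a daemon thread next to the existing loop, using only the standard library. The port, the `State` placeholder, and the sleep stand in for whatever governor.py actually tracks:

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class State:
    """Placeholder for whatever leader/follower state the main loop keeps."""
    is_leader = False

class StatusHandler(BaseHTTPRequestHandler):
    """Answer HAProxy's httpchk: 200 when this node is the leader, else 503."""
    def do_GET(self):
        self.send_response(200 if State.is_leader else 503)
        self.end_headers()

def serve_status(port):
    HTTPServer(("0.0.0.0", port), StatusHandler).serve_forever()

if __name__ == "__main__":
    # HTTP listener in a daemon thread; the existing HA loop keeps the main
    # thread, so if the loop dies the whole process (and the check) dies too.
    threading.Thread(target=serve_status, args=(8008,), daemon=True).start()
    while True:
        # placeholder for governor's existing keep-alive / election cycle,
        # which would update State.is_leader on each pass
        time.sleep(10)
```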

tvb commented 9 years ago

"In a perfect scenario, if etcd is unavailable to a running cluster, the cluster should maintain the current primary if possible, but not fail over."

This is tricky as there would be no way for the primary to check its status.

jberkus commented 9 years ago

" If etcd is unaccessible by the leader (network partition, etcd outage, or maintenance), a leader governor should expect a majority of follower governors to provide heartbeats to the leader. If follower heartbeats are not providing enough votes, the leader governor would go read-only and the cluster would wait for etcd to return. I would start the process by modifying the decision tree."

My thinking was this:

That last reason is, IMHO, a good argument for HAProxy to do direct checks against each node as well as against etcd, via logic like this:

Is etcd responding?
    Is node marked leader in etcd?
        Is node responding?
            enable node
        else:
            disable node
    else:
        disable node
else:
    Is node responding?
        leave node enabled/disabled
    else:
        disable node
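For concreteness, the tree above could look something like this as a Python check; the etcd URLs, the node health URL, and every function name here are assumptions rather than existing governor code:

```python
import json
import urllib.request

def responding(url, timeout=2):
    """True if an HTTP GET to `url` succeeds; the URLs used here are assumptions."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

def marked_leader_in_etcd(etcd_leader_url, node_name):
    """True only if etcd answers and the leader key names this node."""
    try:
        with urllib.request.urlopen(etcd_leader_url, timeout=2) as resp:
            return json.load(resp)["node"]["value"] == node_name
    except Exception:
        return False

def haproxy_action(node_name, node_url, etcd_url, etcd_leader_url):
    """Return 'enable', 'disable', or 'leave' for one backend, following the tree above."""
    if responding(etcd_url):
        if marked_leader_in_etcd(etcd_leader_url, node_name):
            return "enable" if responding(node_url) else "disable"
        return "disable"
    # etcd is away: don't change membership, just drop nodes that stop answering
    return "leave" if responding(node_url) else "disable"
```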

One problem with the above logic is that it doesn't support load-balancing connections to the read replica at all. However, that seems to be a limitation of any HAProxy-based design if we want automated connection switching, given the inability to add new backends to HAProxy without restarting. FYI, I plan to use Kubernetes networking to handle the load-balancing case instead.

One thing I don't understand is why we need to have an HTTP daemon for HAProxy auth at all. Isn't there some way HAProxy can check the postgres port directly? I'm pretty sure HAProxy has something for this; what we really want is a check based on pg_isready. This is a serious issue if you want to run Postgres in containers, because we really don't want a container listening on two ports.

Also, if we can do the check via the postgres port, then we can implement whatever logic we want on the backend, including checks against etcd and internal postgres status.
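To illustrate the kind of backend-side logic that becomes possible, here is a small sketch that shells out to pg_isready and asks Postgres whether it is in recovery; the connection details and the postgres user are assumptions:

```python
import subprocess

def accepting_connections(host="127.0.0.1", port=5432):
    """pg_isready exits 0 when the server is accepting connections."""
    return subprocess.call(["pg_isready", "-h", host, "-p", str(port), "-q"]) == 0

def in_recovery(host="127.0.0.1", port=5432, user="postgres"):
    """Ask Postgres itself whether it is a replica via pg_is_in_recovery()."""
    out = subprocess.check_output(
        ["psql", "-h", host, "-p", str(port), "-U", user,
         "-t", "-A", "-c", "SELECT pg_is_in_recovery()"])
    return out.strip() == b"t"

def writable_primary(host="127.0.0.1", port=5432):
    # A valid write target answers pg_isready and is not in recovery; the etcd
    # leader check from the earlier sketch would layer on top of this.
    return accepting_connections(host, port) and not in_recovery(host, port)
```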

Parenthetically: at first, the idea of implementing a custom worker for Postgres that implements the leader-election portion of Raft is appealing. However, this does not work with binary replication, because without etcd we would have nowhere to store status information. And if we're using etcd anyway, we might as well rely on it as the source of truth. Therefore: let's keep governor/etcd.

bjoernbessert commented 9 years ago

@jberkus "One problem with the above logic is that this doesn't support ever load-balancing connections to the read replica. However, that seems to be a limitation with any HAProxy-based design if we want automated connection switching, due to an inability to add new backends to HAproxy without restarting. FYI, I plan to instead use Kubernetes networking to handle the load-balancing case"

You can add new backends (i.e., modify the HAProxy config) with zero downtime by reloading HAProxy with a little help from iptables. We're using this with great success: https://medium.com/@Drew_Stokes/actual-zero-downtime-with-haproxy-18318578fde6

jberkus commented 9 years ago

That still seems like a heavyweight workaround for something Kubernetes does as a built-in feature.