carlos8f / haredis

High-availability redis in Node.js.
https://npmjs.org/package/haredis

minimum number of nodes in a cluster #12


sdarwin commented 11 years ago
  1. Why couldn't you have an haredis cluster with just 2 nodes? What are the dangers or problems?
  2. Some ramblings: in MongoDB there are clusters that require a 3-node minimum, but one node can be an "arbiter" that doesn't hold data. That means they support a 2-node setup, in a certain way. With haredis, the client is the "arbiter".
carlos8f commented 11 years ago

With flexible-role replication, there are two main categories of availability I can think of:

  1. can a redis server connect to the majority of its peers?
  2. can the majority of clients connect to that server?

Hypothetically, if you take out the "majority" part, you're left with:

  1. can a redis server connect to any of its peers?
  2. can any of the clients connect to that server?

In the latter scenario it's possible, in the event of a network partition, that the server in question is cut off from all but one of its peers and therefore elects that peer master without the agreement of the majority of the cluster. In particular, if it were allowed to elect itself, it could do so without communication from any of its peers. It's this kind of condition that the "majority" rule is supposed to guard against -- the majority isn't technically necessary in haredis's case (it's not a formal vote-casting process like mongo's), but it provides assurance that when a server is elected master, most of the peers have confirmed it. Anything less would not be very reliable.
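To make the guard concrete, here is a minimal sketch of the majority rule described above (the names are illustrative only, not haredis internals):

```js
// Smallest number of nodes that constitutes a majority: 2 of 2, 2 of 3, 3 of 5.
function quorum(clusterSize) {
  return Math.floor(clusterSize / 2) + 1;
}

// A client may only proceed with an election if it can reach a majority of the
// configured nodes; with only a minority reachable, it may be on the wrong
// side of a partition and must not promote a new master.
function mayElect(reachableCount, clusterSize) {
  return reachableCount >= quorum(clusterSize);
}

console.log(mayElect(2, 3)); // true: 2 of 3 is a majority
console.log(mayElect(1, 2)); // false: a 2-node cluster can't fail over with 1 node down
```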

sdarwin commented 11 years ago

I agree, the concept of "majority" is important. Here is an idea. Not sure if it makes sense, or is worth doing, but I will write it: allow an haredis client to be assigned the role of "arbiter". Only one client should be configured this way, and only if there is an even number of redis servers (especially the case of 2), rather than an odd number. A majority is still required for a failover, but the "arbiter" client is special in that it counts like a redis server for voting/decision making.

That means you could have 2 redis servers + the arbiter, for a total of 3. If one redis server fails, then the arbiter client is capable of promoting the other redis server to master. That depends on the fact that the arbiter and the remaining redis server both agree that a failover looks necessary: both of them have lost contact with the previous master. So you'd have a 2 out of 3 vote, and the failover could take place. The rest of the haredis clients in the environment would not be capable of causing a failover; if they lost contact with the master, nothing would happen, they would just be "down" or go "read-only".

If there are 3 redis servers (or any odd number of redis servers), then there should not be an arbiter assigned; keep everything as it is now, as the status quo. If there is an even number of redis servers, then create one arbiter and use this method. This also depends on the idea that the arbiter haredis client keeps in contact with the servers perpetually, rather than sporadically (such as when requests come in) -- not sure about that part.
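To illustrate the proposed vote counting (purely hypothetical -- haredis has no arbiter role today, and the names below are made up):

```js
// With 2 servers + 1 arbiter, the voting group has 3 members, so the arbiter
// plus the one surviving server reach the 2-of-3 majority the proposal needs.
function failoverAllowed(agreeingServers, totalServers, haveArbiter) {
  var members = totalServers + (haveArbiter ? 1 : 0);  // arbiter counts as a member
  var votes = agreeingServers + (haveArbiter ? 1 : 0); // ...and adds its own vote
  return votes >= Math.floor(members / 2) + 1;
}

console.log(failoverAllowed(1, 2, true));  // true: 2 of 3, failover can proceed
console.log(failoverAllowed(1, 2, false)); // false: 1 of 2 is no majority (status quo)
```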

carlos8f commented 11 years ago

Interesting idea, but keep in mind, haredis is not mongo. It's a client-driven failover system, which means the only "input" the servers have is reporting which server they see as master; the clients attempt to resolve the conflict if more or fewer than 1 master is reported. Servers don't participate in voting on a new master. Having a special "arbiter" client is not logical to me -- if the arbiter is necessary for failover to happen, or its network connection to the servers is relied on as the main method of monitoring, it becomes a central point of failure in itself.

Have you checked out Redis Sentinel yet? It might be more of what you're looking for; sentinels act much like arbiters. Personally, though, I think they add a lot of complication to the deployment and can't automatically notify the node app of the new master, so I think haredis continues to be useful.
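For context, the haredis client keeps the familiar node_redis interface but takes a node list instead of a single host, and it follows the elected master transparently (a minimal example in the style of the haredis README; hosts and ports are placeholders):

```js
var redis = require('haredis');

// Three nodes, so clients can still reach a majority (2) when one is down.
var nodes = ['127.0.0.1:6380', '127.0.0.1:6381', '127.0.0.1:6382'];
var client = redis.createClient(nodes);

client.set('foo', 'bar', function (err) {
  if (err) throw err;
  client.get('foo', function (err, value) {
    if (err) throw err;
    console.log(value); // 'bar', served by whichever node is currently master
  });
});
```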

asilvas commented 11 years ago

If there are 3 hosts (not sure I understand that requirement, but I'll run with it): if server1 (master) goes down and server2 (slave) becomes master, and then server2 goes down, will server3 still take over as master? Or must there always be 2+ nodes online at all times?

I just want to make sure that with only 1 of 3 nodes available, things will still function, as this scenario does happen (1 is out intentionally, 1 fails).

carlos8f commented 11 years ago

In your scenario, when servers 1 and 2 are unavailable, the haredis client will queue up its commands until a majority is up.

Keep in mind that whether or not a server is "up" is subjective to the client's connection: other haredis clients may still see the 2 "downed" servers and therefore be better informed to elect a master (also, the clients do not connect to each other, so they can't reach a consensus that way). This is one of the trade-offs of using the client connection for failover rather than the replication connection between db servers; with the latter, a server can be declared "down" more objectively.
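Assuming the queueing behavior described above, a write issued during the outage is simply held rather than failed (a sketch, not a haredis test case; hosts and key names are placeholders):

```js
var redis = require('haredis');
var client = redis.createClient(['127.0.0.1:6380', '127.0.0.1:6381', '127.0.0.1:6382']);

// With servers 1 and 2 unreachable, this command is queued rather than failed;
// the callback fires once a majority returns and a master is confirmed.
client.set('counter', '1', function (err) {
  if (err) return console.error('write failed:', err);
  console.log('flushed from the queue once a majority came back up');
});
```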

The reason 2 hosts don't work: with a cluster of 2, if client A can only see server 1 and client B can only see server 2, neither has enough info to elect a master on its own. Sure, it doesn't have to be this way (and feel free to fork this project!), but I think it's important, considering that electing a master could result in the real master being wiped if a client happens to be able to connect to only one slave.
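A tiny worked example of the 2-node partition just described (illustrative names only):

```js
function quorum(n) { return Math.floor(n / 2) + 1; } // quorum(2) === 2

// Split brain: each client can reach exactly one server, so neither reaches
// quorum(2) and neither is allowed to elect a master on its own.
var clientA = { reachable: ['server1'] };
var clientB = { reachable: ['server2'] };

[clientA, clientB].forEach(function (c) {
  console.log(c.reachable.length >= quorum(2)); // false, false: no election
});
```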

asilvas commented 11 years ago

I'm fine with the logic of not being able to elect a new master due to being in the minority, but not being able to use the last remaining slave, as a slave, seems a waste. It voids the point of having that much redundancy.