Closed: saj closed this issue 10 years ago.
We also had a similar issue at some point, where minimum_master_nodes did not prevent the cluster from having two different views of the nodes at the same time.
As our indices were created automatically, some of the indices were created twice, once in each half of the cluster, with the two masters broadcasting different states, and after a full cluster restart some shards could not be allocated because the state had been mixed up. This was on 0.17, so I am not sure if data would still be lost, as the state is now saved with the shards. But the other question is what happens when an index exists twice in the cluster (as it has been created on every master).
I think we should have a method to recover from such a situation. As I don't know exactly how Zen discovery works, I cannot say how to solve it, but IMHO a node should only be in one cluster: in your second image, node 1 should either be with node 2, preventing 3 from becoming master, or with node 3, preventing 2 from staying master.
See Issue #2117 as well. I'm not sure whether unicast discovery is making it worse for you, but I think we captured the underlying problem over on that issue; I would like your thoughts too.
From #2117:
The split brain occurs if the nodeId(UUID) of the disconnected node is such that the disconnected node picks itself as the next logical master while pinging the other nodes(NodeFaultDetection).
Ditto.
The split brain only occurs on the second time that the node is disconnected/isolated.
I see a split on the first partial isolation. To me, these bug reports look like two different problems.
I believe I ran into this issue yesterday in a 3-node cluster: a node elects itself master when the current master is disconnected from it. The remaining participant node toggles between having the other nodes as its master before settling on one. Is this what you saw @saj?
Yes, @trollybaz.
I ended up working around the problem (in testing) by using elasticsearch-zookeeper in place of Zen discovery. We already had reliable Zookeeper infrastructure up for other applications, so this approach made a whole lot of sense to me. I was unable to reproduce the problem with the Zookeeper discovery module.
I'm pretty sure we're suffering from this in certain situations, and I don't think that it's limited to unicast discovery.
We've had bad networking, virtual machine stalls (the result of SAN issues, or VMware doing weird stuff), and even heavy GC activity; any of these can cause enough of a pause for aspects of the split brain to occur.
We were originally running a pre-0.19.5 version; 0.19.5 contained an important fix for an edge case I thought we were suffering from. But since moving to 0.19.10 we've had at least one split brain (VMware->SAN related) that caused 1 of the 3 ES nodes to lose touch with the master and declare itself master, while still maintaining links back to the other nodes.
I'm going to be tweaking our ES logging config to output DEBUG-level discovery logs to a separate file so that I can properly trace these cases, but there have simply been too many of these incidents not to conclude that ES is mishandling these adversarial environments.
I believe #2117 is still an issue and is an interesting edge case, but I think this issue here best represents the majority of the problems people are having. My gut/intuition indicates that the probability of this issue occurring drops with a larger cluster, so the 3-node, minimum_master_nodes=2 configuration is the most prevalent case.
It seems like when the 'split brain' new master connects to its known child nodes, any node that already has an upstream connection to an existing master should probably flag it as a problem, telling the newly connected master node "hey, I don't think you fully understand the cluster situation".
I believe there are two issues at hand. One is the possible culprits for a node being disconnected from the cluster: network issues, large GCs, a discovery bug, etc. The other issue, and the more important one IMHO, is the failure in the master election process to detect that a node belongs to two separate clusters (with different masters). Clusters should embrace node failures for whatever reason, but master election needs to be rock solid. It is a tough problem in systems without an authoritative process such as ZooKeeper.
To add more data to the issue: I have seen the issue on two different 0.20RC1 clusters. One having eight nodes, the other with four.
I'm not sure the former is really something ES should be actively dealing with. The latter I agree with, and it is the main point here: how ES detects and recovers from cases where two masters have been elected.
There was supposed to have been some code in, I think, 0.19.5 that 'recovers' from this state by choosing the side that has the most recent ClusterState object (see Issue #2042), but in practice it doesn't appear to be working as expected, because we see these child nodes accepting connections from multiple masters.
I think gathering the discovery-level DEBUG logging from the multiple nodes and presenting it here is the only way to get further traction on this case.
It's possible going through the steps in Issue #2117 may uncover edge cases related to this one (even though the source conditions are different); at least it might be a reproducible case to explore.
@s1monw nudge - have you had a chance to look into #2117 at all... ? :)
Paul, I agree that the former is not something to focus on. Should have stated that. :) The beauty of many of the new big data systems is that they embrace failure. Nodes will come and go, either due to errors or just simple maintenance. #2117 might have a different source condition, but the recovery process after the fact should be identical.
I have enabled DEBUG logging at the discovery level and I can pinpoint when a node has left/joined a cluster, but I still have no insights on the election process.
Suffered from this the other day when an accidental provisioning error left a 4 GB ES heap running on a host with only 4 GB of OS memory, which was always going to end in trouble. The node swapped, the process hung, and the intersection issue described here happened.
Yes, the provisioning error could have been avoided, and yes, use of mlockall may have prevented the destined-to-die-a-horrible-swap-death, but there are other scenarios that can cause a hung process (bad I/O causing stalls, for example) where the way ES handles the cluster state is poor and leads to this problem.
we hope very much someone is looking hard into ways to make ES a bit more resilient when facing these situations to improve data integrity... (goes on bended knees while pleading)
Btw. why not adopt ZK, which I believe would make this situation impossible(?)? I don't love the extra process/management that the use of ZK would imply..... though maybe it could be embedded, like in SolrCloud, to work around that?
From my understanding, the single embedded Zookeeper model is not ideal for production; a full Zookeeper cluster is preferred. I have never tried it myself, so I cannot personally comment.
FYI - there is a zookeeper plugin for ES
Oh, I didn't mean to imply a single embedded ZK. I meant N of them in different ES processes. Right Simon, there is the plugin, but I suspect people are afraid of using it because it's not clear if it's 100% maintained, if it works with the latest ES and such. So my Q is really about adopting something like that and supporting it officially. Is that a possibility?
@otisg: The problem with the ZK plugin is that, with clients being part of the cluster, they need to know about ZK in order to be able to discover the servers in the cluster. Some client libraries (such as the one used by the application that started this bug report -- I'm a colleague of Saj's) don't support ZK discovery. In order for ZK to be a useful alternative in general, there either needs to be universal support for ZK in client libraries, or a backwards-compatible way for non-ZK-aware client libraries to discover the servers (perhaps a ZK-to-Zen translator or something... I don't know, I've got bugger-all knowledge of how ES actually works under the hood).
We've gotten into this situation twice now in our QA environment: 3 nodes, minimum_master_nodes = 2. Log files at https://gist.github.com/aochsner/5749640 (sorry they are big and repetitive).
We are on 0.90.0 and using multicast.
As a bit of a walkthrough: sthapqa02 was the master, and all it noticed was that sthapqa01 went bye-bye and never rejoined. According to sthapqa02, the cluster was sthapqa02 (itself) and sthapqa03.
sthapqa01 is what appeared to have problems. It couldn't reach sthapqa02 and decided to create a cluster between itself and sthapqa03.
sthapqa03 went along w/ sthapqa01 to create a cluster and didn't notify sthapqa02.
So 01 and 03 are in a cluster and 02 thinks it's in a cluster w/ 03.
Just an update that this behaves much better in 0.90.3 with a dedicated master nodes deployment, but we are working on a better implementation down the road (with potential constraints of requiring fixed dedicated master nodes due to the nature of some consensus algorithm implementations; we will see how it goes...).
@kimchy that sounds promising. I would love to understand more about the changes in the 0.90.x series in this area, to understand what movements are going on. Is there a commit hash you remember that you could point me to, so I could peek at it?
By dedicated master nodes, do you mean nodes that perform only the master role and not the data role (so additional nodes on top of the existing data nodes)? This would sort of mimic how adding Zookeeper as a master-election coordinator works?
@kimchy Does 0.90.2 have the same features, or are they only available in 0.90.3?
Shay, thanks for the update.
For us, the problem has gone away with the adoption of 0.90.2. The actual underlying problem might not have been fixed, but the improved memory usage with elasticsearch 0.90/Lucene 4 has eliminated large GCs, which probably were the root cause of our disconnections. No disconnections means no need to elect another master.
This situation happened to us recently running 0.90.1 with minimum_master_nodes set to N/2 + 1, with N = 15. I'm not sure what the root cause was, but this shows that such a scenario can occur in larger clusters as well.
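For reference, that formula maps to a single elasticsearch.yml line; a minimal sketch, assuming 15 master-eligible nodes:

```yaml
# Quorum of 15 master-eligible nodes: floor(15 / 2) + 1 = 8
discovery.zen.minimum_master_nodes: 8
```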
We have been frequently experiencing this 'mix brain' issue in several of our clusters, up to 3 or 4 times a week. We have always had dedicated master-eligible nodes (i.e. master=true, data=false) and a correctly configured minimum_master_nodes, and we have recently moved to 0.90.3, but we have seen no improvement in the situation.
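For anyone unfamiliar with that setup, a dedicated master-eligible node looks roughly like this in elasticsearch.yml (a sketch, not an exact copy of our config):

```yaml
# Dedicated master-eligible node: takes part in master election and holds
# cluster state, but stores no index data.
node.master: true
node.data: false
```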
As a side note, the initial cause of the disruption to our cluster is, I imagine, 'something' to do with the network links between the nodes: one of the master-eligible nodes occasionally loses connectivity with the master node briefly, and "transport disconnected (with verified connect)" is all we get in the logs. We haven't figured out this issue yet (something is killing the tcp connection?), but it explains the frequency with which we are affected by this bug: it is a double hit, because the cluster is unable to recover itself correctly when the disconnect occurs.
@kimchy Is there any latest status on the 'better implementation down the road' and when it might be delivered?
Sounds like zookeeper is our reluctant interim solution.
Just as I was beginning plans to go to a set of dedicated master-only nodes, I read @trevorreeves' post above, where he's still hitting the same problem. Doh!
Our situation appears to be IO-wait related: a master node (also a data node) hits an issue that causes extensive IO wait (a _scroll-based search can trigger this; we already cap the number of streams and MB/second recovery rate through settings), and the JVM becomes unresponsive. The other nodes doing master fault detection are configured with 3 x 30-second ping timeouts, all of which fail, and then they give up on the master.
I'm not really sure what is stalling the master node JVM; I'm positive it is not GC related, and it is definitely linked to heavy IO wait. We have one node in one installation with a 'tenuous' connection to a NetApp storage array backing the volume used by the ES local disk image, and that seems to be the underlying root of our issues. But it is the way the ES cluster fails to recover from this situation, never properly re-establishing a consensus on the cluster, that causes problems (I don't mind the weirdness during times of wacky IO patterns that forms the split brain so much as I dislike the way ES fails to keep track of who thinks who's who in the cluster).
At this point, it does seem like the Zookeeper-based discovery/cluster management plugin is the most reliable way forward, though I'm not looking forward to setting that up, to be honest.
We haven't hit this but this report is worrying - is this being worked on? This is the kind of thing that'd make us switch to Zookeeper.
Just wanted to point out to Nik a comment in the other related issue: https://github.com/elasticsearch/elasticsearch/issues/2117#issuecomment-16078340
"Unfortunately, this situation can in-fact occur with zen discovery at this point. We are working on a fix for this issue which might take a bit until we have something that can bring a solid solution for this."
I wonder what has happened since then and if their findings correspond to my scenario.
For my clusters, split-brains always occur when a node becomes isolated and then elects itself as master. More visibility (logging) into the election process would be helpful. Re-discovery would be helpful as well, since I rarely see the cluster self-heal despite being in erroneous situations (nodes belonging to two clusters). I am on version 0.90.2, so I am not sure if I am perhaps missing a critical update, although I do scan the issues and commits.
Could you do me a huge favor and not patch this until, like, May or so? I need to finish some other things before the next installation of Jepsen. ;-)
Is there any update on this or timeline for when it will be fixed?
Ran into this very problem on a 4-node cluster.
Node 1 and Node 2 got disconnected and elected themselves as masters; Nodes 3 and 4 remained followers of both Node 1 and Node 2.
We do not have the option of running ZK.
Does anyone know how the election process is governed (I know it runs off the Paxos consensus algorithm)? In layman's terms, does each follower vote exactly once, or do they cast multiple votes?
We just ran into this problem on a cluster with 41 data nodes and 5 master nodes, running 0.90.9. @kimchy is your recommendation to use zookeeper and not zen?
@amitelad7 You have a few options running with Zen: you can increase the fd (fault detection) timeouts/retries/intervals if your network/node is unresponsive. The other option is to explicitly define master nodes, but in a case like yours where you have 5 masters it may get tricky.
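As a rough illustration, these are the fault-detection knobs being referred to (an elasticsearch.yml sketch; the values shown are my understanding of the 0.90-era defaults, so treat them as a starting point rather than a recommendation):

```yaml
# Zen fault-detection settings. Raising the timeout/retries makes the cluster
# more tolerant of nodes that pause (GC, IO wait) without answering pings.
discovery.zen.fd.ping_interval: 1s   # how often each node is pinged
discovery.zen.fd.ping_timeout: 30s   # how long to wait for a ping response
discovery.zen.fd.ping_retries: 3     # failed pings before a node is considered gone
```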
We experienced this problem in our test environment because of tcp connections (heartbeat?) getting dropped by a firewall after some time, leading to the "transport disconnected (with verified connect)" error, which results in a split brain as described in this issue.
I configured the "net.ipv4.tcp_keepalive_time" variable in the /etc/sysctl.conf to a lower value (e.g. 600 equals 10 minutes) which fixed the problem for us. No disconnects, no new master election, no split brain.
But giving my +1 for this issue to get fixed asap as it could still occur.
:+1:
Out of interest are you all running ES on EC2?
we're running on a private cloud of our own
@amitelad7 oh man >< Even worse.
41 nodes? Crazy. Did you try lowering the TCP keepalive setting like @mycrEEpy mentioned?
It's actually been quite stable over the past few weeks, so we haven't worked on further optimizations :)
@amitelad7 what does "quite stable" mean? :)
We are also running on a private cloud.
Part of our problem was incorrect Elasticsearch documentation. The docs listed the default ping timeout as 2s, so in an effort to improve the cluster, we raised the value to 5s. In reality the default is 30s, so I was actually lowering the value. The documentation is now fixed. We are now more resilient to network failures.
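If I understand the settings involved, the change amounted to a single line like this (a sketch; I'm assuming the setting in question was the Zen fault-detection ping timeout):

```yaml
# Intended as a raise over a documented 2s default; in reality this lowered
# the timeout from the actual 30s default.
discovery.zen.fd.ping_timeout: 5s
```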
@aphyr did you do any analysis using Jepsen on Elasticsearch?
Still pending. Been a bit overwhelmed.
I can confirm that partitions with nodes that can see both sides of the cluster reliably induce ElasticSearch split brain after about a hundred seconds. A bunch of ES guarantees seem to go out the window at that point; for instance, conditional puts can succeed against both primaries, leading to independent version histories and the loss of some or all conflicting updates to a key.
Here's a log showing the full invocation/completion history for five clients (one for each of five nodes) performing CaS operations on a single document via conditional update with version. https://gist.github.com/aphyr/10565113.
In this test, roughly a third of all writes are lost--many failed or were indeterminate due to the initial cluster transition. In the limit as t->infinity, the lost write fraction converges to 1/2.
It's actually much worse than I realized. Because ElasticSearch allocates IDs sequentially instead of using k-ordered flake IDs, any split-brain scenario guarantees that two inserts on different primaries will use the same ID--and when merged, one document silently clobbers the other. In this short test where the set is built by inserting one document per integer, with ES-assigned IDs, the cluster drops about a third of all documents inserted. The lost fraction converges to 1/2 as the duration of the split-brain rises.
@aphyr thanks for running it! A couple of weeks ago we started a branch to try to address some of these problems here: https://github.com/elasticsearch/elasticsearch/tree/feature/improve_zen (at least in the context of zen), but the work has only just started. The good news is that we now have test infra support to simulate these problems in ES (by having a simulated transport/network layer).
I see you pushed your Jepsen work on Elasticsearch, so we will make sure to run it as well, thanks!
@kimchy perhaps jepsen should just be integrated into your test suite or run as part of your CI server's tests. I'm not sure I can find a decent reason to replicate these kinds of tests.
@AeroNotix the benefit of doing similar tests in our test infrastructure is how simple they are to write and run to verify behavior. Check this test for example: https://github.com/elasticsearch/elasticsearch/blob/master/src/test/java/org/elasticsearch/discovery/DiscoveryWithNetworkFailuresTests.java. This is a simple test, easy to run using our integration tests, without needing to set up Jepsen or external dependencies. By having such a test, every time you run our test suite, these checks run as well, without needing a more complex setup.
Having said that, writing respective tests that simulate certain behaviors does not exclude running Jepsen as well, which we plan to do.
If you're playing around at home, you can reproduce these results using:
https://github.com/aphyr/jepsen/blob/master/src/jepsen/system/elasticsearch.clj
https://github.com/aphyr/jepsen/blob/master/test/jepsen/system/elasticsearch_test.clj#L52-L84
And yeah, I'm glad to see virtualized networking as a part of the Elasticsearch test suite. Definitely faster, and lets you explore a broader space of failure modes than Jepsen. Jepsen can only treat these systems as black boxes, so there are all sorts of timing/stochastic bugs I can't reach easily.
@aphyr thanks! We still have a way to go in terms of development on the mentioned branch, but we will make sure to run the Jepsen tests and analyze them as well (though they do seem to trigger the behavior mentioned here, which we also managed to simulate using our (new) test infra) and report back!
Btw, your work is highly appreciated! I think you mentioned that you were looking for contributions back to your project; can you point me at the right place to do so?
Thanks for the kind words, @kimchy, and thanks for all your hard work on ElasticSearch as well!
Jepsen is starting to coalesce around a new set of testing primitives, but a lot of stuff is up in the air and there's basically no documentation at this point. When I get a little breathing room after this talk I'll be sure to put up a proper contributing guide and clean up the API a bit.
The biggest issue for me right now is the fact that Knossos doesn't have a good strategy for dealing with hung processes, which invoke a request but are unable to determine if it succeeded or failed. N hung processes multiply runtime by a factor of n!, so I'd really appreciate any help folks could give in working around that! Entry point to the linearizability checker is here: https://github.com/aphyr/knossos/blob/master/src/knossos/core.clj#L344-L358
@aphyr will check it out! (don't want to derail this thread, so mailed you about it)
G'day,
I'm using ElasticSearch 0.19.11 with the unicast Zen discovery protocol.
With this setup, I can easily split a 3-node cluster into two 'hemispheres' (continuing with the brain metaphor) with one node acting as a participant in both hemispheres. I believe this to be a significant problem, because now minimum_master_nodes is incapable of preventing certain split-brain scenarios.

Here's what my 3-node test cluster looked like before I broke it:
Here's what the cluster looked like after simulating a communications failure between nodes (2) and (3):
Here's what seems to have happened immediately after the split: nodes (2) and (3) lost contact with one another (zen-disco-node_failed ... reason failed to ping).

At this point, I can't say I know what to expect to find on node (1). If I query both masters for a list of nodes, I see node (1) in both clusters.
Let's look at minimum_master_nodes as it applies to this test cluster. Assume I had set minimum_master_nodes to 2. Had node (3) been completely isolated from nodes (1) and (2), I would not have run into this problem. The left hemisphere would have enough nodes to satisfy the constraint; the right hemisphere would not. This would continue to work for larger clusters (with an appropriately larger value for minimum_master_nodes).

The problem with minimum_master_nodes is that it does not work when the split brains are intersecting, as in my example above. Even on a larger cluster of, say, 7 nodes with minimum_master_nodes set to 4, all that needs to happen is for the 'right' two nodes to lose contact with one another (a master election has to take place) for the cluster to split.

Is there anything that can be done to detect the intersecting split on node (1)?
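For concreteness, the discovery-related configuration on each node in this test looks roughly like the following (a sketch; the host names are placeholders rather than my actual hosts):

```yaml
# elasticsearch.yml (sketch) for each node in the 3-node test cluster:
# unicast Zen discovery with a quorum of 2 of the 3 master-eligible nodes.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2   # as in the scenario discussed above
```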
Would #1057 help?
Am I missing something obvious? :)