apache / pekko

Build highly concurrent, distributed, and resilient message-driven applications using Java/Scala
https://pekko.apache.org/
Apache License 2.0

Clustering issues leading to all nodes being downed #578

Open · fredfp opened this issue 1 year ago

fredfp commented 1 year ago

I'm reopening here an issue that I originally reported in the akka repo.

We had a case where an issue on a single node led to the whole akka-cluster being taken down.

Here's a summary of what happened:

  1. Healthy cluster made of 20ish nodes, running on k8s
  2. Node A: encounters issues, triggers CoordinatedShutdown
  3. Node A: experiences high CPU usage, maybe GC pause
  4. Node A: sees B as unreachable, broadcasts it (B is actually reachable, but is detected as unreachable because of A's high CPU usage, GC pause, or similar issues)
  5. Cluster state: A Leaving, B seen unreachable by A, all the other nodes are Up
  6. Leader cannot currently perform its duties (removing A) because of the reachability status (B seen as unreachable by A)
  7. Node A: times out some coordinated shutdown phases. Hypothesis: they timed out because the leader could not remove A.
  8. Node A: finishes coordinated shutdown nonetheless.
  9. hypothesis - Node A: quarantined associations to other cluster nodes
  10. Nodes B, C, D, E: SBR took decision DownSelfQuarantinedByRemote and is downing [...] including myself
  11. hypothesis - Node B, C, D, E: quarantined associations to other cluster nodes
  12. in a few steps, all remaining cluster nodes down themselves: SBR took decision DownSelfQuarantinedByRemote
  13. the whole cluster is down
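
As a side note for anyone trying to observe this, the transitions in the steps above (unreachability, Leaving, Downing, removal) can be watched by subscribing to cluster domain events. A minimal sketch; the actor and system names are illustrative:

```scala
import org.apache.pekko.actor.{ Actor, ActorLogging, ActorSystem, Props }
import org.apache.pekko.cluster.Cluster
import org.apache.pekko.cluster.ClusterEvent._

// Logs membership and reachability transitions so the cascade described above
// (unreachable -> Leaving -> Downed -> Removed) is visible in one place.
class ClusterWatcher extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[ReachabilityEvent])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive: Receive = {
    case UnreachableMember(m)       => log.warning("Member detected as unreachable: {}", m)
    case ReachableMember(m)         => log.info("Member is reachable again: {}", m)
    case MemberLeft(m)              => log.info("Member is Leaving: {}", m)
    case MemberDowned(m)            => log.warning("Member was downed: {}", m)
    case MemberRemoved(m, previous) => log.warning("Member removed: {} (previous status {})", m, previous)
    case _: MemberEvent             => // other transitions are not interesting here
  }
}

object ClusterWatcher {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("pekko-cluster-demo") // illustrative system name
    system.actorOf(Props[ClusterWatcher](), "cluster-watcher")
  }
}
```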

Discussions, potential issues:

Considering the behaviour of CoordinatedShutdown (phases can time out and shutdown continues), shouldn't the leader ignore unreachabilities added by a Leaving node and be allowed to perform its duties? At step 6 above, the Leader was blocked from removing A, but A still continued its shutdown process. The catastrophic ending could have been stopped here.

DownSelfQuarantinedByRemote: @patriknw 's comment seems spot on. At step 9, nodes B, C, D, E should probably not take into account the Quarantined from a node that is Leaving.

DownSelfQuarantinedByRemote: another case where Patrik's comment also seems to apply: quarantines coming from nodes that are downing themselves because of DownSelfQuarantinedByRemote should probably not be taken into account either.

At steps 10 and 12, any cluster singletons running on affected nodes wouldn't be gracefully shut down using the configured termination message. This is probably the right thing to do, but I'm adding this note here nonetheless.
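
For context, the termination message referred to here is the one configured on ClusterSingletonManager. A minimal sketch of that wiring; the Shutdown message, the MySingleton actor and the names are illustrative, not taken from the reported system:

```scala
import org.apache.pekko.actor.{ Actor, ActorSystem, Props }
import org.apache.pekko.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

// Illustrative termination message and singleton actor.
case object Shutdown

class MySingleton extends Actor {
  def receive: Receive = {
    case Shutdown =>
      // flush state / release resources, then stop; this step is skipped when the
      // hosting node is downed abruptly as in steps 10 and 12
      context.stop(self)
    case _ => // regular singleton work
  }
}

object SingletonSetup {
  def start(system: ActorSystem): Unit =
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = Props[MySingleton](),
        terminationMessage = Shutdown, // only delivered on a graceful hand-over
        settings = ClusterSingletonManagerSettings(system)),
      name = "my-singleton")
}
```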

fredfp commented 1 year ago

I have extra logs that may be useful:

Remote ActorSystem must be restarted to recover from this situation. Reason: Cluster member removed, previous status [Down]

zhenggexia commented 3 months ago

I also encountered the same problem, which caused my cluster to keep restarting. Is there a plan to fix it? When is a fix expected?

pjfanning commented 3 months ago

@fredfp Can you give us more info on this - https://github.com/akka/akka/issues/31095#issuecomment-1682261286

On the Apache Pekko side, we can read the Akka issues but not the Akka PRs (due to the Akka license not being compatible with Apache Pekko).

The issue appears to be with split brain scenarios from my reading of https://github.com/akka/akka/issues/31095 - specifically DownSelfQuarantinedByRemote events. Is it possible that we should just ignore DownSelfQuarantinedByRemote events when it comes to deciding to shut down the cluster?

fredfp commented 3 months ago

@pjfanning I think the issue can happen when a node shuts down during a partition.

Still, DownSelfQuarantinedByRemote events cannot simply be ignored. The root cause is that nodes should not learn they were quarantined by others in cases where the quarantine is harmless.

Indeed, some quarantines are harmless (as indicated by the method argument: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/Association.scala#L534). The issue is that such harmless quarantines should not be communicated to the other side, i.e., the quarantined association. However, they currently always are: https://github.com/apache/pekko/blob/main/remote/src/main/scala/org/apache/pekko/remote/artery/InboundQuarantineCheck.scala#L47

zhenggexia commented 3 months ago

@pjfanning Is there a plan to fix this issue? When is a fix expected?

CruelSummerday commented 3 months ago

I also experienced the same issue, leading to continuous restarts of my cluster. Is there a scheduled resolution for this? When can we anticipate a fix?

ZDevouring commented 3 months ago

@pjfanning Can you suggest a way to fix this bug as soon as possible? Thank you very much.

fredfp commented 3 months ago

This bug should hit quite seldom; if it happens often, it most likely means something is not right with your cluster and you should fix that first in all cases. Especially, make sure:

  • there's always available CPU for the cluster management duties
  • not to use pekko's internal thread pool for your own workloads
  • to make rolling updates slower so that the cluster is less unstable during them

mmatloka commented 3 months ago

This bug should hit quite seldom; if it happens often, it most likely means something is not right with your cluster and you should fix that first in all cases. Especially, make sure:

  • there's always available CPU for the cluster management duties
  • not to use pekko's internal thread pool for your own workloads
  • to make rolling updates slower so that the cluster is less unstable during them

The issue also appears in systems with heavy memory usage and long GC pauses. It is worth checking the GC strategy, GC settings, GC metrics, etc.
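
To make the second point concrete, a common approach is to declare a dedicated dispatcher and run your own blocking or CPU-heavy work on it, so the default dispatcher stays free for gossip, heartbeats and failure detection. A minimal sketch; the dispatcher name, pool size and system name are illustrative:

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem
import scala.concurrent.{ ExecutionContext, Future }

object DedicatedDispatcherExample {
  // In a real service this block would live in application.conf.
  private val config = ConfigFactory.parseString(
    """
    blocking-io-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor.fixed-pool-size = 16
      throughput = 1
    }
    """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit = {
    val system = ActorSystem("pekko-cluster-demo", config)

    // Run heavy work on the dedicated pool, not on pekko's default dispatcher.
    implicit val blockingEc: ExecutionContext =
      system.dispatchers.lookup("blocking-io-dispatcher")

    Future {
      Thread.sleep(1000) // simulated blocking workload
    }
  }
}
```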

He-Pin commented 3 months ago

How about using the classic transport for now? It seems the issue only lives in Artery.
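
For anyone wanting to try that, a rough sketch of switching from Artery to the classic (Netty-based) transport, assuming the configuration keys follow the Pekko remoting docs; hostname, port and system name are placeholders:

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object ClassicTransportExample {
  private val config = ConfigFactory.parseString(
    """
    pekko.actor.provider = cluster
    pekko.remote.artery.enabled = false
    pekko.remote.classic.netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
    """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit = {
    // Note: with the classic transport, node addresses use pekko.tcp:// rather
    // than the pekko:// protocol used by Artery (seed-nodes must match).
    ActorSystem("pekko-cluster-demo", config)
  }
}
```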

zhenggexia commented 3 months ago

How about using the classic transport for now? It seems the issue only lives in Artery.

  1. Running Akka 2.8.5 earlier on k8s resulted in a single node restart leading to cluster down (high memory and CPU)
  2. The above issues did not occur when running Akka 2.8.5 on the k8s cluster
  3. The above issues did not occur when using Akka to access the Nacos registration cluster
  4. Running Pekko 1.0.2 on k8s resulted in a single node restart causing cluster down

He-Pin commented 3 months ago

IIRC, Akka 2.8.x is licensed under the BSL :) I don't have an environment to reproduce the problem; maybe you can work out a multi-jvm test for that? I'm still super busy at work :(

zhenggexia commented 3 months ago

Currently my k8s cluster runs 26 pods. When one of the pods restarts because of insufficient resources, it often brings down the whole cluster. We process a fairly large amount of data, so resource usage is high. On other setups (for example, running in Docker and registering with Nacos), this problem has not appeared so far.

zhenggexia commented 1 month ago

Hello, has there been any progress on this issue? Is there a plan for when it will be fixed?😀

pjfanning commented 4 weeks ago

For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease

Pekko Management 1.1.0-M1 has a 2nd implementation of the Lease - the legacy one is CRD based while the new one uses Kubernetes native leases. https://github.com/apache/pekko-management/pull/218
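
For completeness, a rough sketch of what enabling the lease-majority SBR strategy with the Kubernetes lease looks like, assuming the configuration keys mirror the linked documentation; the system name is a placeholder:

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object SbrWithKubernetesLease {
  private val config = ConfigFactory.parseString(
    """
    pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
    pekko.cluster.split-brain-resolver {
      active-strategy = lease-majority
      lease-majority {
        # provided by the pekko-management Kubernetes lease module
        lease-implementation = "pekko.coordination.lease.kubernetes"
      }
    }
    """).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit =
    ActorSystem("pekko-cluster-demo", config)
}
```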

fredfp commented 4 weeks ago

For Kubernetes users, we would suggest using the Kubernetes Lease described here: https://pekko.apache.org/docs/pekko/current/split-brain-resolver.html#lease

That's what we use already and it didn't help in the current case. Do you expect it to resolve (or avoid) this issue? I think the lease helps the surviving partition confirm it can indeed stay up; it however doesn't help the nodes downing themselves, which is the observed behaviour described above.

Pekko Management 1.1.0-M1 has a 2nd implementation of the Lease - the legacy one is CRD based while the new one uses Kubernetes native leases. apache/pekko-management#218

Thank you for pointing it out, looking forward to it!

pjfanning commented 4 weeks ago

@fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves. When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?

fredfp commented 4 weeks ago

@fredfp It's good to hear that using the Split Brain Resolver with a Kubernetes Lease stops all the nodes from downing themselves.

Sorry, let me be clearer: using the SBR with a Kubernetes Lease does not stop all the nodes from downing themselves.

When you lose some of the nodes, are you finding that you have to manually restart them or can Kubernetes handle automatically restarting them using liveness and/or readiness probes?

When a node downs itself, the java process (running inside the container) terminates. The container is then restarted by k8s as usual, the liveness/readiness probes do not play a part in that. Does that answer your question?
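
For reference, the JVM exiting after a self-down is typically driven by coordinated shutdown's exit-jvm setting (carried over from Akka), which is what lets Kubernetes restart the container. A minimal sketch, assuming that setting is what the deployment relies on; the system name is a placeholder:

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object ExitJvmOnSelfDown {
  // When the SBR downs the local member, coordinated shutdown runs; with
  // exit-jvm enabled the process terminates and k8s restarts the container.
  private val config = ConfigFactory.parseString(
    "pekko.coordinated-shutdown.exit-jvm = on"
  ).withFallback(ConfigFactory.load())

  def main(args: Array[String]): Unit =
    ActorSystem("pekko-cluster-demo", config)
}
```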