Oracle Coherence Community Edition
https://coherence.community
Universal Permissive License v1.0

How to safely scale in storage enabled nodes in cloud environment? #64

Closed javafanboy closed 2 years ago

javafanboy commented 2 years ago

I am seeking general advice on how to safely "scale in" storage-enabled nodes in a cloud environment (in my case AWS). I would like to use an auto scaling group (ASG) to maintain a (typically slowly varying) number of nodes to accommodate the varying vCPU demand over the day/week for processing requests to a partitioned cache. One option would be to only allow the ASG to scale in ONE node at a time and set a long delay before it is allowed to scale in one more. Instead of a fixed delay, however, I would prefer to use the "lifecycle hook" support of the ASG, where an instance selected for scale-in can tell the ASG when it is OK to kill it (after it has performed an orderly shutdown) - see the sketch below. My question is whether there is a simple way for a storage-enabled node to tell Coherence that it intends to leave the cluster that will also work if rebalancing is already in progress (for instance due to a node failure), and if so, whether this mechanism would also work if the ASG is allowed to scale in more than one node at the same time, i.e. would the leaving nodes be allowed to do so in sequence?
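To make the idea concrete, the flow I have in mind is roughly the following. This is only a sketch: the ASG name, the hook name, and the delivery of the `Terminating:Wait` notification (e.g. via an SQS queue subscribed to the hook) are assumptions on my side, and the "orderly shutdown" here is simply `CacheFactory.shutdown()`.

```java
import com.tangosol.net.CacheFactory;
import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
import software.amazon.awssdk.services.autoscaling.model.CompleteLifecycleActionRequest;

public class ScaleInHookHandler {

    // Illustrative names only - these would match whatever is configured on the ASG.
    private static final String ASG_NAME  = "coherence-storage-asg";
    private static final String HOOK_NAME = "coherence-scale-in";

    /**
     * Invoked when this instance receives the Terminating:Wait lifecycle
     * notification for itself (e.g. read from an SQS queue).
     */
    public static void onTerminatingWait(String instanceId) {
        // 1. Leave the cluster in an orderly fashion so this member's
        //    partitions are transferred to the remaining storage members.
        CacheFactory.shutdown();

        // 2. Tell the ASG it may now terminate this instance.
        try (AutoScalingClient asg = AutoScalingClient.create()) {
            asg.completeLifecycleAction(CompleteLifecycleActionRequest.builder()
                    .autoScalingGroupName(ASG_NAME)
                    .lifecycleHookName(HOOK_NAME)
                    .instanceId(instanceId)
                    .lifecycleActionResult("CONTINUE")
                    .build());
        }
    }
}
```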

thegridman commented 2 years ago

Are you running Coherence directly on the AWS VMs or are you running in some form of Kubernetes cluster? In Kubernetes we have the Coherence Operator which will safely manage scaling of clusters and can hook into the Kubernetes horizontal Pod autoscaler. If you are running Coherence directly on the AWS VMs then you need to manage scaling yourself.

When scaling down, a lot of care needs to be taken to ensure there is no data loss. When trying to hook into autoscalers, this can sometimes be difficult. For example, a typical autoscaler will just scale down by a specific number of processes, e.g. you have a cluster of 50 members, the autoscaler decides you only need 40 and kills 10. This is bad news for Coherence, as it will almost certainly cause data loss. If you know your cache services are site safe, rack safe or machine safe, then you can kill all members in a given site, rack, or machine without data loss, but most autoscalers do not do this - and actually you probably don't want that either. When using Coherence in k8s, the autoscaler scales down by a given number, but the Coherence Operator actually controls the scale down under the covers and safely scales down, one member at a time, with safety/health checks between each member removal.
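If you do roll your own, the usual building block for that kind of safety check is the StatusHA value reported by the partitioned cache services. Below is a minimal sketch, assuming Coherence management is enabled (e.g. `-Dcoherence.management=all`) so the per-service MBeans are registered; the ObjectName pattern is the standard `Coherence` JMX domain, but treat the exact query as something to verify against your version.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class StatusHaCheck {

    /**
     * Returns true only if no cache service reports an ENDANGERED backup
     * status, i.e. losing one more member should not lose data.
     * Non-partitioned services report "n/a" and are effectively ignored.
     */
    public static boolean isSafeToLeave() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Matches the per-service MBeans registered when management is enabled.
        Set<ObjectName> services =
                server.queryNames(new ObjectName("Coherence:type=Service,*"), null);

        for (ObjectName name : services) {
            Object statusHA = server.getAttribute(name, "StatusHA");
            if ("ENDANGERED".equals(statusHA)) {
                return false;   // at least one service has no safe backup yet
            }
        }
        return true;
    }
}
```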

There is nothing in Coherence that lets a member signal that it is shutting down. Even if there were, there would still need to be a number of checks in place to determine whether it is "safe" for that member to leave the cluster. Those checks depend on a number of things, some of which can be application specific.

In the next release of Coherence (both CE and commercial), due in June, there is a health check API that allows applications to determine whether it is safe to remove a member. There is also a health HTTP endpoint that can be used to check health. I have no idea how the AWS auto scaling group works or what checks it allows, but maybe it can hook into the new health check to control when it scales down.
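I can't say yet exactly what the endpoint will look like, but if it ends up being a plain HTTP check, wiring it into a lifecycle hook could be as simple as polling it before completing the scale-in. The port and the `/safe` path in this sketch are purely illustrative assumptions, not the final API.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HealthGate {

    // Assumed values - the real port and path depend on how the new
    // health endpoint is configured once it is released.
    private static final URI HEALTH_URI = URI.create("http://localhost:6676/safe");

    /** Blocks until the member reports it is safe to be removed. */
    public static void awaitSafe() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(HEALTH_URI)
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        while (true) {
            try {
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                if (response.statusCode() == 200) {
                    return;   // member reports "safe", OK to proceed with scale-in
                }
            } catch (IOException e) {
                // endpoint not up yet or transient error - keep polling
            }
            Thread.sleep(5_000);
        }
    }
}
```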

javafanboy commented 2 years ago

No, we are not seeing any value in our use case for containerizing our Coherence cache nodes (in particular not with Kubernetes), so sadly we have to roll our own solution to this, as you suggest. An API for health checks sounds like a good idea - looking forward to when it is released! When only allowing auto scaling to remove ONE instance at a time, the API sounds like it could work well in lifecycle hooks that delay the node from leaving until the health status allows it...