akshayks opened 5 years ago
@akshayks
Thanks for using Escalator!
It sounds like Escalator's current method for selecting nodes to terminate on scale down (oldest first) isn't ideal for your use case. #105 definitely sounds like it could help, and a least-utilised method for selecting nodes to terminate is probably the best option. We were a little hesitant to implement this as it can lead to situations where some nodes stay around forever and are never terminated, because they are always being utilised.
I agree, the wording of that second statement is a little vague; this is probably better:

> Remove any nodes that have already been tainted and have exceeded the grace period and/or are considered empty.
Have you considered https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler? Escalator is very narrowly focused on job-based workloads that start and always stop, and wasn't really designed with Deployments/StatefulSets in mind, as you've noticed.
@awprice Appreciate your quick response.
As you can imagine, the cluster autoscaler was the first option we explored. However, there seem to be multiple caveats w.r.t. auto scaling groups spanning multiple availability zones that reduced its appeal. Escalator was highly recommended as an alternative, which led us to evaluate and eventually adopt it for our use case.
Is #105 on your roadmap at this time? If not, I suppose we could look into contributing a PR with some guidance from the escalator team. This is rather critical for us and any visibility/assistance you can provide on this topic will be very useful to us.
Hi @akshayks
Yes, unfortunately we also understand the caveats of cluster autoscaler. If you're having difficulties with StatefulSets and Volumes across multiple regions with cluster autoscaler, one thing you can do is split your nodegroup across separate ASGs (one for each AZ). Here's an example of that..
However, if Escalator is the way to go for your use case, we'd certainly be happy for you to contribute! We can provide some visibility and guidelines if you want to implement #105.
Please make sure all the changes are backwards compatible in functionality and configuration and don't change the default behaviour. For example, if you add a configuration option to choose between `oldest` and `least_used`, please make `oldest` the default when the key isn't given.
Also, as long as you update the unit tests to cover the changed behaviour, everything should be good.
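To make the backwards-compatibility point concrete, here's a minimal Go sketch of how such an option could be resolved with a safe default. The key name `taint_strategy` and the helper are assumptions for illustration only, not Escalator's actual configuration:

```go
package main

import "fmt"

// Possible values for a hypothetical taint-strategy config key.
const (
	strategyOldest    = "oldest"
	strategyLeastUsed = "least_used"
)

// resolveStrategy keeps the change backwards compatible: an absent
// (empty) key falls back to the current behaviour, oldest-first.
func resolveStrategy(configured string) (string, error) {
	switch configured {
	case "":
		// Default preserves existing behaviour when the key isn't given.
		return strategyOldest, nil
	case strategyOldest, strategyLeastUsed:
		return configured, nil
	default:
		return "", fmt.Errorf("unknown taint strategy %q", configured)
	}
}

func main() {
	s, _ := resolveStrategy("")
	fmt.Println(s) // oldest
}
```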
`sort.go` is where the current selection ordering comes from. It implements `sort.Interface`, which is used in `taintOldestN` when selecting the nodes to taint first, and in `untaintNewestN` for reversing that decision when needing to scale up by untainting tainted nodes first. You could probably start by implementing your own `sort.Interface` for both tainting and untainting, which could then be passed to these functions. Finally, implement a simple method in the config for choosing which sorter to use.
When using a least-utilised method you'll probably want to use `Allocatable` instead of `Capacity` for the node usage, and may want to think about expressing least_used as a percentage of the node's usage rather than as raw values (CPU, memory). One reason for this is that escalator supports Launch Templates with multiple instance types, so `1000m` CPU on one node may be less used in reality than, say, `100m` CPU on another.
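To illustrate the percentage idea, a small sketch; the helper name is hypothetical:

```go
package main

import "fmt"

// utilisationPercent returns requested/allocatable as a percentage.
// Using allocatable (capacity minus system reservations) and comparing
// percentages rather than raw millicores keeps mixed instance types
// comparable: 1000m requested on a 16-core node is far "emptier" than
// 100m requested on a 1-core node.
func utilisationPercent(requestedMilli, allocatableMilli int64) float64 {
	if allocatableMilli == 0 {
		// Treat a node with no allocatable capacity as fully used.
		return 100
	}
	return 100 * float64(requestedMilli) / float64(allocatableMilli)
}

func main() {
	fmt.Printf("big node:   %.1f%% utilised\n", utilisationPercent(1000, 16000))
	fmt.Printf("small node: %.1f%% utilised\n", utilisationPercent(100, 1000))
}
```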
Hi @Jacobious52
We did consider maintaining one ASG per AZ but the increased complexity that would introduce (3x the number of ASGs per cluster across multiple clusters) drove us towards alternate solutions.
Thank you for the detailed guide on implementing #105. This definitely gives us a good idea of the effort needed. A quick question, though: if we opted for something simpler and instead introduced a configurable parameter per node group that determined whether or not non-empty nodes should be excluded during scale down, would that be acceptable? Do you see this violating any assumptions in escalator? I suppose a "least utilized" strategy could obviate the need for such a configuration parameter, as it would naturally de-prioritize non-empty nodes. However, I do see the value in guaranteeing the sanctity of non-empty nodes in use cases such as ours. What are your thoughts?
I see. Would you be able to briefly explain how Escalator solves that problem? From what I understand, Escalator does not do anything smarter with AZ balancing than cluster autoscaler; it just asks the cloud provider for more capacity and relies on the cloud provider to balance across AZs. Is there something we've missed that helps in your case compared to cluster autoscaler? FWIW, we mainly use Escalator for job-based workload ASGs, and run 3x ASGs for stateful workloads.
For your solution, could this alternatively be solved by adjusting the configuration to suit your requirement of not deleting nodes with pods running? This could possibly be done by setting `hard_delete_grace_period` to an extremely large number, so that nodes are never force deleted until they are empty. If this method works for you, we'd be willing to accept an option to have, for example, `0` mean "infinite", rather than adding a separate setting?
@Jacobious52 Sorry for the radio silence. Your proposal sounds reasonable and I believe we should be able to contribute a PR for it.
As I understand it, both escalator and the cluster autoscaler make API calls to EC2 for scaling ASGs up or down. The only difference is that escalator does not require maintaining a separate ASG per AZ. The argument simply is: everything else being the same (or similar), it is preferable to use an autoscaler that results in less operational complexity. The workloads we are looking to autoscale do not require persistent storage and are therefore not sensitive to AZ placement. Issues like instance unavailability, AZ failure, etc. are handled automatically by EC2 (scaling up automatically spawns instances in AZs where capacity is available). Let me know if this makes sense.
We recently introduced escalator into our platform as a means to auto scale worker nodes in EKS. It has worked great for stateless workloads, realized as Kubernetes `Deployment`s. However, when attempting to use escalator for ASGs with `StatefulSet`s, we observed behavior that was, at least initially, unexpected.

When scaling down a `StatefulSet`, Kubernetes terminates pods in reverse order of the pod ordinal/index. As an example, for a `StatefulSet` with n replicas, pod `n-1` is terminated first, followed by `n-2`, and so on. Given that we run a single instance of `StatefulSet` pods per EKS worker node, what we see is that Kubernetes scales down the newest pod (the pod with the highest index) while escalator terminates the oldest node first. This results in the following sequence of events:

1. Kubernetes terminates pod `n-1` running on node `n-1` (one of the newer nodes).
2. Escalator terminates node `0` (the oldest node). This results in pod `0` being evicted and ending up in `PENDING` state.
3. Kubernetes reschedules pod `0` on node `n-1`.

As you can imagine, this sequence is disruptive, as a stateful workload is being forcefully relocated when terminating the newly vacated node might have been sufficient. Reading through the escalator documentation seemed to suggest this was indeed expected. However, these two statements in particular seemed contradictory to each other:
https://github.com/atlassian/escalator/blob/master/docs/scale-process.md
https://github.com/atlassian/escalator/blob/master/docs/configuration/nodegroup.md
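The reverse-ordinal scale down described above can be sketched as a simple ordering function (illustrative only; this is standard StatefulSet behavior, not escalator code):

```go
package main

import "fmt"

// statefulSetScaleDownOrder returns the order in which Kubernetes
// terminates pods when a StatefulSet scales from n replicas down to
// zero: highest ordinal first, pod 0 last.
func statefulSetScaleDownOrder(n int) []int {
	order := make([]int, 0, n)
	for i := n - 1; i >= 0; i-- {
		order = append(order, i)
	}
	return order
}

func main() {
	// For 3 replicas, pod 2 goes first and pod 0 last, while an
	// oldest-first node scaler removes node 0's host first.
	fmt.Println(statefulSetScaleDownOrder(3)) // [2 1 0]
}
```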
In summary, I have the following comments / questions:
Thanks in advance for any help on this topic and for an amazing product.