kubernetes-sigs / karpenter

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Apache License 2.0

feat: Consolidation tolerance #795

Open stevenpitts opened 9 months ago

stevenpitts commented 9 months ago

Description

What problem are you trying to solve?

I am trying to reduce the frequency of consolidation on clusters that have frequent but insignificant resource request changes.

An active cluster can cause frequent consolidation events. For example, if a Deployment with an HPA scales up and down by one replica every 10 minutes, it's very likely that a new node will be spun up and then spun down every 10 minutes so that cost stays optimized. This could even result in a packed node being deleted, if Karpenter decides that a different node type or number of nodes would be more cost-efficient.

That can be really disruptive. PDBs help, but for them to guard against users experiencing slowness, you'd need to set a PDB with a maxUnavailable of practically 1%.
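For illustration, a PodDisruptionBudget along those lines would look roughly like the sketch below; the name and selector are placeholders.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb              # placeholder name
spec:
  # A percentage this low effectively blocks almost every voluntary eviction,
  # including the evictions Karpenter performs during consolidation.
  maxUnavailable: "1%"
  selector:
    matchLabels:
      app: web               # placeholder label
```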

Once a consolidationPolicy of WhenUnderutilized works alongside consolidateAfter, that will help out greatly, but it would still result in consolidation likely happening every (for example) 2 hours, even with very low net resource changes.
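For reference, a minimal sketch of the combination being hoped for, assuming the karpenter.sh/v1beta1 NodePool API (template and requirements omitted for brevity):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    consolidateAfter: 2h   # not accepted alongside WhenUnderutilized at the time of this issue
```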

I think a way of configuring "consolidation tolerance" would help here. One implementation could be a way of specifying cost tolerance. In pseudo-configuration, there could be a consolidationCostTolerance field that I might set to "$50 per hour".

If an HPA decides a Deployment needs a new replica and there's no space, Karpenter would spin up a new combination of nodes that has enough space for all desired pods but is still cost effective. Later on, the HPA might decrement the desired replicas. Karpenter would normally want to consolidate at that point, since there's now a more cost-effective combination of nodes for the requested resources. The idea is that consolidation would not happen unless currentCostPerHour - consolidatedCostPerHour is greater than $50. This way, consolidation would not trigger until there is a significant amount of unused resources on nodes.
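In hedged pseudo-configuration, the proposed knob might look like the sketch below. consolidationCostTolerance is not an existing Karpenter field, and the units and format are purely illustrative.

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    # HYPOTHETICAL field: skip consolidation unless
    # currentCostPerHour - consolidatedCostPerHour > 50 (USD per hour).
    consolidationCostTolerance: "50"
```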

How important is this feature to you?

This feature is fairly important. Even when all the features described in disruption controls become stable, existing solutions only reduce the frequency of consolidation, slow it down, or block it during certain hours. We could set a 1% maxUnavailable PDB on every Deployment, but that feels like a pretty extreme demand.

ellistarn commented 9 months ago

We've discussed the idea of an "improvement threshold" https://github.com/aws/karpenter-core/pull/768/files#diff-e6f78172a1d86c735a03ec76853021c670f4203f387c45b601670eca0e2ae1a4R26, which may model this quite nicely. Thoughts?

stevenpitts commented 9 months ago

> We've discussed the idea of an "improvement threshold" https://github.com/aws/karpenter-core/pull/768/files#diff-e6f78172a1d86c735a03ec76853021c670f4203f387c45b601670eca0e2ae1a4R26, which may model this quite nicely. Thoughts?

That does seem like what I'm looking for! The design doc appears primarily focused on a spot issue I'm not too familiar with, but

> Note: Regardless of the decision made to solve the spot consolidation problem, we’d likely want to implement a price improvement in the future to prevent consolidation from interrupting nodes to make marginal improvements.

:+1:

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

stevenpitts commented 6 months ago

/remove-lifecycle stale

Removing the stale label, since it's still not totally clear what direction the project is going in with regard to this problem.

sumeet-baghel commented 5 months ago

@stevenpitts What is your current strategy to mitigate this problem?

Have you tried creating a custom PriorityClass with a higher priority for critical workloads? This might help in a scenario where Karpenter decides to delete a few nodes.

I haven't used Karpenter myself, so this might be a dumb question.
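For reference, a custom PriorityClass along the lines suggested above might look like this sketch; the name, value, and description are placeholders.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-workload        # placeholder name
value: 1000000                   # higher value = higher scheduling priority
globalDefault: false
description: "Priority class for workloads that should be scheduled ahead of others."
```

Pods would then opt in by setting priorityClassName: critical-workload in their spec.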

stevenpitts commented 5 months ago

@sumeet-baghel Hello stranger! Right now we're just relying on do-not-disrupt annotations for temporary critical workloads. Haven't found a great solution yet.
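For anyone landing here later, a minimal sketch of that approach, assuming the karpenter.sh/do-not-disrupt pod annotation; the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: temporary-critical-job          # placeholder name
  annotations:
    # Tells Karpenter not to voluntarily disrupt the node while this pod is running.
    karpenter.sh/do-not-disrupt: "true"
spec:
  containers:
    - name: main
      image: busybox:1.36               # placeholder image
      command: ["sleep", "3600"]
```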

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

stevenpitts commented 2 months ago

/remove-lifecycle stale

ellistarn commented 2 months ago

Anyone interested in picking up "PriceImprovementThreshold"?

stevenpitts commented 2 months ago

@ellistarn I think the RFC leaves it unclear what the maintainers think the solution should look like. Is there a more specific doc I should read about it? Or are you still looking for feedback/opinions on the RFC?