kubernetes-sigs / karpenter

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Apache License 2.0

Add NodePoolGroupLimit CRD for limits that span NodePools #1747

Open · JacobHenner opened this issue 1 month ago

JacobHenner commented 1 month ago

Description

What problem are you trying to solve?

Today, limits can only be specified on individual NodePools. While this works for simple cases, it is insufficient when multiple NodePools form a logical group of compute that should share a limit. This most often happens when variations in a NodePool's properties beyond its requirements mandate the use of multiple NodePools, but the NodePools are still related in a way that is relevant to limits (e.g. same department, team, application, or budget line item).

For example, an organization might group limits by team. A team might require nodes labelled in two distinct ways, necessitating the use of two NodePools. Splitting the team's limit in half for each NodePool might not be sufficient if the balance of nodes between the NodePools varies over time.
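As an illustration of the current workaround, the team's budget has to be split statically across its NodePools using the existing per-NodePool `spec.limits` field. The sketch below uses hypothetical NodePool names, labels, and limit values, and assumes an AWS EC2NodeClass; the point is only that each NodePool carries a fixed share of the budget:

```yaml
# Two NodePools for the same team, each statically assigned half of the
# team's 100-CPU / 128Gi budget. If most workloads land on one NodePool,
# the other half of the budget is effectively stranded.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: frontend-bingo-amd64     # hypothetical name
  labels:
    team: frontend
    service: bingo
spec:
  limits:
    cpu: "50"
    memory: 64Gi
  template:
    spec:
      nodeClassRef:              # provider-specific; AWS shown as an example
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
---
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: frontend-bingo-arm64     # hypothetical name
  labels:
    team: frontend
    service: bingo
spec:
  limits:
    cpu: "50"
    memory: 64Gi
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["arm64"]
```

With the feature proposed below, both NodePools could instead share a single limit selected by their team and service labels.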

I propose a NodePoolGroupLimit CRD (or a similar appropriate name) that would allow a defined limit to apply to all NodePools chosen by a label selector. If multiple NodePoolGroupLimit objects select the same NodePool, the most restrictive limit should take precedence.

It might look something like this:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: frontend-bingo
  labels:
    team: frontend
    service: bingo
spec:
  selector:
    # label selector for NodePool labels
    team: frontend
    service: bingo
  limits:
    cpu: "100"
    memory: 128Gi
```
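To illustrate the precedence rule mentioned above (again purely a sketch of the proposed CRD, not an existing API), a second, broader object selecting the same NodePools would not raise their effective limit; the most restrictive match would win:

```yaml
# Hypothetical second NodePoolGroupLimit under the proposed CRD. It selects
# every NodePool labelled team: frontend, which includes the NodePools matched
# by frontend-bingo above. For those NodePools, the effective limit would be
# the more restrictive of the two, i.e. cpu: "100" / memory: 128Gi.
apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: frontend-all
spec:
  selector:
    team: frontend
  limits:
    cpu: "400"
    memory: 512Gi
```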

How important is this feature to you?

k8s-ci-robot commented 1 month ago

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
njtran commented 1 month ago

Based on your request, it seems like it's common for teams to have multiple NodePools, where there are overarching org-wide constraints across the cluster. It also seems like you don't necessarily want a global limit across the whole cluster; you want something in between.

Are you willing to open up an RFC to talk about your proposed solution and alternatives to the solution that you've explored?

JacobHenner commented 1 month ago

> Based on your request, it seems like it's common for teams to have multiple NodePools, where there are overarching org-wide constraints across the cluster.

Not quite - in my case there are team constraints that need to be applied across multiple NodePools belonging to each team. It'd be insufficient for there to be one NodePool per team, as teams require several different configurations that cannot be expressed using a single NodePool.

> It also seems like you don't necessarily want a global limit across the whole cluster; you want something in between.

Correct

> Are you willing to open up an RFC to talk about your proposed solution and alternatives to the solution that you've explored?

Yes

stevehipwell commented 3 weeks ago

@njtran as it is currently very unlikely that a single NodePool could represent even a basic group of compute (even supporting both AMD64 and ARM64, or both on-demand and spot, can't be handled by a single NodePool without significant effort), I'd suggest that this feature is required for almost all real-world scenarios where limits are needed.

If done correctly (supporting an empty `{}` selector), this approach could also work for #745.
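For example (again only a sketch of the proposed CRD), an empty selector could act as a cluster-wide limit:

```yaml
# Hypothetical: an empty selector matches every NodePool, giving a single
# cluster-wide limit across all of them, which is how this could serve #745.
apiVersion: karpenter.sh/v1
kind: NodePoolGroupLimit
metadata:
  name: cluster-wide
spec:
  selector: {}
  limits:
    cpu: "1000"
    memory: 2Ti
```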