kubernetes-sigs / karpenter

Karpenter is a Kubernetes Node Autoscaler built for flexibility, performance, and simplicity.
Apache License 2.0

Support dynamic deployment options for karpenter binary #1400

Open elmiko opened 2 months ago

elmiko commented 2 months ago

Description

What problem are you trying to solve?

As a karpenter user, I would like more options for where I run the karpenter binary (e.g. in a different cluster than the auto-provisioned cluster). Having a way to specify one kubeconfig for the karpenter API objects (NodePool, NodeClass, NodeClaim) and another kubeconfig for the monitored API objects (Node, Pod) would solve my problem by allowing me to run karpenter where I choose.
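For illustration, here is a minimal sketch of what that split might look like with controller-runtime; the kubeconfig paths are hypothetical and nothing here is a configuration karpenter supports today:

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// buildClient constructs a controller-runtime client from a kubeconfig path.
func buildClient(kubeconfigPath string) (client.Client, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	// A real setup would register the karpenter CRD types into the scheme;
	// the default scheme used here only knows the core Kubernetes types.
	return client.New(cfg, client.Options{})
}

func main() {
	// Hypothetical paths: one cluster holds the karpenter API objects
	// (NodePool, NodeClass, NodeClaim), the other holds Nodes and Pods.
	karpenterClient, err := buildClient("/etc/karpenter/karpenter.kubeconfig")
	if err != nil {
		panic(err)
	}
	monitoredClient, err := buildClient("/etc/karpenter/monitored.kubeconfig")
	if err != nil {
		panic(err)
	}
	_, _ = karpenterClient, monitoredClient // wire these into the controllers
}
```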

How important is this feature to you?

This feature is important because not all users will want to run karpenter in the same cluster that is being actively auto-provisioned. For example, in a hub-spoke Kubernetes service topology, an operator may want to isolate all control-plane controllers to a hub cluster, while compute clusters are delivered to users for their workloads. In this topology, the karpenter API objects might exist in the hub cluster, while the spoke clusters contain the Node and Pod objects that affect karpenter's decisions.

Additionally, the cluster-api karpenter provider will rely heavily on this feature, as cluster-api allows for hub-spoke as well as single-cluster topologies. Without this feature, a karpenter provider will need to use a multiplexing (or API-discriminating) client to navigate the split in kubeconfigs. Multiplexing clients are an area of active investigation, but there do not appear to be any concrete implementations at the time of writing; a rough sketch of the idea follows below.
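For context, a multiplexing client along these lines might route requests by API group. Everything below (the type, the routing rule) is illustrative rather than an existing implementation, only Get is shown, and a real client would have to wrap every method of the interface:

```go
package multiplex

import (
	"context"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// multiplexClient embeds a workload-cluster client and redirects requests
// for selected API groups (e.g. karpenter.sh) to a management-cluster client.
type multiplexClient struct {
	client.Client                // workload cluster, used by default
	management client.Client    // management cluster, used for managed groups
	managed    map[string]bool  // API groups served by the management cluster
}

// pick selects the client that owns the given API group.
func (m *multiplexClient) pick(gvk schema.GroupVersionKind) client.Client {
	if m.managed[gvk.Group] {
		return m.management
	}
	return m.Client
}

// Get resolves the object's GroupVersionKind from the scheme and dispatches
// to whichever cluster owns that API group.
func (m *multiplexClient) Get(ctx context.Context, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error {
	gvks, _, err := m.Scheme().ObjectKinds(obj)
	if err != nil || len(gvks) == 0 {
		return m.Client.Get(ctx, key, obj, opts...)
	}
	return m.pick(gvks[0]).Get(ctx, key, obj, opts...)
}
```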

This is not to suggest that a single karpenter instance would manage multiple clusters. Karpenter would still be a single cluster provisioning tool, but it could be run within a namespace of a different cluster than it manages.

k8s-ci-robot commented 2 months ago

This issue is currently awaiting triage.

If Karpenter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.
elmiko commented 2 months ago

had a great discussion about this at the karpenter meeting today. there are a couple of complicating factors that make this challenging:

it is unlikely that the data model for karpenter will change at this point to support namespacing of the karpenter CRDs. it appears that, for the time being, we can place the karpenter binary wherever we like, but the karpenter CRDs will need to live in the same cluster as the Nodes and Pods being observed.

on cluster-api this means the most likely supported configuration will be to supply a kubeconfig for the karpenter CRDs and the observed Nodes/Pods, and an optional kubeconfig for the location of the cluster-api CRDs. this would allow deployment of the karpenter binary in either hub or spoke clusters, but the karpenter CRDs must exist in the spoke cluster. a sketch of that flag layout follows below.
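to make that concrete, here is a minimal sketch of the flag layout and fallback behavior; the flag names are hypothetical and not something the provider exposes today:

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical flags sketching the configuration described above;
	// neither flag exists in karpenter or the cluster-api provider today.
	workload := flag.String("workload-kubeconfig", "",
		"kubeconfig for the cluster holding the karpenter CRDs, Nodes, and Pods")
	capi := flag.String("cluster-api-kubeconfig", "",
		"optional kubeconfig for the cluster holding the cluster-api CRDs")
	flag.Parse()

	workloadCfg, err := clientcmd.BuildConfigFromFlags("", *workload)
	if err != nil {
		log.Fatal(err)
	}

	// Single-cluster topology: fall back to the workload cluster when no
	// separate cluster-api kubeconfig is supplied.
	capiCfg := workloadCfg
	if *capi != "" {
		if capiCfg, err = clientcmd.BuildConfigFromFlags("", *capi); err != nil {
			log.Fatal(err)
		}
	}
	_, _ = workloadCfg, capiCfg // build clients/managers from these configs
}
```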