kubernetes / kops

Kubernetes Operations (kOps) - Production Grade k8s Installation, Upgrades and Management
https://kops.sigs.k8s.io/
Apache License 2.0

Unable to configure disruption controls for karpenter #16311

Closed · clayrisser closed this 3 weeks ago

clayrisser commented 7 months ago

I am unable to figure out how to set a disruption `consolidationPolicy` and `expireAfter` on my Karpenter node pools in kOps. Where do I configure this?

The Karpenter docs discuss this here:

https://karpenter.sh/v0.32/concepts/nodepools/#specdisruption

I don't even see a CRD for Karpenter NodePools, so I'm guessing kOps has another way of managing the disruption controls?

```yaml
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
```
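
For reference, this is roughly where that block sits in a full v1beta1 NodePool per those docs (a minimal sketch; the `nodeClassRef` name is just illustrative):

```yaml
# Sketch of an upstream Karpenter v1beta1 NodePool showing where the
# disruption block sits; the nodeClassRef name is illustrative.
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        name: default # references an EC2NodeClass
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: 720h # 30 * 24h = 720h
```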
moshevayner commented 7 months ago

From what I can tell, kOps installs Karpenter version 0.31.3 by default, which didn't yet support the NodePool concept according to the docs (I hope I'm not wrong there); ref: https://github.com/kubernetes/kops/blob/d489024714013523bb1df74a58eaa9b99f6805b2/pkg/model/components/karpenter.go#L38-L40. That leads me to believe it isn't supported in kOps right now, so we might need to put in some effort to add it.
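
For context, in the v1alpha5 API that ships with 0.31.x, the closest equivalents of those disruption knobs are separate Provisioner fields; roughly (a sketch, values illustrative):

```yaml
# Rough v1alpha5 (Karpenter <= 0.31.x) counterparts of the v1beta1
# disruption block; consolidation and expiry are separate fields here.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  consolidation:
    enabled: true                   # ~ consolidationPolicy: WhenUnderutilized
  ttlSecondsUntilExpired: 2592000   # 30 days, ~ expireAfter: 720h
```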

I don't mind taking a stab at this one, wdyt @hakman @rifelpet @olemarkus?

hakman commented 7 months ago

> I don't mind taking a stab at this one, wdyt @hakman @rifelpet @olemarkus?

My impression is that, if we want to move Karpenter support to a newer version, we would need to move from providing the LaunchTemplates to doing everything via Karpenter objects.

https://github.com/kubernetes/kops/blob/d489024714013523bb1df74a58eaa9b99f6805b2/upup/models/cloudup/resources/addons/karpenter.sh/k8s-1.19.yaml.template#L1796-L1874
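
For illustration, in the v1beta1 API the LaunchTemplate-style settings would move into an EC2NodeClass that each NodePool references; a rough sketch (the role and discovery tags are placeholders):

```yaml
# Sketch of the v1beta1 EC2NodeClass that takes over what the kOps-managed
# LaunchTemplates provide today; role and discovery tags are placeholders.
apiVersion: karpenter.k8s.aws/v1beta1
kind: EC2NodeClass
metadata:
  name: default
spec:
  amiFamily: AL2
  role: KarpenterNodeRole-my-cluster # placeholder IAM role name
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster # placeholder discovery tag
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-cluster
```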

moshevayner commented 7 months ago

> My impression is that, if we want to move Karpenter support to a newer version, we would need to move from providing the LaunchTemplates to doing everything via Karpenter objects.

Yeah, that makes sense to me. So would that (theoretically) be a process similar to any other cloudup addon, such as aws-cni, where we update the template (and potentially supporting resources, such as template functions) according to the vendor chart? Something like the sketch below.
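
Purely as a hypothetical sketch (the `GetNodePools` helper and `.Name`/`.ExpireAfter` fields below are made up for illustration, not existing kOps template functions):

```yaml
# Hypothetical sketch only: GetNodePools, .Name and .ExpireAfter are
# made-up stand-ins, not existing kOps template functions or fields.
{{ range $np := GetNodePools }}
---
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: {{ $np.Name }}
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: {{ $np.ExpireAfter }}
{{ end }}
```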

hakman commented 7 months ago

Yes. The good part is that we have a Karpenter e2e test, so it should be easy to test via a WIP PR.

moshevayner commented 7 months ago

Sounds good! I'll give that a try. Thanks!

/assign

teocns commented 5 months ago

From my understanding it's unlikely to be possible, but it doesn't hurt to ask: is there any workaround for getting upstream Karpenter to manage the InstanceGroups of a current kOps release?

k8s-triage-robot commented 2 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle stale

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/lifecycle rotten

k8s-triage-robot commented 3 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).

/close not-planned

k8s-ci-robot commented 3 weeks ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kops/issues/16311#issuecomment-2285256851):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.