Open jzhn opened 9 months ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Description
What problem are you trying to solve?
Currently, Karpenter makes scheduling and disruption decisions under the assumption that pod resource requests are immutable. Since Kubernetes 1.27, InPlacePodVerticalScaling has been available as an alpha feature, targeting beta post-1.30. With InPlacePodVerticalScaling, pod resource requests and limits become mutable.
A common use case of InPlacePodVerticalScaling is to mitigate startup issues for heavyweight applications such as Java services: a large resource request is allocated at startup, and once the Pod becomes ready, a controller lowers the request to free up resources on the node.
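The startup-boost pattern above can be sketched as a small controller decision function. This is a hypothetical, simplified illustration — the `ResourceList` and `PodState` types and the `desiredRequests` function are stand-ins, not the real Kubernetes or Karpenter API:

```go
package main

import "fmt"

// ResourceList is a simplified stand-in for Kubernetes resource requests.
type ResourceList map[string]string

// PodState is a minimal stand-in for pod status.
type PodState struct {
	Ready bool
}

// desiredRequests returns the requests a hypothetical controller would set
// on the pod via an in-place resize: large while the pod warms up, smaller
// once it reports Ready.
func desiredRequests(state PodState, startup, steady ResourceList) ResourceList {
	if !state.Ready {
		return startup // e.g. JVM warm-up: keep the large allocation
	}
	return steady // ready: shrink requests to free capacity on the node
}

func main() {
	startup := ResourceList{"cpu": "4", "memory": "4Gi"}
	steady := ResourceList{"cpu": "1", "memory": "2Gi"}
	fmt.Println(desiredRequests(PodState{Ready: false}, startup, steady)["cpu"]) // prints "4"
	fmt.Println(desiredRequests(PodState{Ready: true}, startup, steady)["cpu"])  // prints "1"
}
```

In a real cluster the controller would apply the returned requests to the running pod (the exact mechanism — a direct patch or the resize subresource — depends on the Kubernetes version), which is precisely the mutation Karpenter does not currently account for.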
With the current Karpenter implementation, this pattern can create a loop: the controller lowers a pod's requests after startup, Karpenter sees the node as under-utilized and consolidates it, the pod is rescheduled with its large startup request, and the cycle repeats.
Karpenter needs to be updated to recognize mutable resource requests and prevent such loops. Given the flexibility of InPlacePodVerticalScaling, this may be difficult if Karpenter itself does not understand the resource-request mutation strategy.
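One possible mitigation (a sketch only, not Karpenter's actual design or API) is to budget a pod at the larger of its current and startup requests when simulating consolidation, so a post-startup shrink does not make the node look reclaimable:

```go
package main

import "fmt"

// effectiveRequestMilli returns the request (in milli-CPU, for simplicity)
// that a consolidation simulation could use for a pod with mutable requests:
// the larger of the current request and the known startup request. This keeps
// a shrunk pod from triggering a consolidate -> reschedule -> grow loop.
func effectiveRequestMilli(current, startup int64) int64 {
	if startup > current {
		return startup
	}
	return current
}

func main() {
	// A pod shrunk from 4000m to 1000m after startup is still budgeted
	// at 4000m when deciding whether its node can be consolidated.
	fmt.Println(effectiveRequestMilli(1000, 4000)) // prints "4000"
}
```

This is conservative (it forgoes some packing efficiency), which is why the issue argues Karpenter needs real awareness of the mutation strategy rather than a fixed heuristic.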
How important is this feature to you?
Many users have shown interest in the InPlacePodVerticalScaling feature; https://github.com/aws/containers-roadmap/issues/512 may provide some data points. As a cluster autoscaler, Karpenter's awareness of InPlacePodVerticalScaling is critical so that node usage efficiency can be further improved while keeping applications stable and performant.