vinaykul opened this issue 1 year ago
/sig node
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale is applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten is applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale is applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten is applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/assign
I wanted to raise an issue here, @vinaykul.
While implementing an admission controller, I came across the following code in the LimitRanger plugin:
```go
// Since containers and initContainers cannot currently be added, removed, or updated, it is unnecessary
// to mutate and validate limitrange on pod updates. Trying to mutate containers or initContainers on a pod
// update request will always fail pod validation because those fields are immutable once the object is created.
if a.GetKind().GroupKind() == api.Kind("Pod") && a.GetOperation() == admission.Update {
	return false
}
```
Since the statement in that comment will no longer hold once InPlacePodVerticalScaling is enabled (container resources may then be updated in place), I wonder whether this is tracked anywhere.
I think we need to handle it in LimitRanger, since a vertical resize could otherwise push a pod over the configured resource limits.
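For illustration, here is a minimal sketch of how that early return might be gated on the feature flag. It assumes the existing shape of DefaultLimitRangerActions.SupportsAttributes and uses the InPlacePodVerticalScaling feature gate check; it is one possible approach, not the actual upstream fix.
```go
package limitranger

import (
	"k8s.io/apiserver/pkg/admission"
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	api "k8s.io/kubernetes/pkg/apis/core"
	"k8s.io/kubernetes/pkg/features"
)

// Hypothetical adjustment: keep validating Pod updates against the LimitRange
// when in-place resize is enabled, because container resources are then
// mutable on update.
func (d *DefaultLimitRangerActions) SupportsAttributes(a admission.Attributes) bool {
	if a.GetSubresource() != "" {
		return false
	}
	gk := a.GetKind().GroupKind()
	if gk == api.Kind("Pod") && a.GetOperation() == admission.Update {
		// Previously this always returned false; with InPlacePodVerticalScaling,
		// a resize on update could exceed the LimitRange and must be re-checked.
		return utilfeature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling)
	}
	return gk == api.Kind("Pod") || gk == api.Kind("PersistentVolumeClaim")
}
```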
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale is applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten is applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale is applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten is applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After a further period of inactivity once lifecycle/stale is applied, lifecycle/rotten is applied
- After a further period of inactivity once lifecycle/rotten is applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
/triage accepted
@tallclair: Reopened this issue.
Thanks @pbialon. I opened https://github.com/kubernetes/kubernetes/issues/124855 for the limit ranger case.
I can look into this if a new contributor is needed, @tallclair.
What would you like to be added?
A great optimization to the current 'Infeasible' state would be to create an admission handler that tracks allocatable resources on all the nodes and fails such a request early. See https://github.com/kubernetes/kubernetes/pull/102884#discussion_r817033754
Why is this needed?
The user gets an immediate signal that the request can never be satisfied on the given node.
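As a rough illustration of the feasibility check such a handler could perform, here is a minimal sketch that compares the desired (resized) pod requests against the bound node's allocatable capacity. The package and function names are hypothetical, and a real handler would also subtract the requests of other pods on the node, use an informer cache rather than a live GET, and plug into an admission webhook or plugin.
```go
package resizeadmission

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkResizeFeasible returns an error if the pod's desired resource requests
// could never fit on the node it is bound to, based on node.Status.Allocatable.
func checkResizeFeasible(ctx context.Context, client kubernetes.Interface, pod *corev1.Pod) error {
	if pod.Spec.NodeName == "" {
		return nil // not scheduled yet; nothing to check against
	}
	node, err := client.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("looking up node %q: %w", pod.Spec.NodeName, err)
	}

	// Sum desired CPU and memory requests across containers
	// (init containers are ignored here for brevity).
	want := map[corev1.ResourceName]*resource.Quantity{
		corev1.ResourceCPU:    resource.NewQuantity(0, resource.DecimalSI),
		corev1.ResourceMemory: resource.NewQuantity(0, resource.BinarySI),
	}
	for _, c := range pod.Spec.Containers {
		for name, q := range c.Resources.Requests {
			if total, ok := want[name]; ok {
				total.Add(q)
			}
		}
	}

	// If the requests exceed the node's total allocatable even on an empty node,
	// the resize is infeasible and can be rejected immediately.
	for name, total := range want {
		if alloc, ok := node.Status.Allocatable[name]; ok && total.Cmp(alloc) > 0 {
			return fmt.Errorf("infeasible: requested %s %s exceeds node allocatable %s",
				name, total.String(), alloc.String())
		}
	}
	return nil
}
```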