kubernetes / kubernetes

Production-Grade Container Scheduling and Management
https://kubernetes.io
Apache License 2.0

[FG:InPlacePodVerticalScaling] If pod resize request exceeds node allocatable, fail it in admission handler #114203

Open vinaykul opened 1 year ago

vinaykul commented 1 year ago

What would you like to be added?

A great optimization over the current 'Infeasible' handling would be to create an admission handler that tracks allocatable resources on all the nodes and fails such a request early. See https://github.com/kubernetes/kubernetes/pull/102884#discussion_r817033754

Why is this needed?

The user gets an immediate signal that the request can never be satisfied on the given node.
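
To make the intent concrete, here is a minimal sketch (not the actual implementation) of the check such a handler might perform. The helper name `exceedsNodeAllocatable` is hypothetical, and a real handler would also have to account for the other pods already bound to the node, init containers, and pod overhead:

    package admission

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    // exceedsNodeAllocatable is a hypothetical illustration: it sums the CPU and
    // memory requests of the pod's containers after a proposed resize and reports
    // whether either total exceeds the node's allocatable capacity. A real check
    // would also subtract resources consumed by other pods bound to the node.
    func exceedsNodeAllocatable(resized *v1.Pod, node *v1.Node) bool {
        var cpu, mem resource.Quantity
        for _, c := range resized.Spec.Containers {
            if q, ok := c.Resources.Requests[v1.ResourceCPU]; ok {
                cpu.Add(q)
            }
            if q, ok := c.Resources.Requests[v1.ResourceMemory]; ok {
                mem.Add(q)
            }
        }
        allocCPU := node.Status.Allocatable[v1.ResourceCPU]
        allocMem := node.Status.Allocatable[v1.ResourceMemory]
        return cpu.Cmp(allocCPU) > 0 || mem.Cmp(allocMem) > 0
    }

If either total exceeds allocatable, the handler could reject the update immediately instead of letting the resize sit in the 'Infeasible' state.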

vinaykul commented 1 year ago

/sig node

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

vinaykul commented 1 year ago

/remove-lifecycle rotten

pbialon commented 1 year ago

/assign

pbialon commented 1 year ago

I wanted to raise an issue here, @vinaykul. While implementing the admission controller I encountered the following code in the LimitRanger:

    // Since containers and initContainers cannot currently be added, removed, or updated, it is unnecessary
    // to mutate and validate limitrange on pod updates. Trying to mutate containers or initContainers on a pod
    // update request will always fail pod validation because those fields are immutable once the object is created.
    if a.GetKind().GroupKind() == api.Kind("Pod") && a.GetOperation() == admission.Update {
        return false
    }

Since, after enabling InPlacePodVerticalScaling, the statement in the comment will no longer be true (we may update container resources), I wonder whether we track this issue anywhere. I think we need to handle it in LimitRanger, since we could exceed the configured resource limits when using vertical pod autoscaling.
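
To illustrate the gap, here is a minimal sketch of the kind of check LimitRanger would additionally need to perform on pod update (resize) requests. The helper `violatesContainerMax` is hypothetical; it only compares a single container's requests against the `Max` of a `Container`-type LimitRangeItem and ignores min, default, and ratio constraints:

    package admission

    import (
        v1 "k8s.io/api/core/v1"
    )

    // violatesContainerMax is a hypothetical illustration: once pod updates can
    // change container resources, a resize could push a container's requests past
    // the LimitRange maximum unless LimitRanger also validates update requests.
    // It checks one container's requests against a Container-type LimitRangeItem.
    func violatesContainerMax(c v1.Container, item v1.LimitRangeItem) bool {
        if item.Type != v1.LimitTypeContainer {
            return false
        }
        for name, limit := range item.Max {
            if req, ok := c.Resources.Requests[name]; ok && req.Cmp(limit) > 0 {
                return true
            }
        }
        return false
    }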

k8s-triage-robot commented 9 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 8 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 7 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/kubernetes/issues/114203#issuecomment-2016427299):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

tallclair commented 6 months ago

/reopen
/remove-lifecycle rotten
/triage accepted

k8s-ci-robot commented 6 months ago

@tallclair: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubernetes/issues/114203#issuecomment-2108944976):

> /reopen
> /remove-lifecycle rotten
> /triage accepted

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

tallclair commented 6 months ago

Thanks @pbialon. I opened https://github.com/kubernetes/kubernetes/issues/124855 for the limit ranger case.

dshebib commented 4 months ago

I can look into this if a new contributor is needed, @tallclair.