kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0
8.1k stars 3.98k forks

[VPA] support dynamic named target reference #6385

Open mcanevet opened 11 months ago

mcanevet commented 11 months ago

Which component are you using?:

Vertical Pod Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

Sometimes the controller managing the set of pods for the autoscaler to control is dynamically named. This is the case, for example, for Crossplane providers, which have a controller of kind Provider.pkg.crossplane.io that creates a Deployment with a generated name, for example: upbound-provider-family-aws-6e68a8d74a6f

Describe the solution you'd like.:

Maybe a solution would be to allow wildcards in the TargetRef, or to allow filtering using a label selector.
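
Purely as an illustration of the proposal (wildcards in `targetRef.name` do not exist in the current VPA API; this sketch only shows what the feature could look like):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: provider-family-aws
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    # Hypothetical: match any Deployment whose name fits this pattern,
    # instead of requiring the exact generated name to be known up front.
    name: "upbound-provider-family-aws-*"
  updatePolicy:
    updateMode: "Auto"
```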

Describe any alternative solutions you've considered.:

I tried to use a Deployment as TargetRef with a wildcard in the name, but it does not work. I also tried pointing it at the Provider.pkg.crossplane.io resource directly, but that does not work either.
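
For reference, the targetRef accepted by VPA today requires an exact name; a minimal VPA targeting the generated Deployment from above would look like this (the name is whatever the controller generated, and it can change over the component's lifetime, which is the problem):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: upbound-provider-family-aws
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: upbound-provider-family-aws-6e68a8d74a6f  # generated suffix, not stable
```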

Additional context.:

2fst4u commented 10 months ago

Another example use case is rook-ceph. It creates a lot of mon and osd Deployments with random alphanumeric suffixes trailing the Deployment name. It would be good to catch all OSDs named "rook-ceph-osd-*". I just came looking to see if this was possible for this situation.

voelzmo commented 10 months ago

Hey @mcanevet and @2fst4u, thanks for bringing this up! As I'm not very familiar with the two use-cases you're describing, I hope you can help me understand a bit more about them. My current understanding is that you have one (or even many?) Deployments created by a controller, with names that you don't know beforehand and that could even change over the lifetime of the component? Most likely these generated Deployments are owned by some other k8s resource, probably a custom resource that the controller watches? Is there a 1:1 relationship between the custom resource and the generated Deployment, or could one custom resource result in more than one Deployment?

I'm guessing that if you have more than one of these controller-owned Deployments, each of them would need its own VPA, as they could see very different load. If that's the case, a wildcard that catches more than one of these Deployments would not yield the desired result – recommendations are created per VPA object. If the recommendations are independent, we also need multiple VPA objects.

If a 1:1 mapping between custom resource and the generated Deployment exists and the custom resource is implemented with support for VPA, it should be possible to point your VPA to the custom resource instead.
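
For a VPA to target a custom resource, the resource's CRD has to expose the /scale subresource so that the VPA can discover the pod label selector through it. A minimal CRD fragment enabling this looks roughly like the following (the JSONPath values shown are the conventional ones; a real CRD may map them to different fields):

```yaml
# Excerpt of a CustomResourceDefinition (apiextensions.k8s.io/v1)
spec:
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.selector  # VPA reads this to find the pods
```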

Does this help for your use-cases?

mcanevet commented 10 months ago

@voelzmo I think support for a Custom Resource should work, but as it does not, I guess Provider.pkg.crossplane.io does not implement the /scale subresource.

mcanevet commented 10 months ago

Indeed: https://github.com/crossplane/crossplane/blob/c388baa88eaf2efe59be1638f7be5d775cdf3bff/cluster/crds/pkg.crossplane.io_providers.yaml#L197-L198

mcanevet commented 10 months ago

@voelzmo looks like enabling scale subresource is not possible in the context of Crossplane: https://github.com/crossplane/crossplane/issues/5230#issuecomment-1888706646

k8s-triage-robot commented 7 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 6 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 5 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/autoscaler/issues/6385#issuecomment-2157948681):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

mcanevet commented 5 months ago

/reopen

k8s-ci-robot commented 5 months ago

@mcanevet: Reopened this issue.

In response to [this](https://github.com/kubernetes/autoscaler/issues/6385#issuecomment-2158466578):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes-sigs/prow](https://github.com/kubernetes-sigs/prow/issues/new?title=Prow%20issue:) repository.

mcanevet commented 5 months ago

/remove-lifecycle rotten

adrianmoisey commented 4 months ago

/area vertical-pod-autoscaler

k8s-triage-robot commented 1 month ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 2 weeks ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten