mcanevet opened this issue 11 months ago
Another example use case is rook-ceph. It creates a lot of mon and osd Deployments with numbers and letters trailing the deployment name. It would be good to catch all OSDs named "rook-ceph-osd-*". I just came here looking to see if this was possible for that situation.
Hey @mcanevet and @2fst4u, thanks for bringing this up! As I'm not very familiar with the two use cases you're describing, I hope you can help me understand a bit more about them. My current understanding is that you have one (or even many?) Deployments created by a controller, with names that you don't know beforehand and that could even change over the lifetime of the component? Most likely these generated Deployments are owned by some other k8s resource, probably a custom resource that the controller watches? Is there a 1:1 relationship between the custom resource and the generated Deployment, or could one custom resource result in more than one Deployment?
I'm guessing that if you have more than one of these controller-owned Deployments, each of them would need its own VPA, as they could see very different load. If that's the case, a wildcard that catches more than one of these Deployments would not yield the desired result, because recommendations are created per VPA object. If the recommendations are independent, we also need multiple VPA objects.
If a 1:1 mapping between the custom resource and the generated Deployment exists, and the custom resource is implemented with support for VPA, it should be possible to point your VPA to the custom resource instead.
Does this help for your use-cases?
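For illustration, pointing a VPA at the owning custom resource instead of the generated Deployment would look roughly like the sketch below. The Provider name and apiVersion are assumptions for the Crossplane case, and this only works if the custom resource exposes the /scale subresource:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: provider-family-aws-vpa
spec:
  targetRef:
    # Assumed target: this would only work if Provider.pkg.crossplane.io
    # implemented the /scale subresource (it currently does not, see below).
    apiVersion: pkg.crossplane.io/v1
    kind: Provider
    name: upbound-provider-family-aws
  updatePolicy:
    updateMode: "Auto"
```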
@voelzmo I think targeting a Custom Resource should work, but as it does not, I guess Provider.pkg.crossplane.io does not implement the /scale subresource.
@voelzmo looks like enabling scale subresource is not possible in the context of Crossplane: https://github.com/crossplane/crossplane/issues/5230#issuecomment-1888706646
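For context, VPA resolves a custom-resource targetRef via the target's /scale subresource, from which it reads the pod label selector. Enabling that subresource on a CRD looks roughly like the following sketch (an illustrative Widget CRD, not Crossplane's actual definition); the blocker above is that a Crossplane Provider has no replica or selector fields for these paths to point at:

```yaml
# Illustrative CRD excerpt: the scale subresource tells the API server where
# to find the replica count and the pod label selector, which is what VPA
# needs in order to resolve a custom-resource targetRef.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.org
spec:
  group: example.org
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.selector
```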
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@mcanevet: Reopened this issue.
/remove-lifecycle rotten
/area vertical-pod-autoscaler
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Which component are you using?:
Vertical Pod Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
Sometimes the controller managing the set of pods the autoscaler should control is dynamically named. This is for example the case for Crossplane providers, which have a controller of kind Provider.pkg.crossplane.io that creates a Deployment with a generated name, for example upbound-provider-family-aws-6e68a8d74a6f.

Describe the solution you'd like.:
Maybe a solution would be to allow wildcards in the targetRef, or to allow filtering using a selector (see the illustrative sketch under "Additional context" below).

Describe any alternative solutions you've considered.:
I tried to use a Deployment as targetRef with a wildcard, but it does not work. I also tried with a Provider.pkg.crossplane.io, but it does not work either.

Additional context.:
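Purely to illustrate the request (these fields do not exist in the VPA API today): a wildcard in the targetRef name, or alternatively a label selector, could hypothetically look like this:

```yaml
# HYPOTHETICAL sketch of the requested feature; neither wildcard names nor
# a label-selector-based target exist in the current VPA API.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: provider-family-aws-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: "upbound-provider-family-aws-*"  # wildcard over the generated suffix
  updatePolicy:
    updateMode: "Auto"
```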