Closed: @Bessonov closed this issue 2 years ago.
Found two more use cases:
And a relevant quote from @MaciekPytel about scaling development:
We're happy to accept more provider integrations, but we (core developers) are unable to even support existing ones anymore. Instead each provider has its own owners who maintain it. So a prerequisite to accepting a new provider would be to have someone willing to take responsibility for it.
I think it shows how a generic interface could help.
Found a proposal for a generic (gRPC) API: https://github.com/kubernetes/autoscaler/pull/3140 .
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
What's the status of plugable-provider-grpc.md? It seems to be the most promising.
@hectorj2f
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale /remove-lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
/reopen
@Bessonov: Reopened this issue.
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
You might want to review/test this: https://github.com/kubernetes/autoscaler/pull/4654
Oh, wow, thank you very much for your work and for pointing to the implementation! I think this issue can now be closed.
Why close it? The PR wasn't merged.
As that PR isn't merged yet, would it be best to leave this issue open?
Hey guys, I think this issue should already be closed, since #3140 was merged. But I've no stake in reopening it :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
https://github.com/kubernetes/autoscaler/pull/4654 has been merged, and the cluster autoscaler now has a gRPC-based plugin system; this issue can probably be closed.
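For readers unfamiliar with what a gRPC-based plugin boundary buys here, below is a minimal, self-contained Go sketch of the idea: the core autoscaler talks to a provider through a narrow remote contract instead of compiled-in provider code. All type and method names are illustrative assumptions, not the actual interface that PR #4654 added; in the real integration the contract is defined in a .proto file and served over gRPC.

```go
// Illustrative sketch only: these names are assumptions, not the
// cluster autoscaler's real API. It shows the shape of a provider
// boundary that an out-of-tree plugin could implement.
package main

import (
	"context"
	"fmt"
)

// NodeGroup describes one scalable group of nodes, as a provider
// plugin might report it. Field names are assumptions.
type NodeGroup struct {
	ID      string
	MinSize int
	MaxSize int
	Current int
}

// ExternalProvider is a hypothetical stand-in for the remote service
// a plugin would implement and the autoscaler core would call.
type ExternalProvider interface {
	NodeGroups(ctx context.Context) ([]NodeGroup, error)
	IncreaseSize(ctx context.Context, groupID string, delta int) error
	DeleteNodes(ctx context.Context, groupID string, nodeNames []string) error
}

// fakeProvider shows how an out-of-tree implementation could satisfy
// the contract without touching the autoscaler code base.
type fakeProvider struct {
	groups map[string]*NodeGroup
}

func (p *fakeProvider) NodeGroups(ctx context.Context) ([]NodeGroup, error) {
	out := make([]NodeGroup, 0, len(p.groups))
	for _, g := range p.groups {
		out = append(out, *g)
	}
	return out, nil
}

func (p *fakeProvider) IncreaseSize(ctx context.Context, groupID string, delta int) error {
	g, ok := p.groups[groupID]
	if !ok {
		return fmt.Errorf("unknown node group %q", groupID)
	}
	if g.Current+delta > g.MaxSize {
		return fmt.Errorf("would exceed max size %d", g.MaxSize)
	}
	g.Current += delta
	return nil
}

func (p *fakeProvider) DeleteNodes(ctx context.Context, groupID string, nodeNames []string) error {
	g, ok := p.groups[groupID]
	if !ok {
		return fmt.Errorf("unknown node group %q", groupID)
	}
	if g.Current-len(nodeNames) < g.MinSize {
		return fmt.Errorf("would drop below min size %d", g.MinSize)
	}
	g.Current -= len(nodeNames)
	return nil
}

func main() {
	var p ExternalProvider = &fakeProvider{groups: map[string]*NodeGroup{
		"pool-a": {ID: "pool-a", MinSize: 1, MaxSize: 5, Current: 2},
	}}
	_ = p.IncreaseSize(context.Background(), "pool-a", 1)
	groups, _ := p.NodeGroups(context.Background())
	fmt.Printf("%+v\n", groups)
}
```

The point of this design is that the implementation can live in its own repository, with its own release cadence and licence, which is exactly the decoupling this issue asks for.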
There is high demand for allowing custom cloud autoscaler providers:
Well, I'm not able to reopen any of them. This request goes beyond hard-coding every possible provider into the autoscaler source code base. I'm not sure why every provider must be integrated into the source code, follow the same unpredictable release schedule and review process, and use the same licence (although Apache is fine). It puts a limit on scaling development.
Furthermore, the integrated providers are limited to fairly "standard" capabilities. Some use cases are missing:
A more generic solution could send desired actions to a (single) configurable REST endpoint, possibly a service inside the cluster. That would allow a decentralized and powerful way to build your own autoscaler providers.
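To make the proposal concrete, here is a minimal Go sketch of what such a configurable REST endpoint could look like from the receiving side. The path (/scale), the payload fields, and the "desired size" semantics are all assumptions for illustration; nothing here is an existing autoscaler API.

```go
// A minimal sketch, assuming a hypothetical JSON contract, of the
// in-cluster REST endpoint described above: the autoscaler would POST
// desired actions, and this service would carry them out.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ScaleRequest is an assumed payload: which node group to act on and
// the size the autoscaler wants it to reach.
type ScaleRequest struct {
	NodeGroup   string `json:"nodeGroup"`
	DesiredSize int    `json:"desiredSize"`
}

func handleScale(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		http.Error(w, "POST only", http.StatusMethodNotAllowed)
		return
	}
	var req ScaleRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// A real implementation would talk to its own infrastructure here:
	// a bare-metal controller, a VM pool, a niche cloud API, etc.
	log.Printf("scale %s to %d nodes", req.NodeGroup, req.DesiredSize)
	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/scale", handleScale)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The appeal of this shape is that the receiver can be any in-cluster service, so operators of bare-metal or niche environments could implement scaling logic without touching the autoscaler code base.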
I'm aware of the Cluster API and the Cluster API Provider, but I'm not sure how they contribute to the above use cases.
Maybe I'm just not aware of an existing solution. Is there any workaround for the above use cases? Any pointers are appreciated.