Closed: kaviarasan-ex2 closed this issue 2 years ago
Any update on this, please?
Cluster Autoscaler does reactive scaling, it doesn't have any predictive capability. If the time it takes to boot up instances is too long for your case, you need to have those instances ready before the scale-up is needed.
If you can predict the spike before it happens, you can build a number of different solutions around that prediction.
All of this is largely DIY and relies on you providing the logic that can predict the spike. There is no support for predictive autoscaling in Kubernetes. There may be some projects on github you may be able to leverage, but they're not developed by Kubernetes sig-autoscaling and I have no experience with them.
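One widely used pattern for keeping spare capacity warm is headroom overprovisioning, described in the Cluster Autoscaler FAQ: run low-priority placeholder ("pause") pods that reserve node capacity; when real workloads need the space, the scheduler preempts the placeholders, and Cluster Autoscaler then scales up to reschedule them, so real pods never wait for a node boot. A minimal sketch (the names, replica count, and resource sizes are illustrative):

```yaml
# Negative-priority class so placeholder pods are always preempted first
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning
value: -10
globalDefault: false
description: "Priority class for headroom placeholder pods"
---
# Placeholder pods that hold capacity in reserve. When higher-priority
# pods need room, these are evicted immediately and go Pending, which
# triggers Cluster Autoscaler to add a node in the background.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning
spec:
  replicas: 2                  # amount of headroom to keep (illustrative)
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          requests:            # size each placeholder like a typical workload pod
            cpu: "1"
            memory: 2Gi
```

The trade-off is cost: the reserved capacity is paid for whether or not the spike arrives, so the placeholder sizing should match the largest burst you need to absorb instantly.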
Thanks for your response!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Hi there, seeking expert advice here; the details are below.
Configuration:
Observation:
What we tested:
What we are looking for: any suggestion or recommendation on how we can do proactive scaling and avoid the roughly 3-minute delay before additional nodes are provisioned when a scale-up happens. We would appreciate a response as early as possible. Please let us know if any additional information is required.
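If the spike is predictable by time of day, one DIY approach along the lines suggested earlier in this thread is to pre-scale the workload on a schedule with a CronJob, so Cluster Autoscaler adds the nodes before the traffic arrives rather than during it. A hedged sketch (the deployment name, schedule, replica count, image tag, and the `prescaler` service account are all illustrative; the service account needs RBAC permission to scale deployments):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: prescale-web
spec:
  schedule: "30 8 * * 1-5"     # ~30 min before an expected 09:00 weekday spike
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: prescaler   # hypothetical SA with RBAC to scale deployments
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl:1.27
            # Raise the replica count ahead of the spike; the resulting
            # Pending pods trigger Cluster Autoscaler to provision nodes
            # during the quiet period instead of under load.
            command:
            - kubectl
            - scale
            - deployment/web
            - --replicas=20
```

A second CronJob can scale back down after the peak, or the HPA can be left to shrink the deployment once its own floor applies again.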