mariadb-JeffBachtel opened this issue 3 years ago
Hello. Any updates?
Hi team, it will be really handy to have this feature as it will reduce the time for Pending pods to get scheduled via cluster autoscaler.
I agree this would be a great feature to have, though thinking it through, it may be difficult to implement at this time. Spitballing:
How would CA identify nodes that are in the Warm Group? At the moment they're not tagged with anything identifiable other than the normal tags defined by the Launch Template out of the ASG. So would CA just assume anything in a Stopped
state that's in the ASG is a "Warm Node" that can be spun up again?
You need graceful node shutdown enabled as well, because the way it currently works, the instance shuts down before Kubernetes can fully terminate the pods on a warm node that is no longer needed, so pods get stuck.
Do Warm Nodes ever get terminated by CA? When/why?
I'm sure there's more....
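On the question above about how CA could identify warm pool nodes: warm pool instances are reported by the DescribeWarmPool API with lifecycle states such as Warmed:Stopped, so identification wouldn't have to rely on tags. A minimal boto3 sketch of that lookup (the ASG name is a placeholder, not anything CA does today):

```python
import boto3

# Hypothetical ASG name, for illustration only.
ASG_NAME = "my-node-group-asg"

autoscaling = boto3.client("autoscaling")

def list_warm_pool_instances(asg_name: str):
    """Return (instance_id, lifecycle_state) for instances parked in the ASG's warm pool."""
    resp = autoscaling.describe_warm_pool(AutoScalingGroupName=asg_name)
    # Warm pool instances carry lifecycle states like "Warmed:Stopped" or
    # "Warmed:Running", which distinguishes them from in-service nodes.
    return [(i["InstanceId"], i["LifecycleState"]) for i in resp.get("Instances", [])]

if __name__ == "__main__":
    for instance_id, state in list_warm_pool_instances(ASG_NAME):
        print(instance_id, state)
```

(DescribeWarmPool paginates via NextToken; a real implementation would loop, but that's omitted here.)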
I've been facing the same issue and have a workaround for it, which goes like this:
However, this workaround has some shortcomings: the nodes might take longer than expected to scale down, and warming an instance might take longer as well.
It would be really great if the instance lifecycle could be used to accommodate the warm pool feature in cluster-autoscaler.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This is a very useful feature.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Hi @vijaykumarmcp,
I am not sure if you're facing the same issue described in this issue, but when you say in https://github.com/kubernetes/autoscaler/issues/4005#issuecomment-1012237034:
When the node is called from warmpool to join ASG the Lifecycle changes to Pending and the user-data script is again called at start and then it can let the node join the cluster.
Actually, the warm pool node is never called by the Cluster Autoscaler to join the ASG and this is the problem. So the instance is never started when needed by the Cluster Autoscaler and so the user data script is never triggered.
As I verified, the only way to let the warm pool nodes join the ASG is to modify the desired capacity of the ASG manually. Maybe I'm missing the point in your steps, if so, please let me know.
Thanks!
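To make the "modify the desired capacity manually" step concrete, this is roughly the call involved, expressed as a boto3 sketch (the ASG name is a placeholder). Bumping the desired capacity is what pulls a warm pool instance into service, and it's the step CA never performs on warm pool instances today:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical ASG name, for illustration only.
ASG_NAME = "my-node-group-asg"

def scale_out_by_one(asg_name: str) -> None:
    """Increase the ASG's desired capacity by one, which starts a warm pool instance if one is available."""
    asg = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name]
    )["AutoScalingGroups"][0]
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=asg["DesiredCapacity"] + 1,
        HonorCooldown=False,
    )

scale_out_by_one(ASG_NAME)
```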
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
This doesn't make CA fundamentally aware of warm pools, but you can configure a self-managed node group to work with warm pools per the example here. The only issue is that because CA terminates specific instances during scale-in using the TerminateInstanceInAutoScalingGroup API, the instance reuse policy is ignored, i.e. it does not work.
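For anyone wondering why the reuse policy never kicks in, this is roughly the boto3 equivalent of the scale-in call CA issues (the instance ID is a placeholder). The specific instance is terminated outright and the desired capacity is decremented, so ReuseOnScaleIn never gets a chance to return the instance to the warm pool:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical instance ID, for illustration only.
INSTANCE_ID = "i-0123456789abcdef0"

# boto3 equivalent of the TerminateInstanceInAutoScalingGroup call made during
# CA scale-in: the chosen instance is terminated and the ASG's desired capacity
# is decremented, bypassing the warm pool's ReuseOnScaleIn policy.
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId=INSTANCE_ID,
    ShouldDecrementDesiredCapacity=True,
)
```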
Hi @jebbens - We have been trying to use warm pools with CA, however the nodes join the cluster in a NotReady status, and CA fails as it doesn't know about the instances.
I was looking at the repo you linked; we have our own custom launch template EKS module, so using that module directly would not work for us. However, from looking through the code, is the main takeaway the user data script, i.e. that we would need to update ours so that nodes that are part of the warm pool don't automatically join the cluster? https://github.com/aws-samples/eks-node-group-with-warm-pool/blob/main/user_data/node-config.tftpl#L111
Should that solve our issue?
Thanks
Hello @robbo10. Yes, the user-data is key, plus a few other configs. From the first paragraph of the README: "The key components that enable this are the user data, initial lifecycle policy, warm pool configuration, and the additional IAM permissions." Hope that helps!
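Building on the user data point above, the gist of gating bootstrap can be sketched like this. This is a Python stand-in for what the sample does in shell, not the sample's actual script, and it assumes the instance metadata service exposes the autoscaling/target-lifecycle-state key; the idea is that an instance only runs the EKS bootstrap once the ASG actually wants it in service:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def _imds_get(path: str) -> str:
    """Read a value from the EC2 instance metadata service using an IMDSv2 token."""
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req).read().decode()

def should_bootstrap() -> bool:
    """Only join the cluster when the ASG wants this instance in service."""
    state = _imds_get("/latest/meta-data/autoscaling/target-lifecycle-state")
    # Warm pool instances report states like "Warmed:Stopped"/"Warmed:Running";
    # only "InService" means the node should run the EKS bootstrap.
    return state == "InService"

if __name__ == "__main__":
    print("bootstrap" if should_bootstrap() else "skip bootstrap (warm pool)")
```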
Hello all,
Is there any update about this feature?
Thanks.
Hello all,
I just found this, which could make use of the AWS ASG warm pool with CA, but unfortunately the EC2 instance will still be terminated instead of being returned to the warm pool.
And according to information from here:
Warm pool instance reuse policies do not currently work with CA, which terminates specific nodes/instances and decrements desired capacity of the ASG via the TerminateInstanceInAutoScalingGroup API.
It seems the EC2 termination is invoked by CA internally, but at least the node (EC2) startup time is significantly reduced.
Thanks all.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Which component are you using?: cluster-autoscaler
Is your feature request designed to solve a problem? If so describe the problem this feature should solve.: Scaling up AWS EC2 instances can take some time, given the ASG operations involved.
Describe the solution you'd like.: I'd like cluster-autoscaler to be aware of AWS warm pools, a new feature briefly described in https://aws.amazon.com/about-aws/whats-new/2021/04/amazon-ec2-auto-scaling-introduces-warm-pools-accelerate-scale-out-while-saving-money/
Describe any alternative solutions you've considered.:
Additional context.:
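For context on the feature being requested: a warm pool is attached to an existing ASG with a single API call. A minimal boto3 sketch follows; the ASG name and sizing are placeholders for illustration, not a recommendation:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical ASG name and sizing, for illustration only.
autoscaling.put_warm_pool(
    AutoScalingGroupName="my-node-group-asg",
    MinSize=2,            # keep at least two pre-initialized instances parked
    PoolState="Stopped",  # park instances stopped so compute costs don't accrue
    InstanceReusePolicy={"ReuseOnScaleIn": True},  # return scaled-in instances to the pool
)
```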