Open · rcanavan opened this issue 6 months ago
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/remove-lifecycle stale
You should report the issue at the main kubernetes repo, with the sig/scheduling label.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
We're trying to reduce AWS EC2 costs by automatically scaling the number of nodes down to 0 during times when the entire cluster is not needed, using an autoscaling_schedule (see e.g. https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/autoscaling_schedule).

We have observed that, when a single node is started after no nodes at all have been available, the kube-scheduler may place a large number of Deployment pods on that node, leaving insufficient resources (in our case memory) for the pods of all DaemonSets to be started there. The situation was not rectified automatically after more nodes were added to the cluster; instead, I had to cordon the node and terminate some of the pods on it so that they would be rescheduled onto another node.
It would be preferable if resources for DaemonSet pods were reserved before workloads that are not tied to specific nodes (Deployments, ReplicaSets, ...) are scheduled.
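As a stopgap while no such reservation mechanism exists, a commonly used mitigation is to give DaemonSet pods a PriorityClass with a higher value than ordinary workloads (which default to priority 0), so that the scheduler can preempt Deployment pods on a freshly scaled-up node to make room for the DaemonSet pods. The sketch below only illustrates that approach and is not taken from this issue; the class name `daemonset-high-priority`, the DaemonSet name, image, and resource requests are all hypothetical.

```yaml
# A PriorityClass for DaemonSet pods (hypothetical name and value).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: daemonset-high-priority
value: 1000000            # higher than the default priority (0) of ordinary Deployments
globalDefault: false
description: "Lets DaemonSet pods preempt ordinary workloads when a node is already full."
---
# Reference the class from the DaemonSet's pod template (names and image are placeholders).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      priorityClassName: daemonset-high-priority
      containers:
        - name: agent
          image: example.invalid/agent:latest
          resources:
            requests:
              memory: "200Mi"
              cpu: "100m"
```

Kubernetes also ships the built-in system-node-critical and system-cluster-critical classes, which some clusters already use for node-level agents; whether preempting the displaced Deployment pods is acceptable is a trade-off this sketch does not decide.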