admiraltyio / admiralty

A system of Kubernetes controllers that intelligently schedules workloads across clusters.
https://admiralty.io
Apache License 2.0

Cluster-level scheduling constraints #83

Closed · adrienjt closed this 3 years ago

adrienjt commented 4 years ago

Currently, pod scheduling constraints (node selectors, affinities, etc.) are used to select actual nodes in the target clusters: if a node can be selected in a target cluster based on those constraints, the corresponding virtual node passes the filter test.
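To make the current behavior concrete, here is a minimal sketch, assuming Admiralty's election annotation (the exact annotation key may vary by version). The nodeSelector is matched against real nodes in each target cluster; any virtual node whose cluster contains a matching node passes the filter:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    multicluster.admiralty.io/elect: ""  # opt this pod into multi-cluster scheduling (key assumed)
spec:
  nodeSelector:
    disktype: ssd  # today: matched against real nodes in each target cluster
  containers:
  - name: app
    image: nginx
```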

In some use cases, it makes sense for a user to specify constraints at the cluster level, i.e., to select a virtual node. For example, if two virtual nodes pass the filter test, how do we select one over the other without relying only on the proxy scheduler's default constraints (by default, proxy pods are spread over virtual nodes)?

Another use case is AWS Fargate on EKS, where the third-party scheduler (cf. #82) doesn't consider (and in fact rejects) node selectors and affinities, so we need a way to select, e.g., a region based on labels on the virtual nodes rather than labels on the real nodes in the target clusters.
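Illustrative only: if virtual nodes carried labels describing their target cluster (label propagation to virtual nodes is an assumption here, not a description of current behavior), a region could be selected without touching the real nodes at all:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: admiralty-us-west-2  # hypothetical virtual node name
  labels:
    topology.kubernetes.io/region: us-west-2  # well-known region label, set per target cluster
```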

The user may want the constraints in the pod spec to apply at the cluster level, or to use two sets of constraints: the ones in the spec applied at the node level in the target clusters (as currently), and another set (in an annotation) applied at the cluster level, as sketched below.
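A hedged sketch of the second option: the node-level constraints stay in the pod spec and are matched against real nodes as today, while a separate cluster-level set lives in an annotation. The annotation key and value format below are illustrative, not an implemented API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    multicluster.admiralty.io/elect: ""  # key assumed, as above
    # hypothetical: a cluster-level selector matched against virtual node labels
    multicluster.admiralty.io/node-selector: '{"topology.kubernetes.io/region":"us-west-2"}'
spec:
  nodeSelector:
    disktype: ssd  # node-level: still matched against real nodes in the elected cluster
  containers:
  - name: app
    image: nginx
```

With two sets, the Fargate case above would also work: the cluster-level selector picks the virtual node by region, and the spec can omit node-level constraints entirely so the third-party scheduler never sees them.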