This is by design. Proxy and candidate pod scheduling constraints are defined separately:
By default, the scheduling constraints in the source pod spec are used for candidate pod scheduling and stripped out for proxy pod scheduling. With the `multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling` annotation, as you noted, the opposite is true: the source pod spec constraints are used for proxy pod scheduling, and stripped out for candidate pod scheduling. Virtual nodes are selected, but all real nodes are considered. This mode was introduced to work with AWS Fargate as a candidate scheduler (with `multicluster.admiralty.io/no-reservation` too), because scheduling constraints don't make sense for Fargate pods (and are rejected).
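For illustration, here is a minimal sketch of a pod using this mode. The `multicluster.admiralty.io/elect` opt-in annotation and the `"true"` values are assumptions on my part (the annotations' presence may be all that matters), and the node selector stands in for whatever constraints your pod spec actually carries:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fargate-example
  annotations:
    # Assumed opt-in annotation for multicluster scheduling:
    multicluster.admiralty.io/elect: ""
    # Use the spec constraints below for the proxy pod, not the candidates
    # ("true" as the value is an assumption; presence may be enough):
    multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling: "true"
    multicluster.admiralty.io/no-reservation: "true"
spec:
  # With the annotation above, this selector constrains the proxy pod
  # (i.e., which virtual nodes are eligible) and is stripped from the
  # candidate pods, so all real nodes in the target cluster are considered.
  nodeSelector:
    topology.kubernetes.io/region: us-east-1
  containers:
  - name: app
    image: nginx
```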
You can also use the `multicluster.admiralty.io/proxy-pod-scheduling-constraints` annotation, which accepts a YAML pod spec, to define separate constraints for proxy pod scheduling. This is mainly useful to optimize scheduling (aggregate labels may be enough to filter out virtual nodes without having to send candidates to test real nodes). If you need the same constraints for both proxy and candidate pod scheduling (which is unlikely), you'll need to define them twice: in the source pod spec (for candidate pod scheduling) and in this annotation's value (see the sketch below).

Are you able to define your use case within this framework?
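For reference, a minimal sketch of what that could look like. The `multicluster.admiralty.io/elect` opt-in annotation and the specific selector fields are assumptions chosen for illustration; the annotation value is written as a YAML pod spec fragment, per the explanation above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: split-constraints-example
  annotations:
    multicluster.admiralty.io/elect: ""  # assumed opt-in, as above
    # Separate constraints for proxy pod scheduling only,
    # expressed as a YAML pod spec:
    multicluster.admiralty.io/proxy-pod-scheduling-constraints: |
      nodeSelector:
        topology.kubernetes.io/region: us-east-1
spec:
  # These constraints are used for candidate pod scheduling in the
  # target clusters and do not apply to the proxy pod.
  nodeSelector:
    disktype: ssd
  containers:
  - name: app
    image: nginx
```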
Thank you! I did not understand how to use `multicluster.admiralty.io/proxy-pod-scheduling-constraints`. It works as you have explained.
When using the `multicluster.admiralty.io/use-constraints-from-spec-for-proxy-pod-scheduling` pod-level annotation, the proxy pod correctly gets the cluster-level scheduling constraints from the original deployment. However, the constraints make it no further, because they are stripped out in the `model/delegatepod/model.go` file in the following lines:
Removing these lines allowed the cluster-level scheduling constraints to flow to the delegate pod.