AI-Hypercomputer / xpk

xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool that helps Cloud developers orchestrate training jobs on accelerators such as TPUs and GPUs on GKE.
Apache License 2.0

Consider configuring kueue waitForPodsReady #191

Open avrittrohwer opened 1 month ago

avrittrohwer commented 1 month ago

kueue supports all-or-nothing scheduling: https://kueue.sigs.k8s.io/docs/tasks/manage/setup_wait_for_pods_ready/

Large multi-pod workloads that need every pod to be running to make progress (e.g. single-program-multi-data workloads) can deadlock capacity if the physical availability of resources does not match the configured kueue quotas. The kueue waitForPodsReady feature configures kueue to additionally monitor the pod readiness conditions of a workload's pods. If not all pods become ready within a configured timeout, the workload is evicted and requeued.
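
For reference, these settings live in the kueue Configuration object described in the doc linked above. A minimal sketch (field names from the kueue Configuration API; the values are only illustrative):

```yaml
apiVersion: config.kueue.x-k8s.io/v1beta1
kind: Configuration
waitForPodsReady:
  enable: true          # evict and requeue workloads whose pods don't all become ready in time
  timeout: 5m           # how long to wait for all pods to become ready (kueue's default)
  blockAdmission: true  # admit workloads one at a time while waiting, to avoid admission deadlocks
```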

PBundyra commented 1 month ago

Hi @avrittrohwer! I like the idea. Do you suggest using the default WaitForPodsReady settings, or making it configurable with some xpk flag? I'm leaning towards enabling it by default with the default values.

avrittrohwer commented 1 month ago

I'm not sure a single waitForPodsReady configuration would be good in all scenarios. For example, the default waitForPodsReady.timeout is 5m; if the cluster is using node auto-provisioning, that timeout is likely too short.

The kueue configuration is stored in a configmap (https://kueue.sigs.k8s.io/docs/installation/#install-a-custom-configured-released-version), so users could simply update that configmap in their cluster. Another idea is to introduce a config-directory concept in xpk: we could keep a YAML representation of the kueue configmap that users could edit on disk (and commit to source control), and xpk could take care of ensuring the cluster state matches the state on disk.
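
As an illustration, the on-disk file xpk managed could be the same ConfigMap the kueue installation uses. This is just a sketch: the kueue-system namespace, kueue-manager-config name, and controller_manager_config.yaml key are the defaults from the kueue install manifests, and the 15m timeout is only an example value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kueue-manager-config   # default name from the kueue install manifests
  namespace: kueue-system
data:
  controller_manager_config.yaml: |
    apiVersion: config.kueue.x-k8s.io/v1beta1
    kind: Configuration
    waitForPodsReady:
      enable: true
      timeout: 15m   # illustrative: longer than the 5m default to leave room for node auto-provisioning
```

xpk could then apply this file (and restart the kueue controller manager so the new config is picked up) whenever the in-cluster state drifts from what is on disk.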

PBundyra commented 1 month ago

WDYT @44past4