Note that everything is experimental and may change significantly at any time.
Voda Scheduler is a GPU scheduler for elastic deep learning workloads, built on Kubernetes, the Kubeflow Training Operator, and Horovod.
Voda Scheduler is designed to be easily deployed in any Kubernetes cluster. For more architectural details, see design.
Elastic training enables distributed training jobs to be scaled up and down dynamically at runtime, without interrupting the training process.
With elastic training, the scheduler can let training jobs utilize idle resources when they are available and allocate resources more efficiently when the cluster is heavily loaded, increasing cluster throughput and reducing overall training time.
For more information about elastic training, see Elastic Horovod, Torch Distributed Elastic, or Elastic Training.
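To give a sense of what an elastic training job looks like, here is a minimal Elastic Horovod (PyTorch) sketch. It is not taken from Voda itself; the tiny model and synthetic batches are placeholders, and only the elastic-state pattern matters:

```python
# Minimal Elastic Horovod (PyTorch) sketch: Horovod restarts the decorated
# function with synchronized state whenever workers join or leave, so the
# job keeps training across rescale events instead of failing.
import torch
import torch.nn.functional as F
import horovod.torch as hvd

hvd.init()

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters()
)

@hvd.elastic.run
def train(state):
    # Resumes from the last committed epoch after a rescale event.
    for state.epoch in range(state.epoch, 10):
        x = torch.randn(32, 10)              # placeholder batch
        y = torch.randint(0, 2, (32,))
        loss = F.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state.commit()  # checkpoint; a rescale rolls back to the last commit

state = hvd.elastic.TorchState(model, optimizer, epoch=0)
train(state)
```

In elastic mode, `horovodrun` accepts `--min-np`/`--max-np` bounds and a host discovery script; Voda Scheduler drives the rescale decisions from the cluster side.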
Voda Scheduler provides several critical features for elastic deep learning workloads as follows:
Check out the demo to see how resource allocations are dynamically adjusted (and how worker pods are migrated) to maximize cluster throughput.
A Kubernetes cluster, in the cloud or on-premises, that can schedule GPUs. Voda Scheduler is tested with Kubernetes v1.20.
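As a quick way to confirm the cluster can schedule GPUs before installing, here is a hypothetical smoke test using the official Kubernetes Python client; the pod name, image, and namespace are arbitrary choices for illustration, not Voda requirements:

```python
# Hypothetical GPU smoke test: create a one-shot pod that requests a GPU
# and runs nvidia-smi. Requires `pip install kubernetes`, a working
# kubeconfig, and the NVIDIA device plugin installed in the cluster.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:11.0.3-base-ubuntu20.04",  # any CUDA image works
                command=["nvidia-smi"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

If the pod reaches `Completed` and its logs show the GPU, the cluster satisfies the prerequisite.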
| Algorithm | Elastic | Reference |
|---|---|---|
| FIFO | | |
| Elastic-FIFO (default) | :heavy_check_mark: | |
| SRJF | | |
| Elastic-SRJF | :heavy_check_mark: | |
| Tiresias | | Gu, Juncheng, et al. "Tiresias: A GPU Cluster Manager for Distributed Deep Learning." 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19). 2019. https://www.usenix.org/conference/nsdi19/presentation/gu |
| Elastic-Tiresias | :heavy_check_mark: | Wu, Yidi, et al. "Elastic Deep Learning in Multi-Tenant GPU Clusters." IEEE Transactions on Parallel and Distributed Systems (2021). https://ieeexplore.ieee.org/abstract/document/9373916 |
| FfDL Optimizer | :heavy_check_mark: | Saxena, Vaibhav, et al. "Effective Elastic Scaling of Deep Learning Workloads." 2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS). IEEE, 2020. https://ieeexplore.ieee.org/abstract/document/9285954 |
| AFS-L | :heavy_check_mark: | Hwang, Changho, et al. "Elastic Resource Sharing for Distributed Deep Learning." 18th USENIX Symposium on Networked Systems Design and Implementation (NSDI 21). 2021. https://www.usenix.org/system/files/nsdi21-hwang.pdf |
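To illustrate the difference elasticity makes, the toy sketch below implements an Elastic-SRJF-style allocation pass. It is not Voda's implementation; the `Job` fields and the two-pass policy (admit jobs shortest-remaining-first at their minimum size, then grow elastic jobs with leftover GPUs) are simplifying assumptions that capture only the core idea:

```python
# Toy sketch (not Voda's actual code) of an Elastic-SRJF allocation pass.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    remaining_time: float  # estimated seconds left (assumed known)
    min_gpus: int          # minimum workers needed to make progress
    max_gpus: int          # elastic upper bound (min == max for rigid jobs)

def elastic_srjf(jobs: list[Job], total_gpus: int) -> dict[str, int]:
    alloc = {j.name: 0 for j in jobs}
    free = total_gpus
    # Pass 1: admit jobs by shortest remaining time at their minimum size.
    for job in sorted(jobs, key=lambda j: j.remaining_time):
        if free >= job.min_gpus:
            alloc[job.name] = job.min_gpus
            free -= job.min_gpus
    # Pass 2: grow admitted elastic jobs with leftover GPUs,
    # again favoring the shortest remaining time.
    for job in sorted(jobs, key=lambda j: j.remaining_time):
        if alloc[job.name] > 0:
            extra = min(job.max_gpus - alloc[job.name], free)
            alloc[job.name] += extra
            free -= extra
    return alloc

print(elastic_srjf(
    [Job("a", 100, 1, 4), Job("b", 50, 2, 2), Job("c", 200, 1, 8)],
    total_gpus=8,
))  # -> {'a': 4, 'b': 2, 'c': 2}: idle GPUs flow to jobs that can use them
```

A rigid SRJF scheduler would leave the surplus GPUs idle once every job had its fixed allocation; the elastic variant hands them to jobs that can scale up, which is what lets the scheduler raise cluster throughput.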
T. -T. Hsieh and C. -R. Lee, "Voda: A GPU Scheduling Platform for Elastic Deep Learning in Kubernetes Clusters," 2023 IEEE International Conference on Cloud Engineering (IC2E), Boston, MA, USA, 2023, pp. 131-140, doi: 10.1109/IC2E59103.2023.00023. https://ieeexplore.ieee.org/document/10305838
    @INPROCEEDINGS{10305838,
      author={Hsieh, Tsung-Tso and Lee, Che-Rung},
      booktitle={2023 IEEE International Conference on Cloud Engineering (IC2E)},
      title={Voda: A GPU Scheduling Platform for Elastic Deep Learning in Kubernetes Clusters},
      year={2023},
      volume={},
      number={},
      pages={131-140},
      doi={10.1109/IC2E59103.2023.00023}
    }