kubeflow / training-operator

Distributed ML Training and Fine-Tuning on Kubernetes
https://www.kubeflow.org/docs/components/training
Apache License 2.0

KEP-2170: Add AMD ROCm Torch Distributed Training Runtime #2335

Open · astefanutti opened 3 days ago

astefanutti commented 3 days ago

What would you like to be added?

Support a ROCm PyTorch distributed training runtime.
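
For context, ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` device API (HIP is mapped onto it), so the existing torch distributed code path applies; only the build differs. A minimal, illustrative sketch (not part of this proposal) of how a runtime entrypoint could report which accelerator stack the image was built for:

```python
import torch

def detect_accelerator() -> str:
    """Report which GPU stack this PyTorch build targets.

    ROCm wheels expose devices through the torch.cuda API, so
    torch.cuda.is_available() is True on AMD GPUs as well;
    torch.version.hip distinguishes the two builds.
    """
    if torch.version.hip is not None:
        return "rocm"
    if torch.version.cuda is not None:
        return "cuda"
    return "cpu"

if __name__ == "__main__":
    print(f"build: {detect_accelerator()}, devices: {torch.cuda.device_count()}")
```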

Why is this needed?

PyTorch has advertised support for AMD ROCm on AMD Instinct and Radeon GPUs since version 2.0.

The latest generation of AMD Instinct accelerators, such as the MI300X, makes it possible to run state-of-the-art large-scale training jobs, as demonstrated in https://developers.redhat.com/articles/2024/10/03/amd-gpus-model-training-openshift-ai.

It would be great to add a ROCm Torch distributed training runtime alongside the NVIDIA one introduced in #2328.

More generally, this would help define how support for multiple accelerators should be managed across the training runtimes (see the sketch below).
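
One reason a largely shared runtime definition seems plausible is that the training code itself is accelerator-agnostic: on ROCm builds the `"nccl"` backend resolves to RCCL, so the same torchrun-launched DDP script runs on both stacks, and the runtime differences mostly reduce to the container image and the GPU resource name (`amd.com/gpu` vs `nvidia.com/gpu`). A minimal sketch (the script name and model are illustrative only, not a proposed runtime definition):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main() -> None:
    # torchrun injects RANK, LOCAL_RANK and WORLD_SIZE for every worker.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # also selects the HIP device on ROCm builds

    # "nccl" uses NCCL on NVIDIA GPUs and RCCL on AMD GPUs.
    dist.init_process_group(backend="nccl")

    model = DDP(torch.nn.Linear(16, 1).cuda(), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):
        x = torch.randn(32, 16, device="cuda")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched the same way on either stack, e.g. `torchrun --nproc_per_node=8 train.py` (file name hypothetical).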

Love this feature?

Give it a 👍. We prioritize the features with the most 👍.

andreyvelich commented 3 days ago

Thanks for creating this @astefanutti!
/remove-label lifecycle/needs-triage
/area runtime