astefanutti opened this issue 3 days ago
Support ROCm PyTorch distributed training runtime

What would you like to be added?
Support for a ROCm PyTorch distributed training runtime.

Why is this needed?
PyTorch has advertised support for AMD ROCm, covering the AMD Instinct and Radeon GPU families, since version 2.0. The latest-generation AMD Instinct accelerators, such as the MI300X, make it possible to run state-of-the-art, large-scale training jobs, as demonstrated in https://developers.redhat.com/articles/2024/10/03/amd-gpus-model-training-openshift-ai.

It would be great to add a ROCm PyTorch distributed training runtime alongside the NVIDIA one introduced in #2328. More generally, this would be a good opportunity to define how support for multiple accelerator types is managed across the training runtimes; a minimal sketch of what such a workload looks like is shown below.
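For reference, here is a minimal sketch of a distributed training script as it would run under such a runtime, assuming a ROCm build of PyTorch launched with torchrun (the toy model and tensor shapes are hypothetical). ROCm builds expose the torch.cuda API (backed by HIP) and map the "nccl" backend to RCCL, so the script itself is identical to the NVIDIA case:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # On ROCm builds of PyTorch, the "nccl" backend name is served by RCCL
    # and torch.cuda is backed by HIP, so no ROCm-specific code is needed.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Hypothetical toy model, wrapped in DDP as usual.
    model = torch.nn.Linear(128, 10).to("cuda")
    model = DDP(model, device_ids=[local_rank])

    if torch.version.hip is not None:
        print(f"rank {dist.get_rank()}: running on ROCm {torch.version.hip}")

    # One illustrative optimization step with random data.
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched with e.g. `torchrun --nproc_per_node=<gpus> train.py`. The runtime-level differences would mostly be on the Kubernetes side (a ROCm-based training image and AMD GPU resource requests, typically amd.com/gpu instead of nvidia.com/gpu), not in the training script.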
Love this feature?
Give it a 👍. We prioritize the features with the most 👍.
Thanks for creating this @astefanutti!

/remove-label lifecycle/needs-triage
/area runtime