kubeflow / mpi-operator

Kubernetes Operator for MPI-based applications (distributed training, HPC, etc.)
https://www.kubeflow.org/docs/components/training/mpi/
Apache License 2.0

add deepspeed example #610

Open kuizhiqing opened 6 months ago

google-oss-prow[bot] commented 6 months ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign terrytangyuan for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/kubeflow/mpi-operator/blob/master/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.

vsoch commented 2 months ago

@kuizhiqing do you have a suggested place to run / test a setup like this? I've been trying to get just 3 nodes each with a single GPU on Google Cloud and I never get the allocation. I was able to get a single node with one GPU last week but it felt like luck. Will it work without GPU?

vsoch commented 2 months ago

It looks like it defaults to CPU, but it's not clear to me how communication is setup. Is it just using a shared volume at /workspace? if that's the case, what's the point of an operator that supports MPI?

kuizhiqing commented 2 months ago

Hi @vsoch, actually, I've tested it in my organization's production environment. I have no idea where to run it on an open platform.

The workspace path in the example contains materials from https://github.com/microsoft/DeepSpeedExamples/tree/master/training/HelloDeepSpeed; it does not involve the communication process. The communication is set up by pdsh with the hostfile provided by the mpi-operator.

For the CPU version, I'm afraid I cannot provide more information since I have not worked with it.
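For readers following along, here is a rough sketch of the pdsh-plus-hostfile mechanism described above. The hostnames, slot counts, and file path are illustrative assumptions, not taken from this PR; the actual hostfile is whatever the mpi-operator mounts into the launcher pod.

```shell
# Write a hostfile in the "hostname slots=N" format the mpi-operator
# generates for its workers (names and path here are hypothetical):
cat > /tmp/hostfile <<'EOF'
deepspeed-mpijob-worker-0 slots=1
deepspeed-mpijob-worker-1 slots=1
EOF

# DeepSpeed's pdsh launcher would consume such a file and start one rank
# per slot on each host over SSH, along the lines of:
#   deepspeed --hostfile /tmp/hostfile --launcher pdsh train.py

# The first column is the set of hosts pdsh fans out to:
awk '{print $1}' /tmp/hostfile
```

So the MPI-style plumbing lives in the launcher and hostfile, not in the training script itself.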

vsoch commented 2 months ago

> https://github.com/microsoft/DeepSpeedExamples/tree/master/training/HelloDeepSpeed, it do not involve the communication process. The communication setup by pdsh with the hostfile provided by the mpi-operator.

There would need to be some variant of an MPI run in there, and the communication (with the MPI operator) would happen via an SSH bootstrap and then targeting that hostfile (which, if I remember correctly, is exposed via an environment variable). I'm trying to understand where that logic is here.
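One hedged way to check the environment-variable guess above from inside a launcher pod. The variable name is an assumption on my part (OMPI_MCA_orte_default_hostfile is the OpenMPI convention); verify against your own cluster rather than trusting this sketch.

```shell
# Assumption: the operator exposes the hostfile path via an environment
# variable; OMPI_MCA_orte_default_hostfile is OpenMPI's convention.
# Simulated here for illustration only:
export OMPI_MCA_orte_default_hostfile=/etc/mpi/hostfile

# Inside a real launcher pod one could search for it:
env | grep -i hostfile
```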