Open andreyvelich opened 7 months ago
Interesting. Does MLX support multi-node training?
Not yet. We are working on it. Probably makes sense to follow up on this once we have some basic support there.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Once we implement the Kubeflow Training V2 APIs, we can make MLX work with MPI using the `mpirun` command 🎉
It looks like distributed communication with MLX uses MPI: https://ml-explore.github.io/mlx/build/html/usage/distributed.html#getting-started
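Per the MLX distributed docs linked above, MLX programs communicate over MPI and are launched with `mpirun`. A minimal sketch, assuming Open MPI is installed on both hosts — `hosts.txt` and `train.py` are hypothetical placeholders, not files from this thread:

```shell
# Launch a hypothetical MLX training script as 2 MPI ranks,
# one per host listed in the (placeholder) hostfile.
mpirun -np 2 --hostfile hosts.txt python train.py
```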
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
/remove-lifecycle stale
MLX is a new ML framework specifically designed to run on Apple silicon: https://github.com/ml-explore/mlx
It has some differences compared to PyTorch with the `mps` backend: https://github.com/ml-explore/mlx/issues/12#issuecomment-1843956313

It would be nice to integrate MLX into the Kubeflow ecosystem for distributed capabilities, and provide a way to run MLX models on Kubernetes.
For example, we can leverage the Kubeflow Training Operator for MLX model training and fine-tuning, and Kubeflow Katib for hyperparameter optimization. Since Kind clusters support the ARM architecture, we should explore whether we can use M-series GPUs for MLX model training with Kind in the future.
In addition to that, I saw examples of how folks run Kubernetes on multiple VMs with macOS machines and `kubeadm`. That might be useful when a single machine can't handle a very large ML model.

cc @kubeflow/wg-training-leads @awni