ray-project / ray

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
https://ray.io
Apache License 2.0

How to make MIG GIs visible in a Ray session, for allocating a different worker to a different MIG GI #32778

Open · ChristosPeridis opened 1 year ago

ChristosPeridis commented 1 year ago

Dear members of the Ray team,

I am working with DRL algorithms using RLlib. I am configuring and testing multiple experiments through the Tune API (tune.run()) as well as the different DRL algorithms that the RLlib API implements. I am running my code on a server equipped with two NVIDIA A100 GPUs. On this server I have configured both A100s with the "MIG 1g.5gb" profile, which splits each A100 into 7 GIs (GPU Instances). Each GI has a unique UUID. I want to run the DDPPO algorithm with each worker using one of the 14 available MIG GIs. How can I do this?
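Concretely, the experiment I have in mind looks roughly like this (a sketch only; the environment name and stop criterion are placeholders, not my actual task):

```python
from ray import tune

# Sketch of the intended DDPPO run with the legacy tune.run() API.
# "CartPole-v1" and the stop criterion are placeholders.
tune.run(
    "DDPPO",
    config={
        "env": "CartPole-v1",
        "framework": "torch",
        "num_workers": 14,         # one rollout worker per MIG GI
        "num_gpus_per_worker": 1,  # each worker should claim one GI
    },
    stop={"training_iteration": 10},
)
```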

I have tried updating the os.environ dictionary before initializing the Ray session, adding a "CUDA_VISIBLE_DEVICES" key with a comma-separated list of the UUIDs of all the MIG GIs I want to use. However, it did not work. I then tried passing the IDs as numbers instead, "0, 1, 2, ...", but that did not work either.
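For reference, this is roughly what I tried (the UUIDs below are placeholders for the real ones reported by `nvidia-smi -L`):

```python
import os
import ray

# Placeholder MIG GI UUIDs; the real values come from `nvidia-smi -L`.
mig_uuids = [
    "MIG-11111111-2222-3333-4444-555555555555",
    "MIG-66666666-7777-8888-9999-000000000000",
    # ... 14 entries in total, one per GI
]

# Expose the GIs to CUDA before the Ray session starts, so the raylet
# and its workers inherit the setting.
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(mig_uuids)

ray.init()
```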

Could you please provide me with some advice on how I should set up my system in order to be able to leverage the different GIs?

I am always at your disposal for any further queries regarding my use case and set up.

Thank you very much for your valuable help!

Kind regards,

Christos Peridis

stale[bot] commented 1 year ago

Hi, I'm a bot from the Ray team :)

To help human contributors focus on more relevant issues, I will automatically add the stale label to issues that have had no activity for more than 4 months.

If there is no further activity in the next 14 days, the issue will be closed!

You can always ask for help on our discussion forum or Ray's public slack channel.

stale[bot] commented 11 months ago

Hi again! The issue will be closed because there has been no further activity in the 14 days since the last message.

Please feel free to reopen or open a new issue if you'd still like it to be addressed.

Again, you can always ask for help on our discussion forum or Ray's public slack channel.

Thanks again for opening the issue!

jjyao commented 10 months ago

@ChristosPeridis sorry for missing this one.

Have you tried `CUDA_VISIBLE_DEVICES=uuid1,uuid2,...,uuid14 ray start --num-gpus=14`? This will start a Ray node with 14 GPUs (each one being a 1g.5gb MIG instance).
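After that, each task or actor that requests `num_gpus=1` should be pinned to its own MIG instance. Here is an untested sketch to verify the assignment (the UUID list in the command above is elided; fill in your real ones):

```python
import os
import ray

# Connect to the node started with:
#   CUDA_VISIBLE_DEVICES=uuid1,...,uuid14 ray start --num-gpus=14
ray.init(address="auto")

@ray.remote(num_gpus=1)
def which_device():
    # Ray rewrites CUDA_VISIBLE_DEVICES for each worker to the devices
    # it was assigned, so each task should see exactly one MIG UUID.
    return os.environ["CUDA_VISIBLE_DEVICES"]

# 14 concurrent tasks should report 14 distinct MIG UUIDs.
print(ray.get([which_device.remote() for _ in range(14)]))
```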

joe-schwartz-certara commented 4 months ago

I have a similar issue. I'm trying to run vLLM with tensor_parallelism=2 on two MIG partitions. I'm exposing them via `CUDA_VISIBLE_DEVICES=uuid,etc` as @jjyao suggests, but I get an error.
My UUIDs for the MIG devices begin with MIG- instead of GPU-, so maybe that is a clue as to why it isn't working. I have been running this model and other models with tensor_parallelism=2 fine, and models with tensor_parallelism=1 on MIG devices fine as well. The combination of both pieces seems to be an issue for Ray.
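For reproduction, my setup is roughly the following (the model name and the UUIDs are placeholders):

```python
import os

# Two placeholder MIG GI UUIDs; note they start with "MIG-", not "GPU-".
# Must be set before anything initializes CUDA, hence before importing vLLM.
os.environ["CUDA_VISIBLE_DEVICES"] = (
    "MIG-11111111-2222-3333-4444-555555555555,"
    "MIG-66666666-7777-8888-9999-000000000000"
)

from vllm import LLM

# vLLM spells the option tensor_parallel_size; with a value of 2 it
# distributes the model across two workers (Ray-backed in my setup),
# which is where the failure shows up for me.
llm = LLM(model="some-org/some-7b-model", tensor_parallel_size=2)
```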