MahdiNazemi opened this issue 1 year ago
@scv119 @matthewdeng @cadedaniel
Any ideas here?
A repro script will help say definitively what the issue is. @MahdiNazemi, could you say how you launched the job? E.g., was it from the head node directly, submitted via Ray Jobs, or through the Ray Client?
Potentially a duplicate of https://github.com/ray-project/ray/issues/32210. It would be great to have a repro script; also, can you check whether the Ray worker process exited when the GPU memory leaks?
@cadedaniel, I'm using Ray within a large repo, so I'll have to trim the code down quite a bit to prepare a standalone script. But to answer your question, I run Ray from the head node directly.
Here is some information that may be helpful:
num_samples = 350
metric = "accuracy"
mode = "max"

search_alg = HyperOptSearch(
    space=space, metric=metric, mode=mode, n_initial_points=20
)

scheduler = ASHAv2(
    time_attr="training_iteration",
    metric=metric,
    mode=mode,
    max_t=120,
    grace_period=1,
    reduction_factor=35,
)

num_gpus = len([int(s) for s in args.gpus.split(",")])

if args.parallel == "DDP":
    trainer = TorchTrainer(
        train_loop_per_worker=partial(run_worker_helper, args),
        torch_config=TorchConfig(backend="nccl"),
        scaling_config=ScalingConfig(
            trainer_resources={"CPU": 1},
            num_workers=num_gpus,
            use_gpu=True,
            resources_per_worker={"CPU": args.workers},
        ),
    )

tuner = tune.Tuner(
    trainable=trainer,
    param_space=param_space,
    tune_config=tune_config,
    run_config=run_config,
)
result = tuner.fit()
Please let me know if you need additional information.
can you check whether the Ray worker process exited when the GPU memory leaks?
@scv119, could you please let me know how to do that? Should I try to find the worker process related to that trial and find its status in the dashboard?
also, can you check whether the Ray worker process exited when the GPU memory leaks?
One way to check this is to run nvidia-smi on the host with the 8 GPUs. It will print out a list of processes using the GPUs; it will be interesting to see which Ray process is still present.
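For reference, here is a rough sketch (not from this thread) of how to cross-check that programmatically: it lists the PIDs holding memory on each GPU via NVML and checks whether each PID still corresponds to a live process. It assumes the pynvml and psutil packages are installed.

import psutil
import pynvml

pynvml.nvmlInit()
for gpu_index in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)
    # Processes that currently hold a CUDA context on this GPU.
    for proc in pynvml.nvmlDeviceGetComputeRunningProcesses(handle):
        used_mib = (proc.usedGpuMemory or 0) / 1024**2
        alive = psutil.pid_exists(proc.pid)
        name = psutil.Process(proc.pid).name() if alive else "<exited>"
        print(f"GPU {gpu_index}: pid={proc.pid} ({name}) alive={alive} {used_mib:.0f} MiB")
pynvml.nvmlShutdown()

If a RayTrainWorker PID is still listed after the trial was stopped, the worker never exited; if memory stays allocated with no process listed at all (as on GPU 3 in the second dump below), the allocation was never released back to the driver.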
There are no running worker processes when the issue occurs. For the problematic trial, nvidia-smi looks like this a few seconds before and after the out-of-memory error:
Tue Feb 21 19:30:26 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05 Driver Version: 525.85.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A6000 On | 00000000:01:00.0 Off | Off |
| 30% 45C P8 27W / 300W | 797MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX A6000 On | 00000000:25:00.0 Off | Off |
| 30% 38C P8 22W / 300W | 773MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA RTX A6000 On | 00000000:41:00.0 Off | Off |
| 30% 43C P8 23W / 300W | 821MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA RTX A6000 On | 00000000:61:00.0 Off | Off |
| 30% 48C P2 76W / 300W | 38253MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA RTX A6000 On | 00000000:81:00.0 Off | Off |
| 30% 43C P8 22W / 300W | 27MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA RTX A6000 On | 00000000:A1:00.0 Off | Off |
| 30% 39C P8 22W / 300W | 773MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 NVIDIA RTX A6000 On | 00000000:C1:00.0 Off | Off |
| 30% 46C P8 22W / 300W | 773MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 NVIDIA RTX A6000 On | 00000000:E1:00.0 Off | Off |
| 30% 39C P8 26W / 300W | 773MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 191304 C ...._RayTrainWorker__execute 770MiB |
| 1 N/A N/A 191300 C ...._RayTrainWorker__execute 770MiB |
| 2 N/A N/A 191305 C ...._RayTrainWorker__execute 770MiB |
| 3 N/A N/A 191306 C ...._RayTrainWorker__execute 770MiB |
| 5 N/A N/A 191299 C ...._RayTrainWorker__execute 770MiB |
| 6 N/A N/A 191303 C ...._RayTrainWorker__execute 770MiB |
| 7 N/A N/A 191301 C ...._RayTrainWorker__execute 770MiB |
+-----------------------------------------------------------------------------+
Tue Feb 21 19:30:31 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05 Driver Version: 525.85.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA RTX A6000 On | 00000000:01:00.0 Off | Off |
| 30% 45C P8 34W / 300W | 27MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX A6000 On | 00000000:25:00.0 Off | Off |
| 30% 38C P8 30W / 300W | 3MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA RTX A6000 On | 00000000:41:00.0 Off | Off |
| 30% 43C P8 30W / 300W | 51MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA RTX A6000 On | 00000000:61:00.0 Off | Off |
| 30% 47C P2 77W / 300W | 37483MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 4 NVIDIA RTX A6000 On | 00000000:81:00.0 Off | Off |
| 30% 43C P8 29W / 300W | 27MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 5 NVIDIA RTX A6000 On | 00000000:A1:00.0 Off | Off |
| 30% 39C P8 28W / 300W | 3MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 6 NVIDIA RTX A6000 On | 00000000:C1:00.0 Off | Off |
| 30% 46C P8 32W / 300W | 3MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 7 NVIDIA RTX A6000 On | 00000000:E1:00.0 Off | Off |
| 30% 39C P8 33W / 300W | 3MiB / 49140MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
The memory usage for all processes is around 38G during training. As you can see in the latter call to nvidia-smi, the GPU memory is not freed on GPU 3, but is almost completely freed on all the others.
@MahdiNazemi can you take a look at https://github.com/ray-project/ray/issues/31451#issuecomment-1371590235 and see if this is applicable to your script? Specifically, do you use a DataLoader with num_workers>0?
@matthewdeng, yes, I usually set num_workers to four or eight. I can rerun the experiment with num_workers=0 and num_workers=1 and report back.
Ah okay, I think that's likely it. Please try with num_workers=0, as I believe num_workers=1 will still end up launching one subprocess, which could run into the same issue.
But num_workers=0 is known to slow down training a lot. Is there an alternative that keeps the number of workers high while avoiding this issue?
The experiment is running, but because each epoch is taking a lot longer, it will take some time before I can report back with results.
The team is looking into properly terminating subprocesses, but more investigation is needed to understand how to do so.
Though based on your original observations and the discussion in the other thread, I am wondering if there is a particular code path in the trial-pausing flow that is (sometimes) causing non-graceful termination. @Yard1, do you know? Something like what's controlled by TUNE_FORCE_TRIAL_CLEANUP_S.
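For readers who haven't seen it, TUNE_FORCE_TRIAL_CLEANUP_S is an environment variable Ray Tune reads to decide how many seconds to wait before force-killing a trial's processes during cleanup. A minimal sketch of overriding it, with an arbitrary example value and set before Tune starts in the driver process:

import os

# Force-kill lingering trial processes 10 s after a graceful stop is requested
# (10 is an arbitrary example value, not a recommendation).
os.environ["TUNE_FORCE_TRIAL_CLEANUP_S"] = "10"

import ray
from ray import tune

ray.init()
# ... build the trainer/tuner and call tuner.fit() as in the snippet above ...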
The team is looking into properly terminating subprocesses, but more investigation is needed to understand how to do so.
Great!
I terminated the experiment with num_workers=0 because it was taking forever to train ResNet18 for one epoch.
I have started a new experiment with persistent_workers=True to see if it makes the issue less likely to happen. In theory, this shouldn't have an impact because I'm running all trials for one epoch at first.
Update 1: The experiment is running fine so far and has evaluated at least one epoch for 43 trials.
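For anyone following along, the num_workers / persistent_workers settings being discussed live on the PyTorch DataLoader inside the training loop. A minimal sketch, with a placeholder dataset standing in for the repo's actual data pipeline:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset purely for illustration.
train_dataset = TensorDataset(
    torch.randn(1024, 3, 32, 32), torch.randint(0, 10, (1024,))
)

train_loader = DataLoader(
    train_dataset,
    batch_size=128,
    shuffle=True,
    num_workers=4,            # >0 launches loader subprocesses in every trial
    persistent_workers=True,  # keep those subprocesses alive across epochs
    pin_memory=True,
)

Note that persistent_workers=True only takes effect when num_workers > 0, and since it only controls worker reuse across epochs, it shouldn't change behavior for the single-epoch trials mentioned above.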
Is it possible to share your run_worker_helper training script, @MahdiNazemi?
@justinvyu, sure!
def run_worker_helper(args, config):
    if not isinstance(config, dict):
        raise ValueError(
            f"Input 'config' is not a dict, received {type(config)}"
        )

    args.tune_config = config
    hyperopt_to_ray(config)
    checkpoint_to_args(config, args)

    rank = session.get_local_rank()
    world_size = session.get_local_world_size()
    run_worker(rank, world_size, args)
where run_worker() is a function I normally use for DDP in PyTorch. run_worker creates an object of a class that deals with the training loop, validation, testing, logging, checkpointing, etc.
My code uses a fork of this repository; you can find the class I refer to here.
Here is the gist of run_worker:
def run_worker(rank, world_size, args):
    process_group_params = dict(rank=rank, world_size=world_size)
    app = ClassifierCompressorSampleApp(
        args,
        script_dir=os.path.dirname(__file__),
        process_group_params=process_group_params,
    )
    app.run_training_loop()
    if args.tune == "":
        app.test()
    dist.destroy_process_group()
I'm experiencing the same issue.
I am also experiencing the same issue. One of the GPUs is hanging at the end of training.
@olivierr42 @vsokolovskii do you have a more recent repro script that we can run here?
What happened + What you expected to happen
I have set up an experiment where I use a TorchTrainer (to enable DDP with eight GPUs) with the ASHAv2 scheduler. Each trial is allocated all eight GPUs available on the node. The grace_period is 1, so each trial runs for just one epoch before it is preempted by another PENDING trial.
After a few trials have run until the end of the first milestone, the trainer fails to clear the memory of only one of the GPUs, which causes a CUDA out-of-memory error for the next trial. This error shows up at different times when I rerun the experiment; e.g., in one run the memory was cleared correctly for the first five trials but not the sixth, and in another run the issue occurred on the tenth trial.
To mitigate the issue, I added a wait_for_gpu() call at the beginning of my worker function. However, the GPU whose memory is not freed prints the following lines before the program is terminated:
The other seven GPUs don't suffer from the said issue.
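For readers unfamiliar with it, this mitigation corresponds roughly to calling ray.tune.utils.wait_for_gpu at the top of the worker function; the arguments below are illustrative, not taken from the original script:

from ray.tune.utils import wait_for_gpu

def train_loop_per_worker(config):
    # Block until the assigned GPU's memory utilization drops below the target
    # fraction, retrying a limited number of times before raising an error.
    wait_for_gpu(target_util=0.02)
    # ... rest of the training loop ...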
I reran the experiment a few times, both with and without the wait_for_gpu() call, and experienced the same behavior every time.
Versions / Dependencies
Ray 2.2.0, Python 3.10.8, PyTorch 1.13.1, Ubuntu 22.04
Reproduction script
Will provide the script ASAP.
Issue Severity
High: It blocks me from completing my task.