lucidrains / magvit2-pytorch

Implementation of MagViT2 Tokenizer in Pytorch
MIT License

Running multi-gpu hangs after first step #18

Open jpfeil opened 1 year ago

jpfeil commented 1 year ago

I'm using accelerate multi-gpu support to run on a cluster of A100 gpus.

In which compute environment are you running?
This machine
Which type of machine are you using?
multi-GPU
How many different machines will you use (use more than 1 for multi-node training)? [1]:
Should distributed operations be checked while running for errors? This can avoid timeout issues but will be slower. [yes/NO]:
Do you wish to optimize your script with torch dynamo? [yes/NO]:
Do you want to use DeepSpeed? [yes/NO]:
Do you want to use FullyShardedDataParallel? [yes/NO]:
Do you want to use Megatron-LM? [yes/NO]:
How many GPU(s) should be used for distributed training? [1]: 4
What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]: 0,1,2,3
Do you wish to use FP16 or BF16 (mixed precision)?
fp16

I can train on a single GPU, but multi-gpu hangs for me. Is there a recommended configuration for running multi-GPU training?
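In case it helps narrow things down, my plan is to check whether a bare-bones accelerate script runs past a few steps with the same config -- if that also hangs, the problem is in the cluster/NCCL setup rather than in magvit2-pytorch. Just a sketch, not from this repo (hypothetical check_ddp.py, run with accelerate launch check_ddp.py):

import torch
from accelerate import Accelerator

# minimal multi-GPU sanity check: a tiny model, a few steps of forward/backward
accelerator = Accelerator()
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.Adam(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)

for step in range(3):
    x = torch.randn(4, 8, device = accelerator.device)
    loss = model(x).sum()
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
    accelerator.print(f'step {step} ok')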

lucidrains commented 1 year ago

@jpfeil ah, i don't know from first glance at the code, and don't have access to multi-gpu at the moment

lucidrains commented 1 year ago

@jpfeil did you resolve the other two issues that are open on single gpu?

lucidrains commented 1 year ago

@jpfeil could you try commenting out these two lines and see if it gets past the first step?

jpfeil commented 1 year ago

@lucidrains I couldn't get multi-gpu to work, so I'm moving forward with single-gpu. I tried training on ImageNet, but the adaptive adversarial weight goes to NaN, which causes the loss to become NaN:

LossBreakdown(
    recon_loss=tensor(0.0777, device='cuda:0', grad_fn=),
    lfq_aux_loss=tensor(0.0022, device='cuda:0', grad_fn=),
    quantizer_loss_breakdown=LossBreakdown(
        per_sample_entropy=tensor(0.0003, device='cuda:0', grad_fn=),
        batch_entropy=tensor(0.0003, device='cuda:0', grad_fn=),
        commitment=tensor(0.0024, device='cuda:0', grad_fn=)
    ),
    perceptual_loss=tensor(0.2947, device='cuda:0', grad_fn=),
    adversarial_gen_loss=tensor(0.0186, device='cuda:0', grad_fn=),
    adaptive_adversarial_weight=tensor(nan, device='cuda:0'),
    multiscale_gen_losses=[],
    multiscale_gen_adaptive_weights=[]
)

Is there a check we can add here that will allow the training to continue?
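Something like the guard below is what I have in mind -- just a sketch, with recon_grad_norm / gen_grad_norm standing in for however the repo actually computes the gradient norms:

import torch

# hypothetical guard: if the generator gradient norm is ~0 or either norm is NaN,
# the ratio blows up, so replace non-finite values and clamp instead of propagating NaN
def safe_adaptive_weight(recon_grad_norm, gen_grad_norm, max_weight = 1e4):
    weight = recon_grad_norm / gen_grad_norm.clamp(min = 1e-8)
    weight = torch.nan_to_num(weight, nan = 0., posinf = max_weight, neginf = 0.)
    return weight.clamp(max = max_weight).detach()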

lucidrains commented 1 year ago

@jpfeil ahh, hard to know without doing training myself and ironing out the issues

try 0.1.43, and if that doesn't work, i'll get around to it this weekend

hzphzp commented 10 months ago

Same issue here, it hangs when training with multiple GPUs.

ziyannchen commented 7 months ago

Caught the same problem here. Multi-GPU training gets stuck while single-GPU training works fine. I did some debugging: the first step always completes, but in the second step the last self.accelerator.backward of the gradient-accumulation loop hangs. Specifically, in trainer.py:

def train_step(self, dl_iter):
    for grad_accum_step in range(self.grad_accum_every):
        ...
        is_last = grad_accum_step == (self.grad_accum_every - 1)
        context = partial(self.accelerator.no_sync, self.model) if not is_last else nullcontext

        data, *_ = next(dl_iter)
        self.print(f'accum step {grad_accum_step} {data} {data.shape}')

        with self.accelerator.autocast(), context():
            loss, loss_breakdown = self.model(
                data,
                return_loss = True,
                adversarial_loss_weight = adversarial_loss_weight,
                multiscale_adversarial_loss_weight = multiscale_adversarial_loss_weight
            )
            self.print(f'l355 loss {loss.shape} {loss}')
            self.accelerator.backward(loss / self.grad_accum_every) # stuck here in the last accum step
            self.print('l357 backward') # this never prints until timeout (only in the last accum iter of the second step)

Also, at the same point (the last accumulation backward step) of the first step, there is a warning:

UserWarning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed.  This is not an error but may impair performance.
grad.sizes() = [32, 64, 1, 1], strides() = [64, 1, 64, 64]
bucket_view.sizes() = [32, 64, 1, 1], strides() = [64, 1, 1, 1]

I'm not sure if they are related problems.
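If anyone wants to track down which layer triggers that warning, a quick check like this after a backward pass should point at it -- just a diagnostic sketch, with model standing in for the unwrapped tokenizer:

# diagnostic sketch: list parameters whose gradient layout does not match the
# parameter layout, which is what the DDP bucket-view warning is about
for name, param in model.named_parameters():
    if param.grad is not None and not param.grad.is_contiguous():
        print(name, tuple(param.shape), param.grad.stride())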

ziyannchen commented 6 months ago

I've done some more debugging. At first I suspected other causes for the hang, such as my Linux kernel being too old for the latest versions of torch and accelerate, or an unsupported mixed-precision setting.

However, it turns out my problem is actually highly related to https://discuss.pytorch.org/t/torch-distributed-barrier-hangs-in-ddp/114522/7.

It is the validation step running only on the main process that causes the hang. Change the following line in class VideoTokenizerTrainer in trainer.py:

def valid_step(...):
    # self.model(...)
    # change the line above to call the local model instead of the DDP-wrapped one
    self.model.module(...)

This solved the multi-GPU training hang for me.
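An equivalent, maybe cleaner, way is to go through accelerate's unwrap_model instead of reaching into .module directly -- just a sketch of the same idea inside valid_step, with valid_data as a placeholder:

# same fix via accelerate's API: unwrap_model returns the underlying module
# whether or not it is wrapped in DDP, so it also works on a single GPU
unwrapped_model = self.accelerator.unwrap_model(self.model)

with torch.no_grad():
    unwrapped_model(valid_data, return_loss = True)

Either way, the point is that the validation forward pass on the main process no longer goes through the DDP wrapper, so the other ranks are not left waiting on a gradient sync that never happens.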

lucidrains commented 6 months ago

@ziyannchen hey, thanks for the debug

do you want to see if 0.4.3 works without your modification?