Lightning-Universe / lightning-transformers

Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning
https://lightning-transformers.readthedocs.io
Apache License 2.0

DeepSpeed integration broken: Tensors must be CUDA and dense #259

Closed · xinyangz closed this issue 2 years ago

xinyangz commented 2 years ago

🐛 Bug

To Reproduce

Installed the latest lightning-transformers, pytorch-lightning, and deepspeed; see versions below.

Code sample

Following the latest example from the docs:

import pytorch_lightning as pl
from transformers import AutoTokenizer

from lightning_transformers.task.nlp.text_classification import (
    TextClassificationDataModule,
    TextClassificationTransformer,
    TextClassificationDataConfig,
)

tokenizer = AutoTokenizer.from_pretrained(
    pretrained_model_name_or_path="bert-base-cased"
)
dm = TextClassificationDataModule(
    cfg=TextClassificationDataConfig(
        batch_size=1,
        dataset_name="emotion",
        max_length=512,
    ),
    tokenizer=tokenizer,
)
model = TextClassificationTransformer(pretrained_model_name_or_path="bert-base-cased")

# strategy="deepspeed" with precision=16 is what triggers the failure below
trainer = pl.Trainer(accelerator="auto", devices="auto", max_epochs=1, strategy="deepspeed", precision=16)

trainer.fit(model, dm)

Error Message

Traceback (most recent call last):
  File "ds.py", line 26, in <module>
    trainer.fit(model, dm)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1217, in _run
    self.strategy.setup(self)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/deepspeed.py", line 362, in setup
    self.init_deepspeed()
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/deepspeed.py", line 461, in init_deepspeed
    self._initialize_deepspeed_train(model)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/deepspeed.py", line 490, in _initialize_deepspeed_train
    optimizer, lr_scheduler, _ = self._init_optimizers()
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/deepspeed.py", line 466, in _init_optimizers
    optimizers, lr_schedulers, optimizer_frequencies = _init_optimizers_and_lr_schedulers(self.lightning_module)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 180, in _init_optimizers_and_lr_schedulers
    optim_conf = model.trainer._call_lightning_module_hook("configure_optimizers", pl_module=model)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/lightning_transformers/core/model.py", line 84, in configure_optimizers
    num_training_steps, num_warmup_steps = self.compute_warmup(
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/lightning_transformers/core/model.py", line 128, in compute_warmup
    num_training_steps = self.num_training_steps
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/lightning_transformers/core/model.py", line 123, in num_training_steps
    return self.trainer.estimated_stepping_batches
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 2824, in estimated_stepping_batches
    self.reset_train_dataloader()
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1891, in reset_train_dataloader
    if has_len_all_ranks(self.train_dataloader, self.strategy, module)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/utilities/data.py", line 124, in has_len_all_ranks
    total_length = training_type.reduce(torch.tensor(local_length).to(model.device), reduce_op="sum")
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 344, in reduce
    tensor = sync_ddp_if_available(tensor, group, reduce_op=reduce_op)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 95, in sync_ddp_if_available
    return sync_ddp(result, group=group, reduce_op=reduce_op)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py", line 137, in sync_ddp
    torch.distributed.all_reduce(result, op=op, group=group, async_op=False)
  File "/home/xz43/.pyenv/versions/pl-transformers/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1287, in all_reduce
    work = group.allreduce([tensor], opts)
RuntimeError: Tensors must be CUDA and dense

Expected behavior

The code should run and train without error.

Environment

Additional context

The error seems to come from compute_warmup: accessing self.trainer.estimated_stepping_batches forces the train dataloader to be loaded (reset_train_dataloader), and has_len_all_ranks then all-reduces a length tensor placed on model.device, which is still on CPU at this point in DeepSpeed setup, so the NCCL backend rejects it. Might be a pytorch-lightning bug rather than a lightning-transformers one.
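For reference, the failing pattern can be reproduced in isolation: with the NCCL backend, torch.distributed.all_reduce only accepts CUDA tensors. A minimal sketch (my own illustration, not library code), assuming a single node with two GPUs launched via torchrun:

# repro_sketch.py -- run with: torchrun --nproc_per_node=2 repro_sketch.py
# Illustrates the error in isolation: NCCL's all_reduce rejects CPU tensors.
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # torchrun supplies rank/world-size env vars

local_length = torch.tensor(16000)  # CPU tensor, like the dataloader length in has_len_all_ranks
# dist.all_reduce(local_length)     # would raise: RuntimeError: Tensors must be CUDA and dense

device = torch.device("cuda", dist.get_rank())
dist.all_reduce(local_length.to(device))  # succeeds once the tensor lives on the GPU

dist.destroy_process_group()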

SeanNaren commented 2 years ago

This issue is related to this: https://github.com/Lightning-AI/lightning/issues/12317#issuecomment-1151113583

SeanNaren commented 2 years ago

This should be fixed on Lightning master; we're just waiting on another release before closing this issue.
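Until that release is out, a possible stopgap (an assumption on my part, not an official workaround) is to stop configure_optimizers from querying trainer.estimated_stepping_batches by overriding the num_training_steps property that shows up in the traceback:

# Hypothetical stopgap until the fixed pytorch-lightning release ships:
# hard-code the step count so compute_warmup never touches
# trainer.estimated_stepping_batches during DeepSpeed setup.
from lightning_transformers.task.nlp.text_classification import TextClassificationTransformer

class PatchedTextClassificationTransformer(TextClassificationTransformer):
    @property
    def num_training_steps(self) -> int:
        # e.g. (dataset size // effective batch size) * max_epochs for your run
        return 16000

model = PatchedTextClassificationTransformer(pretrained_model_name_or_path="bert-base-cased")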

SeanNaren commented 2 years ago

With the latest release of pytorch-lightning, this should be fixed!
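If anyone still hits this, it's worth confirming the upgrade actually took effect (the thread doesn't name the exact patch release, so treat the required version as unspecified):

# Sanity check after upgrading pytorch-lightning.
import pytorch_lightning as pl
print(pl.__version__)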