Lightning-Universe / lightning-transformers

Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning
https://lightning-transformers.readthedocs.io
Apache License 2.0

[Bug] Problems when training with hydra #248

Closed dongheehand closed 2 years ago

dongheehand commented 2 years ago

🐛 Bug

To Reproduce

Steps to reproduce the behavior:

Run the following command

HYDRA_FULL_ERROR=1 python train.py task=nlp/language_modeling dataset=nlp/language_modeling/wikitext trainer.devices=1 training.batch_size=8

Error msg

Error executing job with overrides: ['task=nlp/language_modeling', 'dataset=nlp/language_modeling/wikitext', 'trainer.devices=1', 'training.batch_size=8']
Traceback (most recent call last):
  File "train.py", line 14, in <module>
    hydra_entry()
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/main.py", line 95, in decorated_main
    config_name=config_name,
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/utils.py", line 396, in _run_hydra
    overrides=overrides,
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/utils.py", line 453, in _run_app
    lambda: hydra.run(
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/utils.py", line 216, in run_and_report
    raise ex
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/utils.py", line 213, in run_and_report
    return func()
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/utils.py", line 456, in <lambda>
    overrides=overrides,
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/opt/conda/lib/python3.7/site-packages/hydra_core-1.2.0.dev5-py3.7.egg/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "train.py", line 10, in hydra_entry
    main(cfg)
  File "/data/private/lightning-transformers_3/lightning-transformers/lightning_transformers/cli/train.py", line 76, in main
    logger=logger,
  File "/data/private/lightning-transformers_3/lightning-transformers/lightning_transformers/cli/train.py", line 51, in run
    data_module.setup("fit")
  File "/data/private/lightning-transformers_3/lightning-transformers/lightning_transformers/core/data.py", line 36, in setup
    dataset = self.load_dataset()
  File "/data/private/lightning-transformers_3/lightning-transformers/lightning_transformers/core/data.py", line 64, in load_dataset
    revision=self.cfg.revision,
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/dictconfig.py", line 354, in __getattr__
    key=key, value=None, cause=e, type_override=ConfigAttributeError
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/base.py", line 196, in _format_and_raise
    type_override=type_override,
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/_utils.py", line 818, in format_and_raise
    _raise(ex, cause)
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/_utils.py", line 716, in _raise
    raise ex.with_traceback(sys.exc_info()[2])  # set end OC_CAUSE=1 for full backtrace
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/dictconfig.py", line 351, in __getattr__
    return self._get_impl(key=key, default_value=_DEFAULT_MARKER_)
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/dictconfig.py", line 438, in _get_impl
    node = self._get_node(key=key, throw_on_missing_key=True)
  File "/opt/conda/lib/python3.7/site-packages/omegaconf/dictconfig.py", line 470, in _get_node
    raise ConfigKeyError(f"Missing key {key}")
omegaconf.errors.ConfigAttributeError: Missing key revision
    full_key: revision
    object_type=dict

Expected behavior

The training begins

The cause of the error

The cause of the error is that, when training with Hydra, the cfg argument passed to the DataModule class (such as TransformerDataModule) is not a DataConfig instance (such as TransformerDataConfig) but a plain DictConfig. A DictConfig does not contain hyper-parameters that were not specified in the config file, so accessing them fails. The error occurs when the DataModule is created by the instantiate function. (Please refer to lightning_transformers/core/instantiator.py)
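
For illustration, here is a minimal sketch of the difference. The DataConfig class below is a hypothetical stand-in, not the real TransformerDataConfig:

from dataclasses import dataclass
from typing import Optional

from omegaconf import OmegaConf

# Hypothetical stand-in for a config dataclass: fields the user did not set
# still exist on the instance because the dataclass declares defaults.
@dataclass
class DataConfig:
    batch_size: int = 8
    revision: Optional[str] = None

print(DataConfig().revision)  # None - the attribute exists

# A plain DictConfig only contains the keys present in the YAML/overrides,
# so accessing an unspecified key fails (with recent OmegaConf versions).
dict_cfg = OmegaConf.create({"batch_size": 8})
print(dict_cfg.revision)  # omegaconf.errors.ConfigAttributeError: Missing key revision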

I think https://github.com/PyTorchLightning/lightning-transformers/issues/236 is the same issue as mine!

How to fix the error

I think there are several possible ways to fix this bug. To fix it, I converted the DictConfig to a DataConfig when creating the DataModule.

I changed the code from

class TransformerDataModule(pl.LightningDataModule):
    """Base ``LightningDataModule`` for HuggingFace Datasets. Provides helper functions and boilerplate logic to
    load/process datasets.

    Args:
        tokenizer: ``PreTrainedTokenizerBase`` for tokenizing data.
        cfg: Contains data specific parameters when processing/loading the dataset (Default ``HFTransformerDataConfig``)
    """

    cfg: TransformerDataConfig
    tokenizer: PreTrainedTokenizerBase

    def __init__(
        self, tokenizer: PreTrainedTokenizerBase, cfg: TransformerDataConfig = TransformerDataConfig()
    ) -> None:
        super().__init__()
        self.tokenizer = tokenizer
        self.cfg = cfg
        os.environ["TOKENIZERS_PARALLELISM"] = "TRUE"  # todo: smarter handling of this env variable

to

class TransformerDataModule(pl.LightningDataModule):
    """Base ``LightningDataModule`` for HuggingFace Datasets. Provides helper functions and boilerplate logic to
    load/process datasets.

    Args:
        tokenizer: ``PreTrainedTokenizerBase`` for tokenizing data.
        cfg: Contains data specific parameters when processing/loading the dataset (Default ``HFTransformerDataConfig``)
    """

    cfg: TransformerDataConfig
    tokenizer: PreTrainedTokenizerBase

    def __init__(
        self, tokenizer: PreTrainedTokenizerBase, cfg: Union[TransformerDataConfig, DictConfig] = TransformerDataConfig()
    ) -> None:
        super().__init__()
        self.tokenizer = tokenizer
        # Convert a plain DictConfig into the annotated config dataclass so that
        # fields not present in the config fall back to their dataclass defaults.
        self.cfg = cfg if isinstance(cfg, self.__annotations__['cfg']) else self.__annotations__['cfg'](**cfg)
        os.environ["TOKENIZERS_PARALLELISM"] = "TRUE"  # todo: smarter handling of this env variable
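
One possible alternative (just a sketch, untested against the repo; to_data_config and schema_cls are hypothetical names) would be to do the conversion generically with OmegaConf's structured-config support, so each DataModule does not need its own isinstance check:

from omegaconf import DictConfig, OmegaConf

def to_data_config(cfg: DictConfig, schema_cls):
    # Merge the loaded DictConfig into a structured schema built from the
    # config dataclass; fields not present in cfg keep their dataclass
    # defaults, and OmegaConf.to_object returns a schema_cls instance.
    schema = OmegaConf.structured(schema_cls)
    merged = OmegaConf.merge(schema, cfg)
    return OmegaConf.to_object(merged)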

If there are better solutions, please write a comment in this issue!

dongheehand commented 2 years ago

Another error occurs when training with Hydra. I had already fixed my code to convert the DictConfig to a DataConfig when creating the DataModule, as described above.

To reproduce

  1. Change the code as described in the comment above
  2. Run the following command
    HYDRA_FULL_ERROR=1 python train.py task=nlp/language_modeling dataset=nlp/language_modeling/wikitext trainer.devices=1 training.batch_size=8

Error Msg

    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/trainer/trainer.py", line 1215, in _run
    self.strategy.setup(self)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/strategies/ddp.py", line 155, in setup
    super().setup(trainer)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/strategies/strategy.py", line 139, in setup
    self.setup_optimizers(trainer)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/strategies/strategy.py", line 129, in setup_optimizers
    self.lightning_module
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/core/optimizer.py", line 180, in _init_optimizers_and_lr_schedulers
    optim_conf = model.trainer._call_lightning_module_hook("configure_optimizers", pl_module=model)
  File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning-1.6.3-py3.7.egg/pytorch_lightning/trainer/trainer.py", line 1593, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/data/private/lightning-transformers_3/lightning-transformers/lightning_transformers/core/model.py", line 105, in configure_optimizers
    scheduler = self.instantiator.scheduler(self.scheduler_cfg, self.optimizer)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__
    type(self).__name__, name))
AttributeError: 'LanguageModelingTransformer' object has no attribute 'optimizer'

The cause of the error

TaskTransformer has no attribute 'optimizer', but the attribute 'optimizer' is used. Please refer to line 105 of lightning_transformers/core/model.py (https://github.com/PyTorchLightning/lightning-transformers/blob/master/lightning_transformers/core/model.py#L105).

How to fix the error

https://github.com/PyTorchLightning/lightning-transformers/blob/master/lightning_transformers/core/model.py#L105

        scheduler = self.instantiator.scheduler(self.scheduler_cfg, self.optimizer)

should be

        scheduler = self.instantiator.scheduler(self.scheduler_cfg, optimizer)
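
For context, a rough sketch of how the surrounding configure_optimizers could look with that change applied (this is an approximation, not a verbatim copy of the repo code):

def configure_optimizers(self):
    # The optimizer is created locally via the instantiator...
    optimizer = self.instantiator.optimizer(self.model, self.optimizer_cfg)
    # ...so the scheduler must receive that local variable; self.optimizer is
    # never assigned on the module, which is what triggers the AttributeError above.
    scheduler = self.instantiator.scheduler(self.scheduler_cfg, optimizer)
    return {"optimizer": optimizer, "lr_scheduler": scheduler}
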
SeanNaren commented 2 years ago

Hi @dongheehand, thank you for the super informative debugging! I have opened a PR for the optimizer issue, thank you.

Out of curiosity, would you be interested in dropping the Hydra portion and just using the classes directly? We're slowly moving towards this with #223 and #246, and we'd be curious whether the class-based approach could be useful for you. For the language modeling task this can be done via a script:

import pytorch_lightning as pl
from transformers import AutoTokenizer

from lightning_transformers.task.nlp.language_modeling import (
    LanguageModelingDataConfig,
    LanguageModelingDataModule,
    LanguageModelingTransformer,
)

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path="gpt2")
    model = LanguageModelingTransformer(pretrained_model_name_or_path="gpt2")
    dm = LanguageModelingDataModule(
        cfg=LanguageModelingDataConfig(
            batch_size=1,
            dataset_name="wikitext",
            dataset_config_name="wikitext-2-raw-v1",
        ),
        tokenizer=tokenizer,
    )
    trainer = pl.Trainer(accelerator="auto", devices="auto", max_epochs=1)

    trainer.fit(model, dm)

Eventually we'll get rid of the configs as well, making it even simpler.

dongheehand commented 2 years ago

Thank you! If I have an opinion about that, I will comment on issue #223 (https://github.com/PyTorchLightning/lightning-transformers/issues/223).