Lightning-Universe / lightning-transformers

Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning
https://lightning-transformers.readthedocs.io
Apache License 2.0

basic squad example is not working #218

Closed yuvalkirstain closed 2 years ago

yuvalkirstain commented 2 years ago

🐛 Bug

Running the basic example python train.py task=nlp/question_answering dataset=nlp/question_answering/squad trainer.gpus=1 raises an exception.

To Reproduce

Steps to reproduce the behavior:

  1. Run python train.py task=nlp/question_answering dataset=nlp/question_answering/squad trainer.gpus=1.
  2. See error
Error executing job with overrides: ['task=nlp/question_answering', 'dataset=nlp/question_answering/squad', 'trainer.gpus=1']
Traceback (most recent call last):
  File "train.py", line 10, in hydra_entry
    main(cfg)
  File "/home/olab/kirstain/original_lt/lightning_transformers/cli/train.py", line 69, in main
    run(
  File "/home/olab/kirstain/original_lt/lightning_transformers/cli/train.py", line 60, in run
    trainer.fit(model, datamodule=data_module)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 552, in fit
    self._run(model)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 922, in _run
    self._dispatch()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 990, in _dispatch
    self.accelerator.start_training(self)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1000, in run_stage
    return self._run_train()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1035, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1122, in _run_sanity_check
    self._evaluation_loop.run()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 118, in run
    output = self.on_run_end()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 136, in on_run_end
    self.on_evaluation_epoch_end()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 248, in on_evaluation_epoch_end
    self.trainer.call_hook(hook_name)
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1235, in call_hook
    output = hook_fx(*args, **kwargs)
  File "/home/olab/kirstain/original_lt/lightning_transformers/task/nlp/question_answering/model.py", line 59, in on_validation_epoch_end
    metric_dict = self.metric.compute()
  File "/home/olab/kirstain/anaconda3/envs/lightning-transformers/lib/python3.8/site-packages/torchmetrics/metric.py", line 367, in wrapped_func
    self._computed = compute(*args, **kwargs)
  File "/home/olab/kirstain/original_lt/lightning_transformers/task/nlp/question_answering/datasets/squad/metric.py", line 29, in compute
    predictions, references = self.postprocess_func(predictions=predictions)
  File "/home/olab/kirstain/original_lt/lightning_transformers/task/nlp/question_answering/datasets/squad/data.py", line 46, in postprocess_func
    return post_processing_function(
  File "/home/olab/kirstain/original_lt/lightning_transformers/task/nlp/question_answering/datasets/squad/processing.py", line 179, in post_processing_function
    predictions = postprocess_qa_predictions(
  File "/home/olab/kirstain/original_lt/lightning_transformers/task/nlp/question_answering/datasets/squad/processing.py", line 247, in postprocess_qa_predictions
    assert len(predictions[0]) == len(features), f"Got {len(predictions[0])} predictions and {len(features)} features."
AssertionError: Got 32 predictions and 10822 features.

Environment

yuvalkirstain commented 2 years ago

The problem is the validation sanity check: the evaluation stage assumes that predictions for all validation features have been gathered (the assert len(predictions[0]) == len(features) in squad/processing.py), but the sanity check only runs a few batches. Until this is handled properly, you can run the script by adding trainer.num_sanity_val_steps=0:

python train.py task=nlp/question_answering dataset=nlp/question_answering/squad trainer.num_sanity_val_steps=0

Note that there are other use cases that will also fail at the evaluation stage for the same reason, such as training on multiple GPUs.
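
A minimal sketch of one possible guard (not the repo's actual fix): skip the SQuAD metric computation while the trainer is sanity checking, so the post-processing only runs once predictions for the full validation set have been accumulated. The subclass name is hypothetical, the import path is assumed from the file in the traceback, self.metric and on_validation_epoch_end are the names seen there, and trainer.sanity_checking is a standard PyTorch Lightning flag.

from lightning_transformers.task.nlp.question_answering import QuestionAnsweringTransformer


class SanityCheckSafeQAModel(QuestionAnsweringTransformer):
    """Hypothetical subclass: skips the SQuAD post-processing while the trainer
    is sanity checking, since only a few validation batches have been seen and
    the prediction count cannot match the number of dataset features."""

    def on_validation_epoch_end(self) -> None:
        if self.trainer is not None and self.trainer.sanity_checking:
            # Drop the partial predictions collected during the sanity check
            # instead of computing the metric and tripping the assertion.
            self.metric.reset()
            return
        super().on_validation_epoch_end()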

mathemusician commented 2 years ago

This issue was also raised in #184. My solution to it should also work here. There should be a way to make -1 the default value in the YAML file used by Hydra.
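
Assuming the "-1" refers to the trainer's num_sanity_val_steps (in PyTorch Lightning, -1 runs the sanity check over the entire validation set, so every feature would get a prediction), the equivalent command-line override would be:

python train.py task=nlp/question_answering dataset=nlp/question_answering/squad trainer.num_sanity_val_steps=-1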

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.