Lightning-Universe / lightning-transformers

Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning
https://lightning-transformers.readthedocs.io
Apache License 2.0

I see AssertionError when running QA example command. #184

Closed kyoungrok0517 closed 2 years ago

kyoungrok0517 commented 3 years ago

🐛 Bug

Hello. I see the following error when running the question answering example.

Traceback (most recent call last):
  File "train.py", line 10, in hydra_entry
    main(cfg)
  File "/data/Code/lightning-transformers/lightning_transformers/cli/train.py", line 70, in main
    run(
  File "/data/Code/lightning-transformers/lightning_transformers/cli/train.py", line 61, in run
    trainer.fit(model, datamodule=data_module)
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
    self._run(model)
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
    self.dispatch()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
    self.accelerator.start_training(self)
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
    self._results = trainer.run_stage()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
    return self.run_train()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 844, in run_train
    self.run_sanity_check(self.lightning_module)
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in run_sanity_check
    self.run_evaluation()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_evaluation
    self.evaluation_loop.on_evaluation_epoch_end()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 265, in on_evaluation_epoch_end
    model_hook_fx()
  File "/data/Code/lightning-transformers/lightning_transformers/task/nlp/question_answering/model.py", line 60, in on_validation_epoch_end
    metric_dict = self.metric.compute()
  File "/home/kyoungrok/anaconda3/envs/odqa/lib/python3.8/site-packages/torchmetrics/metric.py", line 370, in wrapped_func
    self._computed = compute(*args, **kwargs)
  File "/data/Code/lightning-transformers/lightning_transformers/task/nlp/question_answering/datasets/squad/metric.py", line 28, in compute
    predictions, references = self.postprocess_func(predictions=predictions)
  File "/data/Code/lightning-transformers/lightning_transformers/task/nlp/question_answering/datasets/squad/data.py", line 47, in postprocess_func
    return post_processing_function(
  File "/data/Code/lightning-transformers/lightning_transformers/task/nlp/question_answering/datasets/squad/processing.py", line 179, in post_processing_function
    predictions = postprocess_qa_predictions(
  File "/data/Code/lightning-transformers/lightning_transformers/task/nlp/question_answering/datasets/squad/processing.py", line 247, in postprocess_qa_predictions
    assert len(predictions[0]) == len(features), f"Got {len(predictions[0])} predictions and {len(features)} features."
AssertionError: Got 32 predictions and 10822 features.

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

To Reproduce

Run the example code.

python train.py task=nlp/question_answering dataset=nlp/question_answering/squad trainer.gpus=1

Environment

PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.8.10 (default, Jun  4 2021, 15:09:15)  [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: RTX A6000
GPU 1: RTX A6000

Nvidia driver version: 460.80
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.20.3
[pip3] pytorch-lightning==1.3.8
[pip3] torch==1.9.0
[pip3] torchaudio==0.9.0a0+33b2469
[pip3] torchmetrics==0.4.1
[pip3] torchvision==0.10.0
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               11.1.74              h6bb024c_0    nvidia
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] mkl                       2021.3.0           h06a4308_520
[conda] mkl-service               2.4.0            py38h7f8727e_0
[conda] mkl_fft                   1.3.0            py38h42c9631_2
[conda] mkl_random                1.2.2            py38h51133e4_0
[conda] numpy                     1.20.3           py38hf144106_0
[conda] numpy-base                1.20.3           py38h74d4b33_0
[conda] pytorch                   1.9.0           py3.8_cuda11.1_cudnn8.0.5_0    pytorch
[conda] pytorch-lightning         1.3.8                    pypi_0    pypi
[conda] torchaudio                0.9.0                      py38    pytorch
[conda] torchmetrics              0.4.1                    pypi_0    pypi
[conda] torchvision               0.10.0               py38_cu111    pytorch
mathemusician commented 2 years ago

This bug took me quite a while to solve. The clue is in the final assertion: `Got 32 predictions and 10822 features.` Before training starts, Lightning automatically runs a sanity check of 2 validation steps, and with a batch size of 16 that yields 16 × 2 = 32 predictions. `SquadMetric` stores a list of prediction tensors and appends to it on each validation step. When validation ends, a post-processing step runs, but it expects predictions for all 10822 features in the validation set. Because the sanity-check loop produced only 32 predictions, the assertion fails.
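The arithmetic can be sketched in plain Python (a hypothetical illustration, not the library's code — `collect_predictions` just simulates the metric's prediction buffer):

```python
import math

BATCH_SIZE = 16        # batch size used by the example config
NUM_FEATURES = 10822   # tokenized features in the SQuAD validation set

def collect_predictions(num_batches):
    """Simulate the metric buffer after `num_batches` validation steps."""
    predictions = []
    for _ in range(num_batches):
        predictions.extend([0.0] * BATCH_SIZE)  # one prediction per example
    return predictions

# Sanity check: Lightning runs num_sanity_val_steps batches (default 2).
sanity = collect_predictions(2)
assert len(sanity) == 32  # matches "Got 32 predictions and 10822 features."

# Full validation pass: enough batches to cover every feature.
full = collect_predictions(math.ceil(NUM_FEATURES / BATCH_SIZE))
assert len(full) >= NUM_FEATURES
```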

The solution is to add this extra flag to your Hydra command:

trainer.num_sanity_val_steps=-1

This tells pytorch-lightning to go through the entire validation set during the sanity check, giving you 10822 predictions and 10822 features. See `num_sanity_val_steps` in the PyTorch Lightning Trainer docs for details.
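Combined with the reproduction command from the report, the full invocation would be:

```shell
python train.py task=nlp/question_answering dataset=nlp/question_answering/squad \
  trainer.gpus=1 trainer.num_sanity_val_steps=-1
```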

The post-processing step exists because of legacy code. It could be refactored, but keeping it makes benchmarking against the original transformers library more convincing, since their metrics code can be copied verbatim.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.