HHousen / TransformerSum

Models to perform neural summarization (extractive and abstractive) using machine learning transformers and a tool to convert abstractive summarization datasets to the extractive task.
https://transformersum.rtfd.io
GNU General Public License v3.0

AttributeError: [MODEL_CONFIG] object has no attribute 'encoder' #55

Closed: JoachimJaafar closed this issue 3 years ago

JoachimJaafar commented 3 years ago

I'm trying to run this command: python main.py --mode abstractive --model_name_or_path t5-base --cache_file_path data --max_epochs 4 --do_train --do_test --batch_size 1 --weights_save_path model_weights --no_wandb_logger_log_model --accumulate_grad_batches 5 --use_scheduler linear --warmup_steps 8000 --gradient_clip_val 1.0 --custom_checkpoint_every_n 10000 --gpus 1 --dataset ../data_chunk.*

My env:

name: transformersum
channels:
    - conda-forge
    - pytorch
dependencies:
    - pytorch
    - scikit-learn
    - tensorboard
    - spacy
    - spacy-model-en_core_web_sm
    - sphinx
    - pyarrow=4
    - pip
    - pip:
      - pytorch_lightning
      - transformers==4.8.0
      - torch_optimizer
      - click==7.0
      - wandb
      - rouge-score
      - packaging
      - datasets
      - gdown
      - gradio
      - torch==1.8.1+cu111
      - torchvision==0.9.1+cu111
      - torchaudio==0.8.1
      - -f https://download.pytorch.org/whl/torch_stable.html
variables:
    TOKENIZERS_PARALLELISM: true
    LC_ALL: C.UTF-8
    LANG: C.UTF-8

Whenever I run the command with a --model_name_or_path other than bert-base-uncased (for instance t5-base), I get this error:

  File "main.py", line 457, in <module>
    main(main_args)
  File "main.py", line 119, in main
    trainer.fit(model)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1115, in _run_sanity_check
    self._evaluation_loop.run()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 111, in advance
    dataloader_iter, self.current_dataloader_idx, dl_max_batches, self.num_dataloaders
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 110, in advance
    output = self.evaluation_step(batch, batch_idx, dataloader_idx)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 154, in evaluation_step
    output = self.trainer.accelerator.validation_step(step_kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 211, in validation_step
    return self.training_type_plugin.validation_step(*step_kwargs.values())
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 178, in validation_step
    return self.model.validation_step(*args, **kwargs)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 704, in validation_step
    cross_entropy_loss = self._step(batch)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 689, in _step
    outputs = self.forward(source, target, source_mask, target_mask, labels=labels)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 254, in forward
    loss = self.calculate_loss(prediction_scores, labels)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 669, in calculate_loss
    prediction_scores.view(-1, self.model.config.encoder.vocab_size), labels.view(-1)
AttributeError: 'T5Config' object has no attribute 'encoder'
HHousen commented 3 years ago

I believe you have an old version of TransformerSum. Try running git pull to update to the latest version. This bug is already fixed in abstractive.py: https://github.com/HHousen/TransformerSum/blob/master/src/abstractive.py#L674.
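
For context, the change reads vocab_size from the top-level model config instead of a nested one: config.encoder only exists on composed EncoderDecoderConfig objects, while single-model configs like T5Config keep vocab_size at the top level. A minimal before/after sketch, reconstructed from the two tracebacks in this thread:

# Before: breaks for T5Config, which has no .encoder attribute.
prediction_scores.view(-1, self.model.config.encoder.vocab_size), labels.view(-1)

# After (current master): vocab_size is read from the top-level config.
prediction_scores.view(-1, self.model.config.vocab_size), labels.view(-1)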

JoachimJaafar commented 3 years ago

Thanks, now it looks like I have another problem at the same line during the validation sanity check:

Traceback (most recent call last):
  File "main.py", line 490, in <module>
    main(main_args)
  File "main.py", line 125, in main
    trainer.fit(model)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1031, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1115, in _run_sanity_check
    self._evaluation_loop.run()
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 111, in advance
    dataloader_iter, self.current_dataloader_idx, dl_max_batches, self.num_dataloaders
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 110, in advance
    output = self.evaluation_step(batch, batch_idx, dataloader_idx)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 154, in evaluation_step
    output = self.trainer.accelerator.validation_step(step_kwargs)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 211, in validation_step
    return self.training_type_plugin.validation_step(*step_kwargs.values())
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 178, in validation_step
    return self.model.validation_step(*args, **kwargs)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 709, in validation_step
    cross_entropy_loss = self._step(batch)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 694, in _step
    outputs = self.forward(source, target, source_mask, target_mask, labels=labels)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 256, in forward
    loss = self.calculate_loss(prediction_scores, labels)
  File "/root/project/test/TransformerSum/src/abstractive.py", line 674, in calculate_loss
    prediction_scores.view(-1, self.model.config.vocab_size), labels.view(-1)
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/root/project/test/TransformerSum/src/helpers.py", line 282, in forward
    return F.kl_div(output, model_prob, reduction="batchmean")
  File "/opt/conda/envs/transformersum/lib/python3.6/site-packages/torch/nn/functional.py", line 2622, in kl_div
    reduced = torch.kl_div(input, target, reduction_enum, log_target=log_target)
RuntimeError: The size of tensor a (32100) must match the size of tensor b (32128) at non-singleton dimension 1
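
The two sizes in the error line up with a quirk of t5-base: the tokenizer defines 32100 tokens, while the checkpoint's embedding matrix (and config.vocab_size) is padded out to 32128. A quick way to see both numbers (a minimal check, assuming a stock t5-base):

from transformers import T5Config, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
config = T5Config.from_pretrained("t5-base")

print(len(tokenizer))     # 32100 -- tokens the tokenizer actually defines
print(config.vocab_size)  # 32128 -- embedding rows, padded for efficiency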
azouiaymen commented 3 years ago

I have the same problem.

HHousen commented 3 years ago

Hello @JoachimJaafar and @azouiaymen. This issue is difficult to debug since there are a lot of possible causes. However, I may have a solution. Try changing that line (TransformerSum/src/abstractive.py line 674) to this: prediction_scores.view(-1, prediction_scores.size(-1)), labels.view(-1). If this works for you, I'll merge the change into the master branch.

For reference, this pattern of sizing the view from the logits themselves is commonly used in Hugging Face's own modeling code:
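
import torch
import torch.nn as nn

# Illustration with dummy shapes (batch * seq_len = 8, vocab = 32128). Sizing
# the flattened logits from the tensor itself stays correct no matter what the
# config or tokenizer report.
lm_logits = torch.randn(8, 32128)
labels = torch.randint(0, 32128, (8,))

loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1))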

JoachimJaafar commented 3 years ago

Unfortunately, I still get the same error. What about you, @azouiaymen?

azouiaymen commented 3 years ago

I tried it, but it still doesn't work. Starting to lose faith, @JoachimJaafar.

HHousen commented 3 years ago

Hello. It's possible that the issue is with the LabelSmoothingLoss class that I copied from OpenNMT. Can you try setting --label_smoothing 0 in your command to see if that fixes the issue? Thanks.
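
If the smoothing distribution inside LabelSmoothingLoss was built for the tokenizer's 32100 tokens while the model's log-probabilities are 32128 wide, F.kl_div would fail exactly as in the traceback above. A minimal reproduction of that suspected mechanism (an assumption, not a confirmed diagnosis):

import torch
import torch.nn.functional as F

# Suspected mismatch: a smoothing distribution sized for 32100 tokens
# against log-probabilities that are 32128 wide.
log_probs = torch.randn(4, 32128).log_softmax(dim=-1)
model_prob = torch.full((4, 32100), 1.0 / 32100)

F.kl_div(log_probs, model_prob, reduction="batchmean")
# RuntimeError: The size of tensor a ... must match the size of tensor b ...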

azouiaymen commented 3 years ago

It works, thanks!

HHousen commented 3 years ago

I was able to run 50 steps with the t5-base model when --label_smoothing was set to 0. For some reason the LabelSmoothingLoss is failing, so TransformerSum needs a more robust implementation.
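
One way to make it more robust (a sketch, assuming the OpenNMT-style interface seen in helpers.py: log-probabilities in, KL divergence out) is to size the smoothing distribution from the logits at forward time instead of from a fixed vocabulary size:

import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustLabelSmoothingLoss(nn.Module):
    """Sketch of an OpenNMT-style label smoothing loss that takes its
    vocabulary size from the logits at forward time, so a tokenizer/config
    mismatch (32100 vs 32128 for t5-base) cannot cause a size error."""

    def __init__(self, label_smoothing: float, ignore_index: int = 0):
        assert 0.0 < label_smoothing <= 1.0
        super().__init__()
        self.confidence = 1.0 - label_smoothing
        self.smoothing = label_smoothing
        self.ignore_index = ignore_index  # must be a valid token id (e.g. the pad id)

    def forward(self, output, target):
        # output: (N, vocab_size) log-probabilities; target: (N,) token ids.
        vocab_size = output.size(-1)  # taken from the logits, not the config
        model_prob = torch.full_like(output, self.smoothing / (vocab_size - 2))
        model_prob.scatter_(1, target.unsqueeze(1), self.confidence)
        model_prob[:, self.ignore_index] = 0
        model_prob.masked_fill_((target == self.ignore_index).unsqueeze(1), 0)
        return F.kl_div(output, model_prob, reduction="batchmean")

Built once with only the smoothing value, this works unchanged whether prediction_scores are 32100 or 32128 wide.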

JoachimJaafar commented 3 years ago

I can confirm that was indeed the problem. Thanks!