huggingface / transformers

πŸ€— Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Seq2Seq training calculate_rouge with precision and recall #9103

Closed: marcoabrate closed this issue 3 years ago

marcoabrate commented 3 years ago

Environment info

Who can help

Trainer: @sgugger
examples/seq2seq: @patil-suraj

Information

Model I am using (Bert, XLNet ...): bart-base

The task I am working on is: summarization on XSUM

To reproduce

Change the calculate_rouge function in utils.py so that return_precision_and_recall=True is used (a sketch of this change is shown after the command below), then fine-tune any seq2seq model with the official finetune.py script:

!python3 $finetune_script \
--model_name_or_path facebook/bart-base \
--tokenizer_name facebook/bart-base \
--data_dir $data_dir \
--learning_rate 3e-5 --label_smoothing 0.1 --num_train_epochs 2 \
--sortish_sampler --freeze_embeds --adafactor \
--task summarization \
--do_train \
--max_source_length 1024 \
--max_target_length 60 \
--val_max_target_length 60 \
--test_max_target_length 100 \
--n_train 8 --n_val 2 \
--train_batch_size 2 --eval_batch_size 2 \
--eval_beams 2 \
--val_check_interval 0.5 \
--log_every_n_steps 1 \
--logger_name wandb \
--output_dir $output_dir \
--overwrite_output_dir \
--gpus 1
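
The exact calculate_rouge edit is not shown in the report; a minimal sketch of what it presumably looks like (the variable names here are illustrative only, not taken from the script):

from utils import calculate_rouge  # examples/seq2seq/utils.py

decoded_preds = ["a model-generated summary"]          # predictions (list of strings)
reference_summaries = ["the reference gold summary"]   # targets (list of strings)

# calculate_rouge already accepts this flag; the reproduction amounts to enabling it
# wherever the validation metrics are computed (or flipping its default in utils.py).
scores = calculate_rouge(
    decoded_preds,
    reference_summaries,
    return_precision_and_recall=True,
)
# With the flag enabled, each value is a nested dict along the lines of
# {"rouge1": {"precision": ..., "recall": ..., "fmeasure": ...}, ...}
# rather than a flat float, which is what the averaging in finetune.py later trips over.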

This throws the following error:

Validation sanity check: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:01<00:00,  1.67s/it]
Traceback (most recent call last):
  File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 443, in <module>
    main(args)
  File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 418, in main
    logger=logger,
  File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/lightning_base.py", line 389, in generic_train
    trainer.fit(model)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit
    results = self.accelerator_backend.train()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 54, in train
    results = self.train_or_test()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 68, in train_or_test
    results = self.trainer.train()
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 462, in train
    self.run_sanity_check(self.get_model())
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 650, in run_sanity_check
    _, eval_results = self.run_evaluation(test_mode=False, max_batches=self.num_sanity_val_batches)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 597, in run_evaluation
    num_dataloaders=len(dataloaders)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 196, in evaluation_epoch_end
    deprecated_results = self.__run_eval_epoch_end(num_dataloaders, using_eval_result)
  File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/evaluation_loop.py", line 247, in __run_eval_epoch_end
    eval_results = model.validation_epoch_end(eval_results)
  File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in validation_epoch_end
    k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
  File "/content/drive/My Drive/MAGMA: Summarization/seq2seq/finetune.py", line 190, in <dictcomp>
    k: np.array([x[k] for x in outputs]).mean() for k in self.metric_names + ["gen_time", "gen_len"]
  File "/usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py", line 163, in _mean
    ret = ret / rcount
TypeError: unsupported operand type(s) for /: 'dict' and 'int'

From my understanding, self.metric_names should be a list.
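
My guess at the cause: with return_precision_and_recall=True every entry in outputs holds a nested dict per ROUGE key rather than a float, so np.array([x[k] for x in outputs]).mean() ends up dividing a dict by an int. A possible workaround (untested, and the helper below is hypothetical, not part of finetune.py) would be to flatten the nested scores before the averaging in validation_epoch_end:

# Hypothetical helper: flatten {"rouge1": {"precision": p, "recall": r, "fmeasure": f}, ...}
# into scalar keys ("rouge1_precision", ...) so the existing np.mean() logic sees plain floats.
def flatten_metrics(metrics: dict) -> dict:
    flat = {}
    for name, value in metrics.items():
        if isinstance(value, dict):
            for sub_name, sub_value in value.items():
                flat[f"{name}_{sub_name}"] = sub_value
        else:
            flat[name] = value
    return flat

self.metric_names would then also have to list the flattened keys for the dict comprehension at finetune.py line 190 to pick them up.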

sgugger commented 3 years ago

Hi there. Please note that this script is not maintained anymore and is provided as is. We only maintain the finetune_trainer.py script now.

marcoabrate commented 3 years ago

Ok, I will switch to that one. Thank you