Closed jpcorb20 closed 3 years ago
Thanks! The model checkpoints are available actually. Check here :)
Hope you can provide a PyTorch version of the code.
I might try Hugging Face's TensorFlow-to-PyTorch weight-transfer code in July if nobody else is working on this.
Work has started on this, but we are still a few weeks out.
Just wanted to know when this model will be available
We're a little behind schedule. I'd say 60% by August 1, 90% by Sept 1.
this is awesome.
Very cool! Can it also be evaluated with BERTScore?
Can't wait for this...
Converted torch checkpoints are now available on master if you build from source. Here is a list of available checkpoints. PR: #6340
Usage:
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to tens of thousands of customers."
Please open a new issue if you encounter a bug with the torch checkpoints and assign @sshleifer. For conceptual/how-to questions, ask on discuss.huggingface.co (you can also tag @sshleifer there).
Still TODO:
I assume these checkpoints are based on Mixed & Stochastic models, as opposed to models trained exclusively on either C4 or HugeNews?
Yes!
@sshleifer I am trying this code on Colab but running into the error below. Can you let me know what the issue is?
ImportError: cannot import name 'PegasusForConditionalGeneration'
I'm having the same issue as @chetanambi
I think you need to install from source, it's not part of the latest release. (will be in the next release).
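In case it helps, the usual install-from-source commands (standard pip/git usage, not specific instructions from this thread) look like this:

```shell
# Install transformers from the current master branch (adjust paths as needed)
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install -e .
```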
@sshleifer: for the model model_name = 'google/pegasus-cnn_dailymail', I encountered this error when running:
translated = model.generate(**batch)
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
@yxyzzz can you make a new issue and follow the bug-report template. I can't reproduce based on what you've provided. Thanks!
I think you need to install from source, it's not part of the latest release. (will be in the next release).
Could you please let me know how to do this. Thanks!!
@chetanambi The instructions are provided here
@sshleifer
I installed transformers from source using the current master branch, and I'm experiencing the following issue.
>>> from transformers import PegasusForConditionalGeneration, PegasusTokenizer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/__init__.py", line 21, in <module>
from .configuration_albert import ALBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, AlbertConfig
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_albert.py", line 18, in <module>
from .configuration_utils import PretrainedConfig
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/configuration_utils.py", line 24, in <module>
from .file_utils import CONFIG_NAME, cached_path, hf_bucket_url, is_remote_url
File "/home/ubuntu/env5/lib/python3.6/site-packages/transformers/file_utils.py", line 32, in <module>
from .utils import logging
ModuleNotFoundError: No module named 'transformers.utils'
Is this a problem with the current master? How many commits do I need to roll back to run PEGASUS successfully before the September release?
Thank you in advance for the info!
master fixed by #6754 .
@sshleifer
(1) I confirm that master is working now, so I was able to run PEGASUS successfully.
(2) Is there any way to control the length of the resulting summary made by PEGASUS? I would like to generate longer summaries.
@andrei-volkau
You can (1) fine-tune PEGASUS on a customized dataset that has longer summaries, or (2) tune the hyper-parameter beam_alpha, which can lead to slightly longer or shorter summaries. beam_alpha is called length_penalty in this repo.
Note that length_penalty is named confusingly (#4915): a higher length_penalty will result in longer generations, and a lower length_penalty will result in shorter generations.
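To make the length_penalty behavior concrete, here is a minimal sketch (my own illustration, not the transformers implementation) of the length normalization beam search applies: the sum of token log-probabilities divided by length ** length_penalty.

```python
# Minimal sketch (not transformers code) of beam-score length normalization:
# score = sum(token log-probs) / len ** length_penalty.
# Log-probs are negative, so dividing by a larger len ** penalty makes long
# hypotheses less negative, which is why raising length_penalty lengthens output.

def beam_score(token_logprobs, length_penalty):
    return sum(token_logprobs) / (len(token_logprobs) ** length_penalty)

short = [-0.5] * 5    # 5-token hypothesis
long_ = [-0.5] * 10   # 10-token hypothesis, same per-token log-prob

print(beam_score(short, 2.0), beam_score(long_, 2.0))  # -0.1 -0.05: long wins
print(beam_score(short, 0.5), beam_score(long_, 0.5))  # short wins
```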
Is there a short finetuning example somewhere?
Nothing short. Finetuning with examples/seq2seq/finetune.py via https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune_pegasus_xsum.sh is almost ready (it will be ready after #6654). To use it, you should read the README.md there, which covers how to format your data.
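As a hedged sketch of the data layout that README describes (line-aligned source/target files; the file names come from the seq2seq examples, the toy contents are my own):

```python
# Toy sketch of the examples/seq2seq data layout: a directory containing
# {train,val,test}.source and {train,val,test}.target, one example per line,
# where line i of *.target is the reference summary for line i of *.source.
import os
import tempfile

data_dir = tempfile.mkdtemp()
docs = ["first long document to summarize", "second long document to summarize"]
summaries = ["first summary", "second summary"]

for split in ("train", "val", "test"):
    with open(os.path.join(data_dir, f"{split}.source"), "w") as f:
        f.write("\n".join(docs) + "\n")
    with open(os.path.join(data_dir, f"{split}.target"), "w") as f:
        f.write("\n".join(summaries) + "\n")

print(sorted(os.listdir(data_dir)))
```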
@chetanambi The instructions are provided here
I was able to run the models successfully. During summarization I would like to use a different beam size. How can I do this? Thanks!
Interesting: when I ran the example from the documentation (copied below),
I got the output: California's largest electricity provider has turned off power to hundreds of thousands of customers.
Whereas the assertion expects: California's largest electricity provider has turned off power to tens of thousands of customers.
Could someone shed light on why this might be the case, and which one is the 'correct' output? I'm certain I didn't change anything.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest').to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to tens of thousands of customers."
The docs were wrong; the code is right.
Update: I fixed the docs.
@sshleifer I am trying to implement this on a machine that is not connected to the internet, so I will have to download the model (e.g. reddit-tifu) and pass the location to from_pretrained. Could you please suggest which files I need to download? Appreciate your help.
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-reddit_tifu")
model = AutoModelWithLMHead.from_pretrained("google/pegasus-reddit_tifu")
You can figure that out on your machine with internet by calling
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("google/pegasus-reddit_tifu")
model = AutoModelWithLMHead.from_pretrained("google/pegasus-reddit_tifu")
model.save_pretrained('local_pegasus')
tokenizer.save_pretrained('local_pegasus')
Should contain ['config.json', 'pytorch_model.bin', 'tokenizer_config.json', 'special_tokens_map.json', 'spiece.model']
Thanks @sshleifer, I was able to figure it out by looking at the implementation of the from_pretrained method, and I have implemented it successfully now. Thanks!
Thanks @sshleifer for all of your efforts on this. Your & HF's work is such a big win for the NLP community, I can't thank you enough.
Out of curiosity, any sense for when TF2.0 support may go live?
Thanks. I don't have a great guess, but it will be more than a few weeks. Feel free to tinker with #5411.
Our new TensorFlow maven @jplu is trying to make some big API improvements, so I am waiting for those to settle before adding TF support for (Bart, Pegasus, Marian, mBART) all in one go.
🌟 New model addition
Model description
https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html?m=1
https://arxiv.org/abs/1912.08777
Abstract: Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization. However, pre-training objectives tailored for abstractive text summarization have not been explored. Furthermore, there is a lack of systematic evaluation across diverse domains. In this work, we propose pre-training large Transformer-based encoder-decoder models on massive text corpora with a new self-supervised objective. In PEGASUS, important sentences are removed/masked from an input document and are generated together as one output sequence from the remaining sentences, similar to an extractive summary. We evaluated our best PEGASUS model on 12 downstream summarization tasks spanning news, science, stories, instructions, emails, patents, and legislative bills. Experiments demonstrate it achieves state-of-the-art performance on all 12 downstream datasets measured by ROUGE scores. Our model also shows surprising performance on low-resource summarization, surpassing previous state-of-the-art results on 6 datasets with only 1000 examples. Finally we validated our results using human evaluation and show that our model summaries achieve human performance on multiple datasets.
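The gap-sentence objective described above can be illustrated with a toy sketch (my own simplification, not the paper's implementation — the paper scores sentence importance with ROUGE against the rest of the document; here plain word overlap stands in for that):

```python
# Toy sketch of PEGASUS-style gap-sentence generation: score each sentence by
# word overlap with the rest of the document, mask the top-scoring ones in the
# source, and concatenate them as the target the decoder must generate.

def gap_sentence_example(sentences, n_masked=1, mask_token="<mask>"):
    def overlap(i):
        words = set(sentences[i].lower().split())
        rest = {w for j, s in enumerate(sentences) if j != i
                for w in s.lower().split()}
        return len(words & rest) / max(len(words), 1)

    # rank sentences by overlap with the rest of the document ("importance")
    ranked = sorted(range(len(sentences)), key=overlap, reverse=True)
    masked = set(ranked[:n_masked])
    source = " ".join(mask_token if i in masked else s
                      for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(masked))
    return source, target

sentences = [
    "PG&E scheduled the blackouts in response to forecasts for high winds.",
    "The aim is to reduce the risk of wildfires.",
    "Nearly 800 thousand customers were scheduled to be affected.",
]
src, tgt = gap_sentence_example(sentences)
print(src)
print(tgt)
```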
Open source status