Hi again,
I was fine-tuning on some data with **--max-source-positions 1024 --max-target-positions 1024**.
But it paused at **epoch 001: 8%**
and showed: **WARNING: overflow detected, setting…
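For context (my own reading of this warning, not an official explanation): messages like this typically come from FP16 training with dynamic loss scaling — the trainer detects Inf/NaN gradients, skips that update, and lowers the loss scale, which is usually recoverable rather than a hang. A minimal sketch of that mechanism, with all function and parameter names hypothetical:

```python
import math

def dynamic_loss_scale_step(grads, scale, factor=2.0, min_scale=1e-4):
    """One optimizer step under dynamic loss scaling (simplified sketch).

    Gradients are assumed to have been computed on loss * scale, so they
    are unscaled before use. On overflow (inf/nan) the step is skipped
    and the scale is reduced -- the situation a warning like
    "overflow detected, setting loss scale to ..." reports.
    """
    if any(math.isinf(g) or math.isnan(g) for g in grads):
        new_scale = max(scale / factor, min_scale)
        print(f"WARNING: overflow detected, setting loss scale to {new_scale}")
        return None, new_scale  # skip this update, retry with smaller scale
    unscaled = [g / scale for g in grads]
    return unscaled, scale
```

An occasional overflow just shrinks the scale and continues; only a scale that keeps collapsing toward the minimum indicates a real training problem.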
-
The BART documentation for abstractive summarization sets max_tokens to 2048 (I think that is the maximum number of tokens a batch can hold). So what is the maximum length of the input?
If I want to…
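From what I understand (an assumption on my part, not an official definition), a token-based batch limit like `max_tokens` caps the total tokens per batch rather than the batch size in examples, so the number of examples per batch varies with sequence length. A toy sketch of that kind of batching:

```python
def batch_by_tokens(lengths, max_tokens):
    """Greedily group example lengths (in tokens) into batches whose
    total token count stays within max_tokens, preserving order.
    Illustrative only -- real trainers also account for padding."""
    batches, current, current_tokens = [], [], 0
    for i, n in enumerate(lengths):
        if n > max_tokens:
            raise ValueError(f"example {i} ({n} tokens) exceeds max_tokens")
        if current and current_tokens + n > max_tokens:
            batches.append(current)
            current, current_tokens = [], 0
        current.append(i)  # store the example index
        current_tokens += n
    if current:
        batches.append(current)
    return batches
```

Under this reading, max_tokens=2048 with source positions capped at 1024 would let a batch hold two full-length inputs, or more shorter ones.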
-
**Describe**
Model I am using (UniLM, MiniLM, LayoutLM ...):
Team,
1. Has the pre-trained MiniLM for Question Generation been released? If not, can you please share it and, if feasible, also shar…
-
https://arxiv.org/pdf/1912.08777.pdf
-
# 🚀 Feature request
While abstractive text summarization with T5 and BART already achieves impressive results, it would be great to add support for state-of-the-art **extractive** text summarizati…
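To illustrate the distinction the request draws (my own toy example, not any proposed implementation): an extractive summarizer selects sentences verbatim from the document instead of generating new text. A deliberately simple frequency-based sketch, with all names hypothetical:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Toy extractive summarizer: score each sentence by the document-wide
    frequency of its words and keep the top-k sentences in original order.
    Real extractive models (e.g. BertSumExt) learn these scores instead."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    freq = Counter(re.findall(r"\w+", text.lower()))
    scored = [(sum(freq[w] for w in re.findall(r"\w+", s.lower())), i)
              for i, s in enumerate(sentences)]
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return " ".join(sentences[i] for _, i in top)
```

Because the output is copied from the source, extractive summaries are guaranteed to be faithful to the input, which is part of their appeal next to abstractive models.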
-
My hardware doesn't support training this model, and Google Colab is giving memory issues. Could anyone please share a Drive link with the model, or anything similar?
-
The link for this paper in the text summarization section is wrong:
A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents - Arman Cohan (2018)
Clicking it leads to GENERATING WIKIPEDIA BY SUMMARIZING LONG SEQUENCES - Peter J. Liu (2018) instead.
-
Hello, I am new to the whole NLP world and to PyTorch. I am trying to learn the concepts, and that is taking some time for a rookie. I have a project to finish, and I want to implement transformers & BERT on m…
-
# 🐛 Bug
## Information
Model I am using: Bert abs (https://github.com/huggingface/transformers/tree/master/examples/seq2seq/bertabs)
-> remi/bertabs-finetuned-cnndm-extractive-abstractive-summa…
-
Hey, if I just pass a tf-record with one example containing the features `inputs` and `targets`, evaluation runs, but after it finishes only a text_metrics-2-.dev.txt file is created and the predicti…