-
https://arxiv.org/abs/2004.07159
![image](https://user-images.githubusercontent.com/42434734/98514096-4b6bb000-22ac-11eb-9eb7-c9ccd9f29ef8.png)
Abstract
: This work presents PALM with a nov…
-
I'm trying to reproduce the results from the [Lightweight convolution paper](https://arxiv.org/pdf/1901.10430.pdf) on abstractive summarization. I'm looking for a script or the steps to process t…
-
It always shows this error. Could you help me solve it?
File "summarization_model.py", line 24, in
vocab,embd = loadGloVe(filename)
File "summarization_model.py", line 16, in loadGloVe
…
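The truncated traceback above points at a `loadGloVe` helper failing while reading an embedding file. The exact code isn't shown, so as a point of comparison, here is a minimal sketch of such a parser, assuming the standard GloVe text layout (`word v1 v2 … vN`, space-separated); the function body is an illustration, not the author's actual code:

```python
import numpy as np

def loadGloVe(filename):
    """Parse a GloVe text file into a vocab list and an embedding matrix.

    Assumes each line is `word v1 v2 ... vN` (space-separated floats).
    """
    vocab, rows = [], []
    with open(filename, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            vocab.append(parts[0])
            rows.append([float(x) for x in parts[1:]])
    embd = np.asarray(rows, dtype=np.float32)  # shape: (len(vocab), dim)
    return vocab, embd
```

A common source of errors here is a mismatched encoding or a header line (word2vec-style files start with `count dim`), so checking the first line of the file is a good first step.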
-
# ❓ Questions & Help
## Details
Hey, I use EncoderDecoderModel for abstractive summarization. I load the bert2bert model like this:
model=EncoderDecoderModel.from_encoder_decoder_pretraine…
yhznb updated 3 years ago
-
Will this functionality be added in the near future?
-
Hi @HHousen -- we have talked in a previous issue -- the good news is that I actually got the longformer training working! But now I'm trying to speed up training by using multiple GPUs. However, I ge…
moyid updated 3 years ago
-
## ❓ Questions and Help
#### I would like to implement the PG model (as well as other models) but can't find any examples of how to go about this using fairseq.
#### What have you tried?
I…
-
# 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If us…
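The template above asks for timing numbers and hardware details. For a quick wall-clock measurement of any callable, a plain-stdlib harness like the following works (this is a generic sketch, not the `transformers` benchmark utilities; the helper name `benchmark` is illustrative):

```python
import statistics
import time

def benchmark(fn, *args, warmup=3, repeat=10):
    """Time `fn(*args)` over several runs; return (mean_ms, stdev_ms)."""
    for _ in range(warmup):          # warm caches before measuring
        fn(*args)
    samples = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1e3)
    return statistics.mean(samples), statistics.stdev(samples)

mean_ms, std_ms = benchmark(sorted, list(range(10_000)))
print(f"sorted(10k ints): {mean_ms:.3f} ms ± {std_ms:.3f} ms")
```

Reporting a mean with a spread over several runs (after warmup) is what makes CPU/GPU comparisons across set-ups meaningful.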
-
Could you share the code for the `register_task` of abstractive summarization? My results are not good, and I don't know what the problem is.
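fairseq registers custom tasks through a decorator that stores the task class in a global registry. The sketch below mimics that registration pattern in plain Python to show its shape; the registry dict, decorator body, and `AbstractiveSummarizationTask` class are illustrative stand-ins, not fairseq's actual internals (in fairseq itself you would import `register_task` from `fairseq.tasks`):

```python
TASK_REGISTRY = {}

def register_task(name):
    """Decorator that records a task class under `name` (mirrors fairseq's pattern)."""
    def wrapper(cls):
        if name in TASK_REGISTRY:
            raise ValueError(f"task {name!r} already registered")
        TASK_REGISTRY[name] = cls
        return cls
    return wrapper

@register_task("abstractive_summarization")
class AbstractiveSummarizationTask:
    """Hypothetical task: holds args; a real fairseq task also builds datasets/dicts."""
    def __init__(self, args):
        self.args = args
```

The key requirement is that the module containing the decorated class is imported before training starts, so the name is present in the registry when the CLI looks it up.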
-
I tested the abstractive summarization pre-trained model using the source under transformers/examples/summarization/bertabs/...
My dataset is CNN & Daily Mail, which contains about 30 thousand docs.
Howev…