-
### Subject of the feature
Add an implementation of `TextRank` built with `retext`.
TextRank Paper: https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf
Some sample implementations:
…
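For context, the core of the ranking step can be sketched in a few lines of plain Python (the feature itself asks for a `retext`/JavaScript implementation, so this is only an illustration; the similarity measure is the word-overlap formula from the TextRank paper, and all names here are illustrative):

```python
import math
import re

def textrank_sentences(text, d=0.85, iterations=50):
    """Rank sentences with TextRank: build a sentence-similarity
    graph and run weighted PageRank over it."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]
    n = len(sentences)
    # Edge weight from the paper: |shared words| / (log|Si| + log|Sj|)
    sim = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and len(words[i]) > 1 and len(words[j]) > 1:
                overlap = len(words[i] & words[j])
                sim[i][j] = overlap / (math.log(len(words[i])) + math.log(len(words[j])))
    # Power iteration (PageRank with damping factor d)
    scores = [1.0] * n
    for _ in range(iterations):
        new = []
        for i in range(n):
            rank = 0.0
            for j in range(n):
                if sim[j][i] > 0:
                    out = sum(sim[j])
                    if out > 0:
                        rank += sim[j][i] / out * scores[j]
            new.append((1 - d) + d * rank)
        scores = new
    # Highest-scoring sentences first
    return sorted(zip(scores, sentences), reverse=True)
```

A retext plugin would follow the same structure, with the tokenization delegated to `retext`'s syntax tree instead of the regexes used above.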
-
@sshleifer Is there a distilled BART model (not CNN/XSUM) available?
Thanks!
-
I'm currently following this [notebook](https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing#scrollTo=tLM3niQqhEzP) but instead I'm using `patrickvonplaten/led-large-1…
-
(A continuation of #10149 , since it looks like it's a broader issue:)
It looks like seq2seq has changed in the past week, and now gives out-of-memory errors for @stas00's impressive recent DeepSp…
-
The `testing.rst` documentation file makes use of the `autosummary` directive: https://github.com/enthought/traits/blame/e7adc9671d24f6b2739d0170643ed547467b0055/docs/source/traits_user_manual/testing…
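For reference, a typical use of the directive in an `.rst` file looks like the following (module and function names here are hypothetical, not taken from the file in question):

```rst
.. autosummary::
   :toctree: generated/

   mymodule.my_function
   mymodule.MyClass
```

Sphinx then generates a summary table and, with `:toctree:`, stub pages for each listed object.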
-
**Describe the bug**
When trying to load the `roberta-base-openai-detector` flavor of RoBERTa with the `num_labels` argument, a `RuntimeError` is raised.
**To Reproduce**
Steps to reproduce the behavior:
```
fr…
-
## Environment info
- `transformers` version: 4.5.0.dev0
- deepspeed version: 0.3.13
- Platform: Linux-4.15.0-66-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch vers…
-
I have succeeded in creating a GUI for a text summarization application using BeeWare; the summarization runs on a computer that has Python 3.7.5 installed (the version I am using), but wh…
-
# 🚀 Feature request
I'd like to use bigbird sparse attention in a decoder. Shouldn't that be feasible if we apply a causal mask [here](https://github.com/huggingface/transformers/blob/master/src/transfor…
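To illustrate what the mask would have to enforce, here is a toy sketch in plain Python (not the transformers codebase, and independent of the sparse-attention blocks) of causal masking applied to a matrix of attention scores before the softmax:

```python
import math

def causal_attention_weights(scores):
    """Apply a causal (lower-triangular) mask before softmax:
    position i may only attend to positions j <= i."""
    n = len(scores)
    weights = []
    for i, row in enumerate(scores):
        # Mask out future positions with -inf so softmax assigns them zero weight
        masked = [row[j] if j <= i else float("-inf") for j in range(n)]
        m = max(masked)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in masked]
        total = sum(exps)
        weights.append([e / total for e in exps])
    return weights
```

In a sparse-attention setting the same constraint would additionally have to be applied inside each attended block, so the question of whether the existing block layout permits that is exactly what this feature request is about.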
-
I was wondering if there was any way to fine-tune the
`patrickvonplaten/longformer2roberta-cnn_dailymail-fp16` model instead of `patrickvonplaten/led-large-16384-pubmed`. When I tried fine-tuning it …