-
I tested the LSA, KL, TextRank, LexRank, and SumBasic summarizers on a document of roughly 10,000 words that spans a wide range of internal topics. I tested them on the document as-is and also with its sentences pars…
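The post doesn't name the library, but those five summarizer names match the modules in `sumy`, so here is a minimal sketch, assuming `sumy`, of running all five over the same plain-text document (the file name is a placeholder):

```python
# Minimal sketch, assuming the sumy library; Tokenizer("english") relies on
# nltk's punkt data (nltk.download("punkt")).
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer
from sumy.summarizers.kl import KLSummarizer
from sumy.summarizers.text_rank import TextRankSummarizer
from sumy.summarizers.lex_rank import LexRankSummarizer
from sumy.summarizers.sum_basic import SumBasicSummarizer

with open("document.txt") as f:  # hypothetical input file
    parser = PlaintextParser.from_string(f.read(), Tokenizer("english"))

summarizers = {
    "LSA": LsaSummarizer(),
    "KL": KLSummarizer(),
    "TextRank": TextRankSummarizer(),
    "LexRank": LexRankSummarizer(),
    "SumBasic": SumBasicSummarizer(),
}

for name, summarizer in summarizers.items():
    print(f"--- {name} ---")
    # Keep the 10 highest-ranked sentences from the parsed document.
    for sentence in summarizer(parser.document, 10):
        print(sentence)
```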
-
- Repository: https://github.com/sbcblab/geva
- [x] I understand that by submitting my package to _Bioconductor_, the package source and all review commentary are visible to the general publi…
-
# 🐛 Bug
https://github.com/huggingface/transformers/tree/master/examples/summarization/bertabs
## Information
The BertAbs ROUGE-1/2 F1 evaluation numbers I am getting are much lower than in their …
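As a sanity check independent of the example script's built-in scoring, here is a minimal sketch of computing ROUGE-1/2 F1 with Google's `rouge-score` package; this is an assumption, the bertabs example may use a different ROUGE implementation, and that alone can shift the reported numbers.

```python
# Sketch: independent ROUGE-1/2 F1 check, assuming the rouge-score package
# (pip install rouge-score). The reference and candidate strings are placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

reference = "the cat sat on the mat ."        # gold summary (placeholder)
candidate = "a cat was sitting on the mat ."  # model output (placeholder)

scores = scorer.score(reference, candidate)
print("ROUGE-1 F1:", scores["rouge1"].fmeasure)
print("ROUGE-2 F1:", scores["rouge2"].fmeasure)
```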
-
I am trying to find the **Marian version used to train the Marian MT transformers**. This would help me understand the **benchmarks for translation times**. I see that multiple papers mention various benchmark…
-
# ❓ Questions & Help
I have been attempting to build an encoder-decoder, sequence-to-sequence transformer model with various models. For the most part, I have been using BERT (bert-base-case…
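For reference, a minimal sketch of wiring two BERT checkpoints into a seq2seq model with `transformers.EncoderDecoderModel`, assuming a transformers release that ships this class; the input sentence is a placeholder.

```python
# Minimal sketch: tie two bert-base-cased checkpoints into one seq2seq model,
# assuming a transformers release that includes EncoderDecoderModel.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased"  # encoder checkpoint, decoder checkpoint
)

# The decoder needs explicit start/pad token ids before generation.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

inputs = tokenizer("An example input sentence.", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Without fine-tuning, the generated text will be mostly noise; the sketch only shows how the encoder and decoder are tied together.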
-
- `transformers` version: 3.0.2
- Platform:
- Python version: 3.6
- PyTorch version (GPU?): 1.4
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Single GPU
#…
-
Hi, I want to fine-tune an encoder-decoder model on a parallel dataset (something like translation), and I'm not sure what I should do. I read [this](https://medium.com/huggingface/encoder-decode…
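This is not the blog post's exact recipe, just a hedged sketch of a single training step for a bert2bert `EncoderDecoderModel` on one source/target pair, assuming a recent transformers release where `decoder_input_ids` are created automatically from `labels` (older releases need them passed explicitly); the texts, lengths, and optimizer settings are placeholders.

```python
# Hedged sketch of one fine-tuning step for a bert2bert EncoderDecoderModel.
# Assumes a recent transformers release that builds decoder_input_ids from labels.
import torch
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

src = tokenizer("Source sentence.", return_tensors="pt",
                padding="max_length", truncation=True, max_length=32)
tgt = tokenizer("Target sentence.", return_tensors="pt",
                padding="max_length", truncation=True, max_length=32)

labels = tgt.input_ids.clone()
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(input_ids=src.input_ids,
                attention_mask=src.attention_mask,
                labels=labels)
outputs.loss.backward()
optimizer.step()
```

In practice you would wrap this in a DataLoader over the parallel corpus (or use `Seq2SeqTrainer`); the single pair above is only to show which tensors the model expects.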
-
This is based on discussion on:
https://discuss.elastic.co/t/collecting-custom-metrics-for-vsphere-module/100278
- Version: Metricbeat 6.0.0-beta2
- Metricbeat Module: vSphere
- Source OS: CentO…
-
Currently, the BART model trained on the CNN dataset is generating summaries that contain new nouns which are not present in the input text.
How can I control the randomness of these summaries? Is there any …
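One common knob, sketched below, is to make decoding deterministic and constrained through `generate()` arguments; turning off sampling removes run-to-run randomness, though it does not by itself guarantee that the summary stays faithful to the input. The checkpoint name and parameter values are assumptions for illustration.

```python
# Sketch: deterministic beam-search decoding with facebook/bart-large-cnn.
# do_sample=False removes sampling randomness; num_beams and the length/repetition
# constraints further restrict what the decoder can produce.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

article = "Text of the article to summarize ..."  # placeholder input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)

summary_ids = model.generate(
    inputs.input_ids,
    num_beams=4,              # beam search instead of sampling
    do_sample=False,          # deterministic output
    max_length=142,
    min_length=56,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```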
-
After essential data entry implementation and testing, we will freeze data entry in the current spreadsheet, import all current data to bugsigdb.org, then resume all data entry there. We need to plan …