-
### Name
NYUDepth
### Paper
http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf
### Data
https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html
### Motivation
Depth estim…
-
Paper reading on the following related work. Searching for usable tutorials & GitHub repos for fine-tuning.
Text Generation Papers
--------------------------
http://rowanzellers.com/advice/
https://a…
-
Hi,
I created the environment with:
```
conda env create --file environment.yml
```
My environment.yml:
```
name: transformersum
channels:
  - conda-forge
  - pytorch
dependencies:
  - p…
```
-
Hello!
Thank you for releasing the code as open source. It's very helpful for studying abstractive summarization.
I have two questions:
1) Does **model_ranking.bin** mean **BRIO-Ctr** …
-
## Adding a Dataset
- **Name:** MeQSum
- **Description:** Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language …
-
Hi, thanks for your great work. I am wondering whether you have tried this general idea on other NLG tasks such as dialogue or NMT. Hoping to get some insights from you!
-
The model prediction seems to have a bug: it does not properly merge the sub-words in the output.
For example, this is the output obtained:
```
[[' representation', ' documents', ' mask', 'in…
-
Hi, thanks for your great work! After reading the paper, I have a question about the `marginal-EBM`. Does `marginal-EBM` only receive N candidate translations and rank them? How does the model figu…
-
Go from parser output to multi-sentence generator input
From 20 sentences, experiment with baselines:
- Choose first sentence, generate this as summary
- Choose 2 sentences, transform into multi-s…
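The first two baselines above are lead-n extraction. A minimal sketch, assuming sentences are already segmented by the parser (the function name is my own):

```python
def lead_n_summary(sentences, n=1):
    """Lead-n baseline: use the first n sentences of the document as the summary."""
    return " ".join(sentences[:n])

# Hypothetical parser output for one document.
doc = [
    "The parser emitted twenty sentences for this document.",
    "Each baseline selects a subset of them.",
    "Later stages transform the selection into multi-sentence output.",
]
print(lead_n_summary(doc, n=1))  # first sentence only
print(lead_n_summary(doc, n=2))  # first two sentences joined
```

Despite its simplicity, lead-n is a common reference point for news-style summarization, since important content tends to appear early in the document.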
-
## ❓ Questions and Help
Does BART support more than 1024 tokens at inference time for the summarization task?
For long texts like novels, does BART use all of the input to generate the summary,
or just the fir…