-
Is there any documentation available for this?
-
Hello! I tried to train a Bert2Bert model for QA generation; however, when I try the `generate` function it returns gibberish. I also tried the example code below, and that also generated gibberish…
-
# ❓ Questions & Help
## Details
Hi,
I've trained a bert2bert model to generate answers to different questions.
But after training, the bert2bert model always produces the same encode…
-
Given the huge number of parameters in BERT, I wonder whether it is at all feasible to fine-tune on GPUs without turning to Google Cloud's TPU offerings. Has there been any benchmarking on the current im…
-
I can use run_generation.py to generate a statement by adding context.
But is there a way to do fine-tuning conditioned on the context?
For example, when data of the form "context [SEP] sentence" is input,…
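For intuition, the "context [SEP] sentence" conditioning described above can be sketched in plain Python. The `[SEP]` joining convention is taken from the question itself; the helper name and the example pairs are made up for illustration:

```python
# Hypothetical sketch: build conditional training examples of the form
# "context [SEP] sentence", so a model fine-tuned on them learns to
# produce the sentence given the context. [SEP] mirrors BERT's separator.

def build_conditional_input(context: str, sentence: str, sep_token: str = "[SEP]") -> str:
    """Concatenate a context and a target sentence with a separator token."""
    return f"{context} {sep_token} {sentence}"

# Invented example data, purely illustrative.
examples = [
    ("The weather was stormy all night.", "The ferry stayed in port."),
    ("She studied for weeks.", "She passed the exam easily."),
]

inputs = [build_conditional_input(c, s) for c, s in examples]
print(inputs[0])  # "The weather was stormy all night. [SEP] The ferry stayed in port."
```

A real pipeline would tokenize these strings rather than feed raw text, but the conditioning idea is the same: the model only ever sees the sentence together with its context.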
-
The `BertModel.forward()` method does not expect `lm_labels` or `masked_lm_labels` arguments. Yet, it looks like the `EncoderDecoderModel.forward()` method calls its decoder's `forward()` method w…
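The failure mode above is a general one: a wrapper forwards keyword arguments to an inner method that never declared them, which raises `TypeError`. A minimal, library-agnostic sketch of one possible workaround (this is an illustration of the pattern, not what transformers itself does) filters kwargs against the callee's signature:

```python
import inspect

def call_with_supported_kwargs(fn, *args, **kwargs):
    """Call fn, passing only the keyword arguments its signature accepts.

    Mirrors the mismatch above: a wrapper (the encoder-decoder) forwards
    kwargs like lm_labels to an inner forward() that never declared them.
    """
    params = inspect.signature(fn).parameters
    # If fn accepts **kwargs, everything is supported as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return fn(*args, **kwargs)
    supported = {k: v for k, v in kwargs.items() if k in params}
    return fn(*args, **supported)

# Stand-in for a decoder forward() that takes no label arguments.
def decoder_forward(input_ids, attention_mask=None):
    return {"input_ids": input_ids, "attention_mask": attention_mask}

# lm_labels is silently dropped instead of raising TypeError.
out = call_with_supported_kwargs(decoder_forward, [1, 2, 3], lm_labels=[1, 2, 3])
```

Silently dropping arguments can mask bugs, so in practice one would at least log the discarded keys.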
-
# ❓ Questions & Help
## Details
Hello, I'm trying to use a seq2seq model (such as BART or EncoderDecoderModel (bert2bert)),
and I'm a little bit confused about input_ids, decoder_input_ids, tg…
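The usual relationship between these tensors can be sketched in plain Python: `input_ids` feed the encoder, the labels (targets) are the desired output ids, and `decoder_input_ids` are typically the labels shifted right with a decoder start token. The concrete id values and special tokens below are invented for illustration; a real tokenizer supplies them:

```python
# Toy sketch of seq2seq inputs. All ids are made up for illustration.
DECODER_START_ID = 101  # e.g. [CLS] for a BERT-based decoder (assumption)

def shift_tokens_right(labels, start_id):
    """Build decoder_input_ids: prepend start_id, drop the last label."""
    return [start_id] + labels[:-1]

input_ids = [7, 8, 9, 10]        # source sequence -> encoder
labels = [21, 22, 23, 102]       # target sequence (102 = end token, assumption)
decoder_input_ids = shift_tokens_right(labels, DECODER_START_ID)

# At step t the decoder sees decoder_input_ids[: t + 1] (plus the encoder
# output) and is trained to predict labels[t], i.e. the next target token.
```

In many high-level APIs, passing the labels alone is enough and the right-shift is performed internally, which is exactly why the three names are easy to confuse.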
-
Hi there,
Is it possible to build a model with BERT as the encoder and a Transformer as the decoder?
Thanks.
-
# ❓ Questions & Help
## Details
### How to train models for text generation
Hi everyone,
I am trying to fine-tune an encoder-decoder model on a question-generation task on SQuAD.
Input da…
-
## ❓ Questions & Help
Hi 👋
I'm trying to build a dialogue system that should reply based on a history, a memory (represented as a string), and a confidence score indicating whether the memory content is correct an…