-
Hello,
I am wondering: did you fine-tune BERT as the encoder in your abstractive summarizer and the BERTSUM model, or did you just use the pre-trained model?
Thank you!
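To make the question concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library, of the two options being asked about: fine-tuning the BERT encoder end-to-end versus freezing the pre-trained weights. The model name and learning rate are placeholders.
```python
import torch
from transformers import BertModel

def build_encoder(fine_tune: bool) -> BertModel:
    # Load pre-trained weights either way; `fine_tune` decides whether
    # the encoder's parameters receive gradients during training.
    encoder = BertModel.from_pretrained("bert-base-uncased")
    for p in encoder.parameters():
        p.requires_grad = fine_tune
    return encoder

encoder = build_encoder(fine_tune=True)

# Hand only the trainable parameters to the optimizer; with
# fine_tune=False this would leave just the decoder/head parameters.
optimizer = torch.optim.AdamW(
    (p for p in encoder.parameters() if p.requires_grad), lr=2e-5)
```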
-
Hi @nreimers
I would like to use the BERT-base and BERT-large versions of the cross-encoder trained on MS MARCO. I tried to fine-tune `"cross-encoder/ms-marco-MiniLM-L-12-v2"` on NQ and other standard datase…
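For reference, this is roughly the fine-tuning setup I have in mind, a minimal sketch using the `sentence-transformers` CrossEncoder API; the NQ-style pairs and output path below are placeholders.
```python
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2")

# Placeholder (query, passage) pairs standing in for real NQ training data:
# label 1.0 for a relevant passage, 0.0 for an irrelevant one.
train_samples = [
    InputExample(texts=["who wrote hamlet",
                        "Hamlet was written by William Shakespeare."],
                 label=1.0),
    InputExample(texts=["who wrote hamlet",
                        "The Eiffel Tower is in Paris."],
                 label=0.0),
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)

model.fit(
    train_dataloader=train_dataloader,
    epochs=1,
    warmup_steps=100,
    output_path="ms-marco-minilm-finetuned-nq",  # placeholder path
)
```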
-
**Issue by [ThangPM](https://github.com/ThangPM)**
_Saturday Jun 27, 2020 at 16:53 GMT_
_Originally opened as https://github.com/nyu-mll/jiant/issues/1099_
----
Hello,
I am trying to reproduce r…
-
I have done the fine-tuning for text classification and got a good result. Now I want to use the model to classify text. I wrote some code around the Estimator and found that it restores the model on every call. I don't ge…
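In case it helps to show the pattern I am after, here is a minimal sketch, not my exact code, of the usual workaround: feeding `Estimator.predict` from a never-ending generator so the checkpoint is restored once and the same session serves every request. `estimator` and `text_to_features` are hypothetical stand-ins for the trained Estimator and the feature-conversion helper.
```python
import queue
import tensorflow as tf

request_queue = queue.Queue()

def feature_generator():
    while True:
        # Blocks until the next example is pushed by classify() below.
        yield request_queue.get()

def serving_input_fn():
    # The types/shapes must match your real feature spec; a single
    # int32 `input_ids` tensor is shown purely as an illustration.
    ds = tf.data.Dataset.from_generator(
        feature_generator,
        output_types={"input_ids": tf.int32},
        output_shapes={"input_ids": tf.TensorShape([None])},
    )
    return ds.batch(1)

# The model is restored once here, not on every call.
predictions = estimator.predict(input_fn=serving_input_fn)

def classify(text):
    request_queue.put(text_to_features(text))  # hypothetical helper
    return next(predictions)
```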
-
Hi,
I'm trying to fine-tune BERT using [Bert fine-tuning](https://www.kaggle.com/yuval6967/toxic-bert-plain-vanila/log).
My problem is: after using apex, the GPU memory usage is reduced, but…
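For reference, this is roughly my setup, a minimal sketch of the apex mixed-precision initialization, assuming a standard `transformers` BERT classifier and one hypothetical training step on random tensors.
```python
import torch
from apex import amp
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# opt_level="O1" patches eligible ops to fp16, which is what reduces
# GPU memory usage.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

# One hypothetical training step on random data.
input_ids = torch.randint(0, 30522, (8, 128)).cuda()
labels = torch.randint(0, 2, (8,)).cuda()

outputs = model(input_ids=input_ids, labels=labels)
loss = outputs[0]

optimizer.zero_grad()
# Scale the loss so fp16 gradients do not underflow.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```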
-
Hi guys,
I am following the Megatron-LM example to pre-train a BERT model, but I'm getting this error:
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/root/Megatron-LM/pretrai…
```
-
Hi, I have read your new paper "12-in-1: Multi-Task Vision and Language Representation Learning" on arXiv, which uses multi-task fine-tuning to boost the performance of ViLBERT. May I ask whether…
-
How could SBERT (before fine-tuning) work without supervision? I don't quite understand the content of section 4 of the paper.
I mean, is it fair to compare BERT without fine-tuning and SBERT after fi…
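To make the comparison concrete, here is a minimal sketch, assuming the Hugging Face `transformers` library, of the unsupervised setting the question refers to: SBERT-style sentence embeddings obtained by mean-pooling a plain pre-trained BERT, with no fine-tuning involved. The model name and sentences are placeholders.
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    batch = tokenizer(sentences, padding=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)   # zero out padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)    # mean pooling

a, b = embed(["A man is playing guitar.", "Someone plays an instrument."])
print(torch.cosine_similarity(a, b, dim=0))
```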
-
The scope of #2 should be broken down into two parts:
1) Get something running
2) Fine-tuning experiments
To run the fine-tuning experiments:
Preprocessing script: Takes `CleanedRapCorpus` …
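For concreteness, a purely hypothetical sketch of what the preprocessing script could look like, assuming `CleanedRapCorpus` is a directory of plain-text lyric files and the fine-tuning input is one example per line; the requirements are not fully specified here, so paths and the output format are placeholders.
```python
from pathlib import Path

def preprocess(corpus_dir="CleanedRapCorpus", out_file="train.txt"):
    # Collect non-empty lines from every text file in the corpus.
    lines = []
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        for line in path.read_text(encoding="utf-8").splitlines():
            line = line.strip()
            if line:  # drop empty lines
                lines.append(line)
    # Write one training example per line.
    Path(out_file).write_text("\n".join(lines), encoding="utf-8")

if __name__ == "__main__":
    preprocess()
```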
-