-
https://virtual2023.aclweb.org/paper_P5706.html
-
Hello, in your paper we can see:
`BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the …
-
Hi @nreimers
I would like to use BERT-base and BERT-large cross-encoder versions trained on MS MARCO. I tried to fine-tune `"cross-encoder/ms-marco-MiniLM-L-12-v2"` on NQ and other standard datase…
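A minimal sketch of what fine-tuning a cross-encoder on (query, passage, label) pairs looks like with the sentence-transformers library. The sample triples and the output path are made up for illustration; real NQ training pairs would go in their place.

```python
# Hedged sketch (not from the thread): fine-tuning a cross-encoder with
# sentence-transformers on binary relevance pairs. Data below is invented.

def to_examples(records):
    """Turn (query, passage, relevance) triples into the ([query, passage],
    label) pairs that CrossEncoder training expects."""
    return [([q, p], float(y)) for q, p, y in records]

def train(records, base="cross-encoder/ms-marco-MiniLM-L-12-v2",
          out="./ce-nq-finetuned"):  # hypothetical output directory
    # Heavy imports kept local so the data-prep helper stays dependency-free.
    from torch.utils.data import DataLoader
    from sentence_transformers import CrossEncoder, InputExample

    samples = [InputExample(texts=t, label=y) for t, y in to_examples(records)]
    loader = DataLoader(samples, shuffle=True, batch_size=16)
    model = CrossEncoder(base, num_labels=1)  # single relevance-score head
    model.fit(train_dataloader=loader, epochs=1, warmup_steps=100)
    model.save(out)

if __name__ == "__main__":
    demo = [("who wrote hamlet",
             "Hamlet is a tragedy by William Shakespeare.", 1),
            ("who wrote hamlet",
             "The Eiffel Tower is in Paris.", 0)]
    train(demo)
```

Swapping `base` for a BERT-base or BERT-large checkpoint name should work the same way, since `CrossEncoder` accepts any Hugging Face model identifier.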
-
## Jiphyeonjeon Beginner Study Group
- Sunday, July 31, 2022, 8 PM
- Presented by Minkyung Kim, Gyujin Son, and Inkyu Lee
- https://wikidocs.net/book/2155
- https://github.com/ukairia777/tensorflow-nlp-tutorial
-
After I run `run_classifier.py`, I see that the fine-tuned model (1.2 GB) is much bigger than the pretrained BERT model (390 MB).
The model saved by fine-tuning:
![01](https://user-images.…
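The roughly 3x size difference is expected for a TF training checkpoint: Adam stores two moment tensors (`adam_m`, `adam_v`) alongside every weight tensor. A back-of-the-envelope check with an assumed BERT-base parameter count:

```python
# Rough arithmetic (assumed numbers): a training checkpoint holds the weights
# plus Adam's two moment tensors per weight, so it is ~3x the inference-only
# checkpoint.
PARAMS = 110_000_000       # ~BERT-base parameter count (approximate)
BYTES_PER_FLOAT32 = 4

weights_mb = PARAMS * BYTES_PER_FLOAT32 / 1e6   # weights only
train_ckpt_mb = 3 * weights_mb                  # weights + adam_m + adam_v

print(round(weights_mb), round(train_ckpt_mb))  # 440 1320
```

That lines up with the observed sizes: ~390 MB pretrained weights versus ~1.2 GB after fine-tuning. Exporting only the weights (dropping the optimizer slots) recovers the smaller size.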
-
Hello,
I am wondering, did you fine-tune BERT as the encoder in your abstractive summarizer and BERTSUM model, or did you just use the pre-trained model?
Thank you!
-
### Search before asking
- [X] I searched the [issues](https://github.com/IBM/data-prep-lab/issues) and found no similar issues.
### Component
Transforms/code/code_quality
### Feature
Goal is to…
-
**Issue by [ThangPM](https://github.com/ThangPM)**
_Saturday Jun 27, 2020 at 16:53 GMT_
_Originally opened as https://github.com/nyu-mll/jiant/issues/1099_
----
Hello,
I am trying to reproduce r…
-
I have done fine-tuning for text classification and got good results. Now I want to use the model to classify new text. I wrote some code with the Estimator and found that it restores the model on each call. I don't ge…
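The usual fix is to call `predict()` once with a generator-backed `input_fn` and keep its iterator alive, so the checkpoint is restored a single time instead of once per request. A library-agnostic mock of that "load once, predict many" pattern (the lambda below stands in for the Estimator; names are hypothetical):

```python
# Sketch of the "load once, predict many" pattern. With a real TF Estimator
# you would pass a generator-backed input_fn to a single predict() call and
# feed new examples through the generator, instead of calling predict() per
# request (which restores the checkpoint every time).
import queue

class StreamingPredictor:
    def __init__(self, load_model):
        self._model = load_model()   # checkpoint restored exactly once
        self._inbox = queue.Queue()

    def _feed(self):
        # Plays the role of the generator behind input_fn.
        while True:
            item = self._inbox.get()
            if item is None:         # sentinel: stop serving
                return
            yield item

    def submit(self, x):
        self._inbox.put(x)

    def close(self):
        self._inbox.put(None)

    def predict_stream(self):
        # One long-lived prediction stream over the fed examples.
        return (self._model(x) for x in self._feed())

# Usage: count how often the "model" is loaded.
loads = []
pred = StreamingPredictor(lambda: loads.append(1) or (lambda s: s.upper()))
pred.submit("cat")
pred.submit("dog")
pred.close()
print(list(pred.predict_stream()))  # ['CAT', 'DOG']
print(len(loads))                   # 1 -- loaded once for both predictions
```

The key design point carries over to the real Estimator: the expensive setup happens in the constructor (respectively, inside the one `predict()` call), while requests only flow through the queue.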
-
### Author Pages
https://aclanthology.org/people/z/zihan-zhang/
### Type of Author Metadata Correction
- [X] The author page wrongly conflates different people with the same name.
- [ ] This author…