-
Related to **Model/Framework(s)**
[BERT/TensorFlow]
**Describe the bug**
I am trying to download the datasets but am getting the errors below. How can I verify the datasets were complete and pro…
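One common way to confirm that downloads completed intact is to compare SHA-256 digests against a checksum manifest from the data provider. A minimal sketch (the manifest name `checksums.sha256` and the shard file are stand-ins for illustration, not files this repo ships):

```shell
# Create a stand-in for one downloaded shard and a checksum manifest;
# in practice the manifest would come from the dataset provider.
echo "example data" > part-00000.txt
sha256sum part-00000.txt > checksums.sha256

# Re-verify: prints "part-00000.txt: OK" when the file is intact.
sha256sum -c checksums.sha256
```

A mismatch (truncated or corrupted download) makes `sha256sum -c` report `FAILED` and exit non-zero, which is easy to check from a script.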
-
Hi, I'm trying to pretrain BERT Large, so I'm downloading and preprocessing the data.
I have multiple issues:
**1. Downloading:** I'm getting a very low number of valid links in BookCorpus; after down…
-
Hi friends, I need some help. I downloaded Wikipedia and BookCorpus for pre-training the BERT Large model. I followed the same data preprocessing and pre-training steps as create_datasets_from_start.sh and …
-
Is there any model trained on Wikipedia articles?
-
Will you no longer provide a list of BookCorpus download links?
-
Hi, thank you for making this code available!
I can download the en_corpus .dioc files (BookCorpus, Wikipedia) from my PC, but I want them on a server and have to use a bash wget command; when I use …
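For pulling corpus files onto a headless server, a non-interactive wget run over a URL list is one option. A hedged sketch (`urls.txt` and the example URLs are hypothetical placeholders; the flags are standard GNU wget options):

```shell
# urls.txt: one corpus download link per line (hypothetical example URLs).
cat > urls.txt <<'EOF'
https://example.com/corpus/part1.txt
https://example.com/corpus/part2.txt
EOF

# --continue resumes partial downloads, --tries retries transient failures,
# --directory-prefix keeps everything under data/. The fallback message
# covers the case where some links are dead, which the issues above report.
wget --continue --tries=3 --input-file=urls.txt --directory-prefix=data/ \
  || echo "some downloads failed; re-run to resume the rest"
```

Because `--continue` resumes partial files, the same command can simply be re-run until everything retrievable is in place.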
-
Just a general question, I guess, but after inspecting the vocab.txt it doesn't seem to be particularly biomedical (it looks like it's the old one). Is this correct?
I'm trying to use these pre…
-
Is your pretraining data for text summarization Wikipedia + BookCorpus, or something else?
-
## What is this paper about? 👋
With models pretrained on diverse data now forming the foundation of modern NLP, this paper runs experiments to answer whether additional pretraining on data closer to a specific domain or task improves performance further.
## Abstract (Summary) 🕵🏻♂️
Language models pretr…
-
### System information
- **What is the top-level directory of the model you are using**:
skip_thoughts
- **Have I written custom code (as opposed to using a stock example script provided in TensorF…