codertimo / BERT-pytorch

Google AI 2018 BERT pytorch implementation
Apache License 2.0

Making Book Corpus #43

Open codertimo opened 5 years ago

codertimo commented 5 years ago

Let's build the same corpus as the original paper. Please share your tips for downloading and preprocessing the files. It would also be great to share the preprocessed data via Dropbox, Google Drive, etc. A rough preprocessing sketch is below.
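For anyone starting from raw book text, here is a minimal sketch of one way to produce the tab-separated sentence-pair format this repo's README describes (one `sentence1 \t sentence2` pair per line). The file paths and the choice of NLTK's sentence tokenizer are my own assumptions, not part of the repo:

```python
# Hypothetical preprocessing sketch: convert raw book text into the
# "sentence1 \t sentence2" per-line corpus format used by this repo.
# Assumes NLTK is installed and its 'punkt' tokenizer data is available.
import nltk

nltk.download("punkt")  # sentence tokenizer model

def make_corpus(raw_path: str, out_path: str) -> None:
    with open(raw_path, encoding="utf-8") as f:
        text = f.read()

    # Split the raw text into sentences; strip any stray tabs so they
    # don't collide with the field separator.
    sentences = [s.replace("\t", " ").strip() for s in nltk.sent_tokenize(text)]
    sentences = [s for s in sentences if s]

    # Write adjacent sentences as tab-separated pairs, one pair per line.
    with open(out_path, "w", encoding="utf-8") as out:
        for first, second in zip(sentences, sentences[1:]):
            out.write(f"{first}\t{second}\n")

make_corpus("book.txt", "corpus.small.txt")  # paths are placeholders
```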

mapingshuo commented 5 years ago

The original paper (BERT) uses "the concatenation of BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words)." What do you mean by "Movie Corpus"?
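As a quick sanity check when rebuilding the corpus, you can compare word counts against those figures. A rough sketch (the file path is a placeholder, and whitespace splitting only approximates the paper's word counts):

```python
# Rough word-count sanity check against the paper's reported sizes
# (BooksCorpus ~800M words, English Wikipedia ~2,500M words).
def count_words(path: str) -> int:
    total = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            total += len(line.split())  # whitespace-delimited tokens
    return total

print(f"{count_words('bookscorpus.txt') / 1e6:.0f}M words")
```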

codertimo commented 5 years ago

@mapingshuo Sorry, my fault. haha, I wrote that title in 5 seconds :) Thank you!! 👍

mapingshuo commented 5 years ago

That's okay. I am looking for a valid copy of BooksCorpus too.

Henry-E commented 5 years ago

Both GPT and BERT were trained on BooksCorpus. Presumably there's a private copy being passed around. There are some web scrapers out there designed to recreate BooksCorpus, but this duplication of work seems unnecessary. If anyone finds a copy, do let me know!
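For anyone who finds this thread later: a community replica of BooksCorpus was subsequently republished on the Hugging Face Hub. A minimal sketch using the `datasets` library, assuming that replica (dataset id `bookcorpus`) is an acceptable substitute for the original:

```python
# Minimal sketch: download a public BooksCorpus replica via the
# Hugging Face `datasets` library (pip install datasets). Whether this
# replica matches the private copy used for BERT/GPT is not guaranteed.
from datasets import load_dataset

bookcorpus = load_dataset("bookcorpus", split="train")
print(bookcorpus[0]["text"])  # each record holds one sentence of text

# Dump to a plain-text file, one sentence per line, for preprocessing.
with open("bookcorpus.txt", "w", encoding="utf-8") as f:
    for record in bookcorpus:
        f.write(record["text"] + "\n")
```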