stefan-it / turkish-bert

Turkish BERT/DistilBERT, ELECTRA and ConvBERT models

BERTurk Training Dataset Preparation #28

Open kaansonmezoz opened 2 years ago

kaansonmezoz commented 2 years ago

Hello Stefan,

I'm going to train another BERT model with a different pre-training objective from scratch. Then I will compare it with BERTurk and other Turkish pre-trained language models. In order to evaluate the impact of the pre-training task properly, the model should be trained on similar data with similar parameters.

In the README file it is stated that:

The current version of the model is trained on a filtered and sentence segmented version of the Turkish OSCAR corpus, a recent Wikipedia dump, various OPUS corpora and a special corpus provided by Kemal Oflazer.

I've already collected Kemal Oflazer's corpus and the OSCAR corpus. But there are some things I'm curious about. If you can answer them, I will be happy 🙂

  1. Did you apply filtering and sentence segmentation only to the OSCAR corpus, or did you apply it to the others too?
  2. What kind of filtering did you apply? Was it something like removing sentences with fewer than 5 tokens from the corpus?
  3. Did you use only full stops for sentence segmentation?
  4. Do you remember which Wikipedia dump was used?
  5. Which OPUS corpora did you use? There are plenty of datasets in OPUS. There are even datasets from Wikipedia such as WikiMatrix v1, Wikipedia and wikimedia v20210402. Did you use them too?
  6. Did you apply extra pre-processing methods apart from BertTokenizer's?

Also, if you have the public datasets' corpora, do you mind sharing them? It would make things a lot easier for me and save me a lot of trouble 🙂

Thanks in advance 🙂

stefan-it commented 2 years ago

Hi @kaansonmezoz ,

thanks for your interest in our models :hugs:

  1. The complete training corpus was filtered and sentence segmented, basically with:

from nltk.tokenize import sent_tokenize

# `line` is a raw line from the training corpus
for sent in sent_tokenize(line, "turkish"):
  if len(sent.split()) > 5:
    print(sent)

So it is not only applied to the OSCAR subcorpus here (a file-level sketch of this filtering is included after this list).

  2. I used sentences longer than 5 tokens (split on whitespace), see above :)

  3. Not only full stops are considered for sentence segmentation; NLTK takes some more tokens into account as sentence boundaries.

  4. I just looked it up in my "data lake": the trwiki-latest-pages-articles.xml.bz2 dump is 480M in size and has a timestamp of 2 Feb 2020.

  5. I could find the following OPUS-related files:

bible-uedin.txt GNOME.txt JW300.txt  OpenSubtitles.txt  opus.all QED.txt  SETIMES.txt  Tanzil.txt  Tatoeba.txt  TED2013.txt  Wikipedia.txt

With a timestamp of 3 Feb 2020.

  6. For pre-processing (of the pre-training data) the official BERT implementation was used, so all pre-processing steps can be found here: https://github.com/google-research/bert/blob/master/tokenization.py#L161-L182. First a basic tokenization step is done, followed by WordPiece tokenization. I did not add any extra steps (see the tokenizer sketch below).
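
As a rough, file-level sketch of how the filtering and segmentation above could be applied to a raw corpus file; the input and output file names here are just placeholders, and the punkt download is assumed to provide the Turkish Punkt model:

import nltk
from nltk.tokenize import sent_tokenize

# The Turkish Punkt model ships with NLTK's punkt data.
nltk.download("punkt")

# Placeholder file names for illustration.
with open("corpus_raw.txt", encoding="utf-8") as raw_file, \
        open("corpus_filtered.txt", "w", encoding="utf-8") as out_file:
    for line in raw_file:
        for sent in sent_tokenize(line.strip(), "turkish"):
            # Keep only sentences longer than 5 whitespace-separated tokens.
            if len(sent.split()) > 5:
                out_file.write(sent + "\n")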
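
And for point 6, a minimal sketch of the same two-step pre-processing (basic tokenization followed by WordPiece), using the slow Hugging Face BertTokenizer, which reimplements the steps from tokenization.py in pure Python; the checkpoint name dbmdz/bert-base-turkish-cased is assumed to be the published BERTurk model:

from transformers import BertTokenizer

# Slow (pure-Python) tokenizer: runs basic tokenization first, then WordPiece,
# mirroring google-research/bert's tokenization.py.
# The checkpoint name below is assumed to be the published BERTurk model.
tokenizer = BertTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")

print(tokenizer.tokenize("Bu cümle alt kelime parçalarına ayrılır."))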

Please just give me your mail address and I will immediately send you the link to the corpus used for pre-training :hugs:

hazalturkmen commented 2 years ago

Hi @stefan-it, can I get the link to the corpus used for pre-training? Thanks!

stefan-it commented 2 years ago

Hey @hazalturkmen , no problem, just give me an email address where I can contact you :hugs:

hazalturkmen commented 2 years ago

Thanks @stefan-it, here is my email address:

hazalturkmen91@gmail.com

kaansonmezoz commented 2 years ago

@stefan-it Thank you for the detailed explanation. My email is sonmezozkaan@gmail.com 🙂

You are a life saver! ❤️

stefan-it commented 2 years ago

Mails are out :hugs: