jbrry / Irish-BERT

Repository to store helper scripts for creating an Irish BERT model.

Include unused entries in vocabulary of "from scratch" models #41

Closed: jowagner closed this issue 3 years ago

jowagner commented 3 years ago

As discussed in issue #33, having a few unused entries in the vocabulary is a great idea: it makes it easier for users of a model to add extra tokens for fine-tuning. We should do this as well when training our final "from scratch" models. Multilingual BERT provides 99 such entries. We should use the same number of entries and the same ["[unused%d]" % i for i in range(99)] format.
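For concreteness, that comprehension expands like this (quick interactive check; note the endpoints, which come up again below):

>>> entries = ["[unused%d]" % i for i in range(99)]
>>> entries[0], entries[-1], len(entries)
('[unused0]', '[unused98]', 99)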

jbrry commented 3 years ago

Good point. I checked the vocab.txt files produced by wiki-bert-pipeline and I can confirm that they follow the same approach:

head -n 106 vocab.txt

[unused..]
[unused98]
[unused99]
[UNK]
[CLS]
[SEP]
[MASK]
a

alanagiasi commented 3 years ago

> As discussed in issue #33, having a few unused entries in the vocabulary is a great idea: it makes it easier for users of a model to add extra tokens for fine-tuning. We should do this as well when training our final "from scratch" models. Multilingual BERT provides 99 such entries. We should use the same number of entries and the same ["[unused%d]" % i for i in range(99)] format.

Just to note a tiny boundary issue there: the stop argument of range() is exclusive, so to cover 0-99 you need ["[unused%d]" % i for i in range(100)]. Also note that the first token is typically [PAD], followed by the [unused...] entries, followed by [UNK], [CLS], [SEP], [MASK], etc. (sketched below).

Sorry if that seems pedantic to point out; I ran into the opposite in R recently, since its c() function is inclusive :^)
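A minimal sketch of that layout (illustrative only; the exact ordering is whatever the vocabulary tooling emits):

prefix = ['[PAD]']
prefix += ['[unused%d]' % i for i in range(100)]   # unused0 .. unused99, 100 entries
prefix += ['[UNK]', '[CLS]', '[SEP]', '[MASK]']
assert len(prefix) == 105   # the first ordinary token would then sit on line 106 of vocab.txt, consistent with the head -n 106 output above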

jowagner commented 3 years ago

@alanagiasi Your code would produce 100 entries, one more than previous work quoted in https://github.com/jbrry/Irish-BERT/issues/33#issuecomment-734410738.

As to the placement of [PAD] and the other special tokens, I agree it is best to follow what the existing tools do, as deviating may cause problems with other software, including software that we are not currently using.

As to confusion when moving between programming languages, I like to write code in a way that makes things clear even when the reader doesn't know the language-specific details. E.g. instead of the above I would write something like

number_of_entries = 99            # mBERT ships 99 unused entries
list_of_entries = []
for entry_index in range(number_of_entries):
    # entry_index runs from 0 to number_of_entries - 1
    list_of_entries.append('[unused%d]' % entry_index)

where the named count and the explicit loop make the number of entries obvious even to a reader who does not know Python's range() semantics.

alanagiasi commented 3 years ago

@jowagner Yes, it produces 100 entries. I double-checked the vocabulary file on Google Drive and it has 100 entries, i.e. unused0 to unused99. Thanks for the comment you linked earlier: I double-checked mBERT (the HuggingFace implementation) and it has 99 entries, i.e. unused1 to unused99, which agrees with the Chau et al. (2020) paper James cited.
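For reference, this is a quick way to reproduce that check (a sketch assuming the HuggingFace transformers package; the checkpoint name used here is the standard mBERT one):

from transformers import BertTokenizer

# Load the multilingual BERT tokenizer and count its [unused...] entries.
tok = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
unused = [t for t in tok.get_vocab() if t.startswith('[unused')]
print(len(unused))   # reported above as 99 (unused1 .. unused99)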

@jbrry How was the vocab.txt file on Google Drive generated? Do you happen to know if there are options to specify the number of 'unused' tokens, etc.?

jbrry commented 3 years ago

@alanagiasi Yes, the script used to create the vocabulary in wiki-bert-pipeline can be seen here; it populates the unused entries as well as the padding and special tokens: https://github.com/spyysalo/sent2wordpiece/blob/47ba44e4bb4faa50bc617a7da93987f94a934d3f/sent2wordpiece.py
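In outline, such a script just writes a fixed prefix ahead of the learned wordpieces. A hypothetical sketch (write_vocab, n_unused and wordpieces are illustrative names, not taken from sent2wordpiece.py):

def write_vocab(wordpieces, path, n_unused=100):
    # Illustrative only -- see the linked sent2wordpiece.py for the real logic.
    with open(path, 'w', encoding='utf-8') as out:
        out.write('[PAD]\n')                      # padding token first
        for i in range(n_unused):
            out.write('[unused%d]\n' % i)         # unused0 .. unused{n_unused-1}
        for special in ('[UNK]', '[CLS]', '[SEP]', '[MASK]'):
            out.write(special + '\n')             # remaining special tokens
        for piece in wordpieces:
            out.write(piece + '\n')               # learned wordpiece entries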

jowagner commented 3 years ago

Ok, so the answer is yes, we have those unused entries in our "from scratch" models. Nothing to do. Closing.