iPieter / RobBERT

A Dutch RoBERTa-based language model
https://pieter.ai/robbert/
MIT License

Dutch tokenizer behaves unexpectedly #16

Closed sdblanc closed 3 years ago

sdblanc commented 3 years ago

The problem

When running the code below, the tokenizer's output looks strange: some weird characters seem to be introduced by the tokenization, and downstream tasks (e.g. MLM) then perform poorly.

Code:

from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base",do_lower_case=True)
sentence = "ik zie een boom in mijn tuin."
tokenized_text = tokenizer.tokenize(sentence)

Result: ['ik', 'Ġzie', 'Ġeen', 'Ġboom', 'Ġin', 'Ġmijn', 'Ġtuin', '.']

Similar code using the default BERT tokenizer

However, when using essentially the same code with a default BERT tokenizer, the output looks fine.

Code:

from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
sentence = "I work as a motorbike stunt rider - that is, I do tricks on my motorbike at shows."
tokenized_text = tokenizer.tokenize(sentence)

Result: [ 'i', 'work', 'as', 'a', 'motor', '##bi', '##ke', 'stunt', 'rider', '-', 'that', 'is', ',', 'i', 'do', 'tricks', 'on', 'my', 'motor', '##bi', '##ke', 'at', 'shows', '.']

Question

Why does this happen, and how can it be solved? Thanks in advance!

iPieter commented 3 years ago

RobBERT uses a different tokenizer than BERT. Our tokenizer is based on BPE, just like RoBERTa and GPT-2, whereas BERT uses WordPiece. The two tokenization strategies mark word boundaries differently: BPE prefixes a token that starts a new word with Ġ (which encodes the preceding space), while WordPiece marks word-internal continuation pieces with ##. As an illustration, this is the RoBERTa tokenizer. Notice there is no merge token between Ġmotor and bike.

from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", do_lower_case=True)
sentence = "I work as a motorbike stunt rider - that is, I do tricks on my motorbike at shows."
tokenized_text = tokenizer.tokenize(sentence)
Result: ['i', 'Ġwork', 'Ġas', 'Ġa', 'Ġmotor', 'bike', 'Ġstunt', 'Ġrider', 'Ġ-', 'Ġthat', 'Ġis', ',', 'Ġi', 'Ġdo', 'Ġtricks', 'Ġon', 'Ġmy', 'Ġmotor', 'bike', 'Ġat', 'Ġshows', '.']

This is the reason you see those "weird characters", and it does not affect performance at all. But note that our base model currently has no functional MLM head, which makes it mainly useful for finetuning on sequence or token classification tasks. We are working on releasing these additional heads when our paper is published in the Findings of EMNLP next month.
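If you want to convince yourself that the Ġ characters are only byte-level BPE's way of encoding a leading space, you can round-trip the tokens back to text. A minimal sketch using the same pdelobelle/robbert-v2-dutch-base checkpoint and the tokenizer's convert_tokens_to_string method:

Code:

from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
sentence = "ik zie een boom in mijn tuin."

tokens = tokenizer.tokenize(sentence)  # ['ik', 'Ġzie', 'Ġeen', ...]
# The Ġ markers decode back to plain spaces, so no information is lost.
restored = tokenizer.convert_tokens_to_string(tokens)
print(restored)

Result: ik zie een boom in mijn tuin.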

If you want to get started earlier with our MLM head, you can always use our fairseq release.
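For reference, fairseq's RoBERTa hub interface exposes a fill_mask helper that works with such a checkpoint. A minimal sketch, assuming the fairseq release has been downloaded to a local directory (the directory and checkpoint file names below are placeholders, and the BPE/dictionary settings may need adjusting to the files shipped with the release):

Code:

from fairseq.models.roberta import RobertaModel

# Load the RobBERT fairseq checkpoint; the paths below are placeholders.
robbert = RobertaModel.from_pretrained(
    "path/to/robbert-fairseq",      # directory containing the checkpoint and dict
    checkpoint_file="model.pt",
)
robbert.eval()

# fill_mask returns the top-k completions for the <mask> token.
predictions = robbert.fill_mask("ik zie een <mask> in mijn tuin.", topk=3)
print(predictions)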