Wietse de Vries • Andreas van Cranenburgh • Arianna Bisazza • Tommaso Caselli • Gertjan van Noord • Malvina Nissim
BERTje is a Dutch pre-trained BERT model developed at the University of Groningen.
For details, check out our paper on arXiv, the model on the 🤗 Hugging Face model hub and related work on Semantic Scholar.
You can play with BERTje without any training using the following snippet (or use the hosted version by Hugging Face):
```python
from transformers import pipeline

pipe = pipeline('fill-mask', model='GroNLP/bert-base-dutch-cased')
for res in pipe('Ik wou dat ik een [MASK] was.'):
    print(res['sequence'])

# [CLS] Ik wou dat ik een kind was. [SEP]
# [CLS] Ik wou dat ik een mens was. [SEP]
# [CLS] Ik wou dat ik een vrouw was. [SEP]
# [CLS] Ik wou dat ik een man was. [SEP]
# [CLS] Ik wou dat ik een vriend was. [SEP]
```
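If you are curious what the pipeline does under the hood, here is a minimal sketch that queries the masked-language-modeling head directly (assuming PyTorch is installed; the top-5 selection mirrors the pipeline's default behavior):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load BERTje and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModelForMaskedLM.from_pretrained("GroNLP/bert-base-dutch-cased")

# Encode a sentence containing a [MASK] token and find its position
inputs = tokenizer("Ik wou dat ik een [MASK] was.", return_tensors="pt")
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

# Pick the five most probable fillers for the masked position
with torch.no_grad():
    logits = model(**inputs).logits
top_ids = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

The pipeline additionally decodes each candidate back into a full sentence, but the ranking comes from exactly these logits.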
If you want to train your own model based on BERTje, you can load the tokenizer and model with this snippet:
```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")    # PyTorch
model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased")  # TensorFlow
```
That's all! Check out the Transformers documentation for further instructions.
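As an illustration of what fine-tuning looks like, here is a hedged sketch for sequence classification; the two-example sentiment data and label count are hypothetical placeholders, not our actual fine-tuning setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical two-class sentiment data; replace with your own corpus
texts = ["Wat een prachtige film!", "Dit was tijdverspilling."]
labels = torch.tensor([1, 0])

tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "GroNLP/bert-base-dutch-cased", num_labels=2)  # adds a fresh classification head

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One training step: passing labels makes the model compute the loss internally
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
```

In practice you would iterate over mini-batches for several epochs (or use the Trainer API); this only shows the shape of a single update step.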
WARNING: The vocabulary size of BERTje changed in 2021. If you use an older fine-tuned model and experience problems with the GroNLP/bert-base-dutch-cased tokenizer, use the following tokenizer instead:

```python
tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1")  # v1 is the old vocabulary
```
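A quick way to see whether a vocabulary mismatch is the problem is to compare the vocabulary sizes of the two revisions; an older fine-tuned checkpoint's embedding matrix must match the tokenizer it was trained with:

```python
from transformers import AutoTokenizer

# Current (post-2021) vocabulary
tok_new = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
# Old vocabulary, pinned to the v1 revision
tok_old = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1")

# If these differ from your checkpoint's embedding size, pin the matching revision
print(len(tok_new), len(tok_old))
```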
The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were performed after the paper was written. Unlike some other comparisons, the fine-tuning procedure for these benchmarks is identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing the fine-tuning procedure.
More experimental results will be added to this page as they become available. Technical details about how these models were fine-tuned, as well as downloadable fine-tuned checkpoints, will be published later.
All of the tested models are base-sized (12 layers) with cased tokenization.
Headers in the tables below link to the original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated as more fine-tuned models become available.
Named entity recognition:

Model | CoNLL-2002 | SoNaR-1 | spaCy UD LassySmall |
---|---|---|---|
BERTje | 90.24 | 84.93 | 86.10 |
mBERT | 88.61 | 84.19 | 86.77 |
BERT-NL | 85.05 | 80.45 | 81.62 |
RobBERT | 84.72 | 81.98 | 79.84 |
Part-of-speech tagging:

Model | UDv2.5 LassySmall |
---|---|
BERTje | 96.48 |
mBERT | 96.20 |
BERT-NL | 96.10 |
RobBERT | 95.91 |
The recommended way to download the model is through the Transformers library; the model is available on the model hub.
You can manually download the model files here: https://huggingface.co/GroNLP/bert-base-dutch-cased/tree/main
Thanks to Hugging Face for hosting the model files!
The main code used for pretraining data preparation, fine-tuning and probing is given in the appropriate directories. Do not expect the code to be fully functional, complete or documented, since this is research code that was written and collected over the course of several months. Nevertheless, it can be useful as a reference.
Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Please use the following citation if you use BERTje or our fine-tuned models:
@misc{devries2019bertje,
title = {{BERTje}: {A} {Dutch} {BERT} {Model}},
shorttitle = {{BERTje}},
author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and van Noord, Gertjan and Nissim, Malvina},
year = {2019},
month = dec,
howpublished = {arXiv:1912.09582},
url = {http://arxiv.org/abs/1912.09582},
}
Use the following citation if you use anything from the probing classifiers:
@inproceedings{devries2020bertlayers,
title = {What's so special about {BERT}'s layers? {A} closer look at the {NLP} pipeline in monolingual and multilingual models},
author = {de Vries, Wietse and van Cranenburgh, Andreas and Nissim, Malvina},
year = {2020},
booktitle = {Findings of EMNLP},
pages = {4339--4350},
url = {https://www.aclweb.org/anthology/2020.findings-emnlp.389},
}