
🤗 + 📚 dbmdz BERT models

In this repository, the MDZ Digital Library team (dbmdz) at the Bavarian State Library open-sources further BERT models 🎉

Changelog

German BERT

Stats

In addition to the recently released German BERT model by deepset, we provide another German-language model.

The source data for the model consists of a recent Wikipedia dump, EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with a size of 16GB and 2,350,234,427 tokens.

For sentence splitting, we use spaCy. Our preprocessing steps (SentencePiece model for vocab generation) follow those used for training SciBERT. The model was trained with an initial sequence length of 512 subwords for 1.5M steps.
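
To illustrate the sentence-splitting step described above, here is a minimal sketch (assuming spaCy 3.x and a plain rule-based sentencizer; the sample text is hypothetical and this is not our exact preprocessing script):

import spacy

# Lightweight German pipeline with only a rule-based sentencizer (spaCy 3.x)
nlp = spacy.blank("de")
nlp.add_pipe("sentencizer")

text = "Das ist ein Satz. Hier kommt noch einer."

# One sentence per line, as used when preparing the pretraining corpus
for sent in nlp(text).sents:
    print(sent.text.strip())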

This release includes both cased and uncased models.

Model weights

Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

Model Downloads
bert-base-german-dbmdz-cased config.json • pytorch_model.bin • vocab.txt
bert-base-german-dbmdz-uncased config.json • pytorch_model.bin • vocab.txt

Usage

With Transformers >= 2.3 our German BERT models can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")

Results

For results on downstream tasks like NER or PoS tagging, please refer to this repository.

Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the OPUS corpora collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster compared to spaCy). Our cased and uncased models were trained with an initial sequence length of 512 subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the OSCAR corpus. Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.

Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch between the "real" vocab size of 31102 and the vocab size specified in config.json. However, the models work correctly and all evaluations were done under these circumstances. See this issue for more information.
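
If this matters for your application, the mismatch can be inspected with a small check like the following sketch (not part of the original repository; it only compares the tokenizer's actual vocab size with the value stored in config.json):

from transformers import AutoConfig, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

# The tokenizer reflects the "real" vocab size (31102),
# while config.vocab_size reports the value used at training time.
print(len(tokenizer), config.vocab_size)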

The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total using a batch size of 128. We pretty much follow the ELECTRA training procedure used for BERTurk.

Model weights

Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

Model Downloads
dbmdz/bert-base-italian-cased config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-uncased config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-xxl-cased config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-xxl-uncased config.json • pytorch_model.bin • vocab.txt
dbmdz/electra-base-italian-xxl-cased-discriminator config.json • pytorch_model.bin • vocab.txt
dbmdz/electra-base-italian-xxl-cased-generator config.json • pytorch_model.bin • vocab.txt

Results

For results on downstream tasks like NER or PoS tagging, please refer to this repository.

Usage

With Transformers >= 2.3 our Italian BERT models can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")

To load the (recommended) Italian XXL BERT models, just use:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")

German Europeana BERT, DistilBERT, ELECTRA and ConvBERT

We use the open source Europeana newspapers that were provided by The European Library. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Model weights

The following models are available from the Hugging Face model hub:

Model Downloads
dbmdz/bert-base-german-europeana-cased See model hub
dbmdz/bert-base-german-europeana-uncased See model hub
dbmdz/electra-base-german-europeana-cased-discriminator See model hub
dbmdz/electra-base-german-europeana-cased-generator See model hub
dbmdz/convbert-base-german-europeana-cased See model hub
dbmdz/distilbert-base-german-europeana-cased See model hub

Results

For results on Historic NER, please refer to this repository.

Usage

With Transformers >= 2.3 our German Europeana BERT models can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased")

The German Europeana BERT uncased model can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-uncased")

French Europeana BERT and ELECTRA

We use the open source Europeana newspapers that were provided by The European Library. The final training corpus has a size of 63GB and consists of 11,052,528,456 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Model weights

Model Downloads
dbmdz/bert-base-french-europeana-cased See model hub
dbmdz/electra-base-french-europeana-cased-discriminator See model hub
dbmdz/electra-base-french-europeana-cased-generator See model hub

Usage

With Transformers >= 2.3 our French Europeana BERT and ELECTRA models can be loaded like:

from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-french-europeana-cased"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

The ELECTRA (discriminator) model can be used with:

from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-french-europeana-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
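
For replaced-token detection specifically, the discriminator head can also be loaded via ElectraForPreTraining. This is only a sketch based on the generic Transformers API (recent 4.x versions; the example sentence is hypothetical), not an official example from this repository:

import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "dbmdz/electra-base-french-europeana-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("Paris est la capitale de la France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Positive logits mark tokens the discriminator considers "replaced"
print((logits > 0).int().tolist())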

Turkish BERT: BERTurk, DistilBERTurk, ELECTRA and ConvBERTurk

BERTurk is a community-driven set of cased models for Turkish.

Some of the datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, which also chose the name for the BERT model: BERTurk.

The final training corpus has a size of 35GB and 4,404,976,662 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Additionally, we trained a distilled version of BERTurk: DistilBERTurk, which uses knowledge distillation from BERTurk as the teacher model. More information on distillation can be found in the excellent "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter" paper by Sanh et al. (2019).

Furthermore, we provide cased and uncased models trained with a larger vocab size (128k instead of 32k).

We also trained small and base ELECTRA models. ELECTRA is a new method for self-supervised language representation learning. More details about ELECTRA can be found in the ICLR paper.

In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented in the "ConvBERT: Improving BERT with Span-based Dynamic Convolution" paper.

Evaluation of our models can be found in this repository.

We've also trained an ELECTRA (cased) model on the recently released Turkish part of the multilingual C4 (mC4) corpus from the AI2 team.

After filtering documents with a broken encoding, the training corpus has a size of 242GB resulting in 31,240,963,926 tokens.

Model weights

All trained models are available from the DBMDZ Hugging Face model hub page and can be loaded using their model names.

Results

For results on PoS tagging or NER tasks, please refer to this repository.

Usage

With Transformers >= 2.3 our BERTurk cased model can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased")

The DistilBERTurk model can be loaded with:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased")

Our ELECTRA models can be used with Transformers >= 2.8 and can be loaded with:

from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")

and

from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")

Our ConvBERT model can be used with Transformers >= 4.3 and can be loaded with:

from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-cased")
model = AutoModelWithLMHead.from_pretrained("dbmdz/convbert-base-turkish-cased")

Ukrainian ELECTRA

The source data for the Ukrainian ELECTRA model consists of two corpora.

The final training corpus has a size of 30GB and consists of exactly 2,402,761,324 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Model weights

Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!

Model Downloads
dbmdz/electra-base-ukrainian-cased-discriminator See model hub
dbmdz/electra-base-ukrainian-cased-generator See model hub

Results

For results on PoS tagging and NER downstream tasks, please refer to this repository.

Usage

With Transformers >= 2.3 our Ukrainian ELECTRA model can be loaded like:

from transformers import AutoModelWithLMHead, AutoTokenizer

model_name = "dbmdz/electra-base-ukrainian-cased-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)

German GPT-2 model

The German GPT-2 model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model.

For training we use pretty much the same corpora as used for training the DBMDZ BERT model. We created a 50K byte-level BPE vocab based on the training corpora.
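
A byte-level BPE vocab of this kind can be created with the Hugging Face tokenizers library. The following is only a sketch under assumed file paths and special tokens, not our actual training script:

from tokenizers import ByteLevelBPETokenizer

# Hypothetical paths to the plain-text training corpus
corpus_files = ["corpus/part-000.txt", "corpus/part-001.txt"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=corpus_files,
    vocab_size=50000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)

# Writes vocab.json and merges.txt into the given directory
tokenizer.save_model("german-gpt2-vocab")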

The model was trained on one v3-8 TPU over the whole training corpus for 20 epochs.

Detailed information can be found in this repository.

Note: we have released a re-trained version of this model with better results!

Model weights

In addition to the German GPT-2 model, we release a GPT-2 model that was fine-tuned on a normalized version of Faust I and II.

Model Downloads
dbmdz/german-gpt2 See model hub
dbmdz/german-gpt2-faust (old model) See model hub

Usage

With Transformers >= 2.3 our German GPT-2 model can be used for text generation:

from transformers import pipeline

pipe = pipeline("text-generation", model="dbmdz/german-gpt2",
                tokenizer="dbmdz/german-gpt2")

text = pipe("Der Sinn des Lebens ist es", max_length=800)[0]["generated_text"]

print(text)

Historic Language Models

We release several BERT-based language models, including a multilingual Historic language model that covers German, French, English, Finnish and Swedish, as well as monolingual Historic language models for English, Finnish and Swedish. The multilingual Historic language model was trained on 130GB of texts extracted from Europeana Newspapers and the British Library corpus.

More details about our Historic Language Models can be found in this repository.

Model weights

All models are available on the Hugging Face model hub:

Model identifier Model Hub link
dbmdz/bert-base-historic-multilingual-cased here
dbmdz/bert-base-historic-english-cased here
dbmdz/bert-base-finnish-europeana-cased here
dbmdz/bert-base-swedish-europeana-cased here

We also released smaller Historic Language Models:

Model identifier Model Hub link
dbmdz/bert-tiny-historic-multilingual-cased here
dbmdz/bert-mini-historic-multilingual-cased here
dbmdz/bert-small-historic-multilingual-cased here
dbmdz/bert-medium-historic-multilingual-cased here
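
As a usage sketch that mirrors the examples in the sections above, the multilingual Historic model can be loaded like:

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")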

Historic Dutch

We train a language model on the Delpher Corpus, which includes digitized texts from Dutch newspapers ranging from 1618 to 1879.

The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via wc), resulting in a total corpus size of 21GB.

More details about the Historic Dutch language model can be found in this repository.

Model weights

The following models for Historic Dutch are available on the Hugging Face Model Hub:

Model identifier Model Hub link
dbmdz/bert-base-historic-dutch-cased here

License

All models are licensed under MIT.

Hugging Face model hub

All models are available on the Hugging Face model hub.

Papers

Here you can find a list of papers that used one of our trained models. Feel free to open a PR/issue if you want your paper to be included!

Contact (Bugs, Feedback, Contribution and more)

For questions about our BERT models, just open an issue here 🤗

Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the Hugging Face team, it is possible to download both cased and uncased models from their S3 storage 🤗