🚀 New in version 1.1.0: support for multilabel and regression. See the examples. 🚀
The BERT model can process texts of a maximal length of 512 tokens (roughly speaking, tokens are equivalent to words). This is a consequence of the model architecture and cannot be adjusted directly. A discussion of this issue can be found here. A method to overcome it was proposed by Devlin (one of the authors of BERT) in the previously mentioned discussion: comment. The main goal of our project is to implement this method and allow the BERT model to process longer texts during prediction and fine-tuning. We dub this approach BELT (BERT For Longer Texts).
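To give a rough idea of what this means in practice, here is a minimal sketch of the chunk-and-pool idea using plain HuggingFace transformers: the text is tokenized without truncation, split into overlapping chunks that fit into the 512-token limit, each chunk is classified separately, and the per-chunk probabilities are pooled. The chunk size, stride, and mean pooling below are illustrative assumptions, not the belt-nlp implementation; the library classes described later wrap this logic for you.

```python
# Illustration only: a simplified sketch of the chunk-and-pool idea, not the belt-nlp implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def predict_long_text(text: str, chunk_size: int = 510, stride: int = 255) -> torch.Tensor:
    # Tokenize the whole text without truncation (no [CLS]/[SEP] added yet).
    tokens = tokenizer(text, add_special_tokens=False, return_tensors="pt")["input_ids"][0]
    chunk_probs = []
    for start in range(0, max(len(tokens), 1), stride):
        chunk = tokens[start : start + chunk_size]
        # Wrap each chunk with [CLS] ... [SEP] so it looks like a regular input of at most 512 tokens.
        input_ids = torch.cat(
            [
                torch.tensor([tokenizer.cls_token_id]),
                chunk,
                torch.tensor([tokenizer.sep_token_id]),
            ]
        ).unsqueeze(0)
        with torch.no_grad():
            logits = model(input_ids).logits
        chunk_probs.append(logits.softmax(dim=-1))
    # Pool the per-chunk probabilities into a single prediction, here by taking the mean.
    return torch.cat(chunk_probs).mean(dim=0)
```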
More technical details are described in the documentation. We have also prepared a comprehensive blog post: part 1, part 2.
The limitation of the BERT model to 512 tokens dates back to the very beginnings of transformer models. Indeed, the attention mechanism, introduced in the groundbreaking 2017 paper Attention is all you need, scales quadratically with the sequence length. Unlike RNN or CNN models, which can process sequences of arbitrary length, transformers with full attention (like BERT) are infeasible (or very expensive) to run on long sequences. To overcome this issue, alternative approaches with sparse attention mechanisms were proposed in 2020: BigBird and Longformer.
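To see concretely where the quadratic scaling comes from, note that full attention compares every token with every other token, so the score matrix alone has seq_len × seq_len entries. A toy sketch with random tensors (the dimensions below are arbitrary):

```python
import torch

seq_len, d_model = 4096, 64
queries = torch.randn(seq_len, d_model)
keys = torch.randn(seq_len, d_model)

# Full attention compares every position with every other position.
scores = queries @ keys.T / d_model ** 0.5
print(scores.shape)  # torch.Size([4096, 4096]) -- memory and compute grow with seq_len squared
```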
Let us now clarify the key difference between the BELT approach to fine-tuning and the sparse attention models BigBird and Longformer: BELT is not a new model architecture but a fine-tuning and pooling procedure applied on top of standard pre-trained BERT or RoBERTa models (the package was previously named roberta_for_longer_texts). We encourage more research in this direction.

The project requires Python 3.9+ to run. We recommend training the models on a GPU, so it is necessary to install a version of torch compatible with the machine. First, check the version of the GPU drivers with the command nvidia-smi and choose the newest CUDA version compatible with these drivers according to this table (e.g. 11.1). Then install the torch build matching this CUDA version; here you can find which torch version is compatible with the CUDA version on your machine.
Another option is to use the CPU-only version of torch:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
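Whichever build you install, a quick check with standard torch calls confirms that the package imports and whether a GPU is visible; the CPU-only build will simply report False:

```python
import torch

print(torch.__version__)          # installed torch build
print(torch.cuda.is_available())  # True if the CUDA build sees a GPU, False for the CPU-only build
```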
Next, we recommend installing via pip:
pip3 install belt-nlp
If you want to clone the repo in order to run tests or notebooks, you can use the requirements.txt file.
Two main classes are implemented:
- BertClassifierTruncated - base binary classification model; longer texts are truncated to 512 tokens
- BertClassifierWithPooling - extended model for longer texts (more details in the documentation)

The main methods are:
- fit - fine-tune the model to the training set, using a list of raw texts and a list of labels
- predict_classes - calculate the list of classifications for the given list of raw texts; the model must be fine-tuned first
- predict_scores - calculate the list of probabilities for the given list of raw texts; the model must be fine-tuned first

By default, the standard English bert-base-uncased model is used as the pre-trained model. However, it is possible to use any BERT or RoBERTa model. To do this, use the parameter pretrained_model_name_or_path.
It can be either:
- the name of a model available in the HuggingFace hub, e.g. roberta-base
- a path to a local directory with the downloaded model, e.g. ../my_model_directory/
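Putting the pieces together, a minimal usage sketch could look as follows. Only the class names, the methods fit / predict_classes / predict_scores, and the parameter pretrained_model_name_or_path are taken from the description above; the import path and the remaining constructor arguments are assumptions for illustration, so check the documentation for the exact signature.

```python
# Minimal usage sketch; the import path and the hyperparameters are assumptions - see the docs.
from belt_nlp.bert_with_pooling import BertClassifierWithPooling

model = BertClassifierWithPooling(
    batch_size=16,
    learning_rate=5e-5,
    epochs=3,
    chunk_size=510,
    stride=256,
    minimal_chunk_length=1,
    pooling_strategy="mean",
    pretrained_model_name_or_path="roberta-base",  # any BERT/RoBERTa model name or a local directory
    device="cuda",
)

x_train = ["first long document ...", "second long document ..."]
y_train = [True, False]                   # binary labels for the base classifier

model.fit(x_train, y_train)               # fine-tune on raw texts and labels
classes = model.predict_classes(x_train)  # list of predicted classes
scores = model.predict_scores(x_train)    # list of predicted probabilities
```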
To make sure everything works properly, run the command pytest tests -rA. By default, during tests, models are trained on small samples on the CPU.
All examples use public datasets from the HuggingFace hub.
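For instance, raw texts and labels for fit can be pulled from a public dataset with the datasets library; the imdb dataset below is only an illustrative choice, not necessarily the one used in the example notebooks.

```python
from datasets import load_dataset

dataset = load_dataset("imdb")                                   # any public dataset from the HuggingFace hub
x_train = dataset["train"]["text"]                               # list of raw texts
y_train = [bool(label) for label in dataset["train"]["label"]]   # binary labels

# These lists can be passed directly to model.fit(x_train, y_train).
```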
The project was created at MIM AI by:
If you want to contribute to the library, see the contributing info.
See CHANGELOG.md.
See the LICENSE file for license rights and limitations (MIT).
The file requirements.txt can be updated using the command:
bash pip-freeze-without-torch.sh > requirements.txt
This script saves all dependencies of the currently active environment except torch.
In order to add the next version of the package to PyPI, do the following steps:
1. Update the package version in the file pyproject.toml.
2. Build the package by running python3.9 -m build from the main folder.
3. Upload the two newly created files from the dist directory with the command twine upload dist/*.