DaNLP is a repository of Natural Language Processing resources for the Danish language: a collection of available datasets and models for a variety of NLP tasks. The aim is to make Danish NLP easier to adopt for practitioners in industry, which is why the project is licensed to allow commercial use. The project features code examples showing how to use the datasets and models in popular NLP frameworks such as spaCy, Transformers and Flair, as well as deep learning frameworks such as PyTorch. See our documentation pages for more details about our models and datasets, and for definitions of the modules provided through the DaNLP package.
If you are new to NLP or want to know more about the project in a broader perspective, you can start on our microsite.
To get started using DaNLP in your Python project, simply install the pip package. Note that the default pip package does not install all NLP libraries, so that you are free to limit the dependencies to what you actually use. We also provide an installation option that installs all the required dependencies at once.
To get started using DaNLP simply install the project with pip:
pip install danlp
Note that the default installation of DaNLP does not install other NLP libraries such as Gensim, spaCy, flair or Transformers.
This keeps the installation as minimal as possible and lets you choose how to, e.g., load word embeddings: with spaCy, flair or Gensim. Therefore, depending on the functions you need, you should install one or more of the following: `pip install flair`, `pip install spacy` and/or `pip install gensim`.
Alternatively, if you want to install all the required dependencies, including the packages mentioned above, you can do:
pip install danlp[all]
You can check the requirements.txt file to see which versions the packages have been tested with.
If you want to be able to use the latest developments before they are released in a new pip package, or you want to modify the code yourself, then clone this repo and install from source.
git clone https://github.com/alexandrainst/danlp.git
cd danlp
# minimum installation
pip install .
# or install all the packages
pip install .[all]
To install the dependencies used in the package with the tested versions:
pip install -r requirements.txt
Alternatively, you can install the latest version from GitHub using:
pip install git+https://github.com/alexandrainst/danlp.git
To quickly get started with DaNLP and try out the models, you can use our Docker image. To start an IPython session, simply run:
docker run -it --rm alexandrainst/danlp ipython
If you want to run a <script.py> in your current working directory, you can run:
docker run -it --rm -v "$PWD":/usr/src/app -w /usr/src/app alexandrainst/danlp python <script.py>
Read more in our documentation pages.
Natural Language Processing is an active area of research and it consists of many different tasks. The DaNLP repository provides an overview of Danish models for some of the most common NLP tasks (and is continuously evolving).
Here is the list of NLP tasks we currently cover in the repository.
You can also find some of our transformers models on HuggingFace.
If you are interested in Danish support for any specific NLP task you are welcome to get in contact with us.
We also recommend checking out the list of Danish NLP corpora/tools/models maintained by Finn Årup Nielsen (warning: not all items are available for commercial use; check the license).
The number of datasets in the Danish language is limited. The DaNLP repository provides an overview of the available Danish datasets that can be used for commercial purposes.
The DaNLP package allows you to download and preprocess datasets.
You will find examples that show how to use NLP in Danish (using our models or others) in our benchmark scripts and Jupyter notebook tutorials.
This project maintains a blog in Danish on Medium, where we write about Danish NLP; over time we will also present real cases of how NLP is applied in Danish companies.
To help you navigate the repository, here is an overview of its structure:
.
├── danlp                  # Source files
│   ├── datasets           # Code to load datasets with different frameworks
│   └── models             # Code to load models with different frameworks
├── docker                 # Docker image
├── docs                   # Documentation and files for setting up Read the Docs
│   ├── docs               # Documentation for tasks, datasets and frameworks
│   │   ├── tasks          # Documentation for NLP tasks with benchmark results
│   │   ├── frameworks     # Overview of the different frameworks used
│   │   ├── gettingstarted # Guides for installation and getting started
│   │   └── imgs           # Images used in documentation
│   └── library            # Files used for Read the Docs
├── examples               # Examples, tutorials and benchmark scripts
│   ├── benchmarks         # Scripts for reproducing benchmark results
│   └── tutorials          # Jupyter notebook tutorials
└── tests                  # Tests for continuous integration with Travis
If you want to contribute to the DaNLP repository and make it better, your help is very welcome. You can contribute to the project in many ways:
The DaNLP repository is maintained by the Alexandra Institute which is a Danish non-profit company with a mission to create value, growth and welfare in society. The Alexandra Institute is a member of GTS, a network of independent Danish research and technology organisations.
Between 2019 and 2020, the work on this repository was part of the Dansk For Alle performance contract (RK) allocated to the Alexandra Institute by the Danish Ministry of Higher Education and Science. Since 2021, the project has been funded through the Dansk NLP activity plan, which is part of the Digital sikkerhed, tillid og dataetik performance contract.
An overview of the project can be found on our microsite.
If you want to cite this project, please use the following BibTeX entry:
@inproceedings{danlp2021,
title = {{DaNLP}: An open-source toolkit for Danish Natural Language Processing},
author = {Brogaard Pauli, Amalie and
Barrett, Maria and
Lacroix, Ophélie and
Hvingelby, Rasmus},
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa 2021)},
month = jun,
year = "2021"
}
Read the paper here.
See our documentation pages for references to specific models or datasets.