tscan 0.10 (c) TiCC/ 1998 - 2023
Tilburg centre for Cognition and Communication, Tilburg University.
UiL-OTS, Utrecht University
Language Machines, Centre for Language Studies, Nijmegen
T-Scan is distributed under the GNU Affero General Public License (see the file COPYING).
T-Scan is an analysis tool that assesses the complexity of Dutch texts. It is based on original work by Rogier Kraf (Utrecht University; see Kraf et al., 2009). The code has been reimplemented and extended by Ko van der Sloot (Tilburg University) and Martijn van der Klis (Utrecht University), and is currently maintained and developed further by Luka van der Plas and Sheean Spoel (both Utrecht University).
This repository contains the T-Scan source code, allowing you to run it on your own system. In addition, T-Scan is available as a web application and webservice through https://tscan.hum.uu.nl/tscan/. You can create an account, or contact us if your institution is not (yet) recognized.
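Since the webservice is built on CLAM (see below), it also exposes CLAM's RESTful interface. A minimal sketch, assuming an account with password-based access; the hosted server may use institutional login instead, in which case this will not work as-is:

# Hedged sketch: request the webservice root, which a CLAM server answers
# with an XML project listing. USER and PASSWORD are placeholders.
$ curl -u USER:PASSWORD https://tscan.hum.uu.nl/tscan/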
Extensive documentation (in Dutch) can be found in docs/tscanhandleiding.pdf.
T-Scan depends heavily on other software, such as Frog, Wopr and Alpino. Installation is not trivial: to successfully build T-Scan from the tarball, you need these dependencies installed first.
We strongly recommend using Docker to install T-Scan. Be aware that T-Scan and its dependencies are memory intensive; we recommend at least 16 GB of RAM for proper operation. If Wopr is used (which it is by default!), more RAM is required: 32 GB is recommended.
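To check beforehand how much memory is actually available to the Docker daemon, one option (an illustrative sketch, not from the source) is:

# Show the total memory available to the Docker daemon, in bytes.
$ docker info --format '{{.MemTotal}}'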
This version of T-Scan can run directly from Docker:
$ docker compose up
Default address: http://localhost:8830
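Once the containers are up, you can verify that the web interface responds, using the default address above:

# Expect an HTTP 200 (or a redirect) once the service has finished starting.
$ curl -I http://localhost:8830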
To speed up rebuilds, the Dockerfile makes extensive use of caching. The following can be found in docker/data:
- build-cache: the output of the compiled C++ code; this speeds up a rebuild when the code didn't change
- compound-dependencies: dependencies for the compound splitter, nearly 820 MB which you really don't want to download again on every rebuild
- compound-dependencies/dist: the Python package for the compound splitter
- packages: the prebuilt dependencies (Frog, Ucto, etc.)
Only the build-cache has automatic invalidation; if you want to update your dependencies, you need to delete (parts of) this cache. The cache will be automatically recreated during startup.
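For example, to force the prebuilt dependencies (Frog, Ucto, etc.) to be fetched again on the next startup, you can remove that part of the cache by hand; a sketch using the directory names above:

# The packages cache will be recreated automatically during the next startup.
$ rm -rf docker/data/packages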
If you do not want to use the provided Dockerfile, first make sure you have all necessary dependencies, then compile and install as follows:
$ bash bootstrap.sh
$ ./configure --prefix=/path/to/installation/
$ make
$ sudo make install
$ cd webservice
$ sudo python3 setup.py install
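To verify that the binary landed where you expect, a generic sanity check (assuming /path/to/installation/bin is on your PATH; nothing here is specific to T-Scan):

# Confirm the tscan binary is installed and its shared libraries resolve.
$ command -v tscan
$ ldd "$(command -v tscan)" | grep -i 'not found' || echo "all libraries resolved"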
Before you can use T-Scan you need to start the background servers (you may need to edit the scripts to set ports and paths):
$ cd tscan/webservice
$ ./startalpino.sh
$ ./startfrog.sh
$ ./startwopr20.sh (starts Wopr to calculate forward probabilities)
$ ./startwopr02.sh (starts Wopr to calculate backward probabilities)
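You can check that the servers came up before proceeding; a sketch, where the port numbers are placeholders for whatever you configured in the start scripts:

# Probe each configured server port (the numbers here are placeholders).
$ for port in 7001 7002 7020 7002; do nc -z localhost "$port" && echo "port $port is up"; done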
Then you can run T-Scan from the command line, which will produce a FoLiA XML file:
$ cd tscan
$ cp tscan.cfg.example tscan.cfg
(edit tscan.cfg if necessary)
$ tscan --config=tscan.cfg input.txt
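To take a quick look at the resulting FoLiA XML you can pretty-print it; a sketch, where input.tscan.xml is a hypothetical name and the actual output filename on your system may differ:

# Pretty-print the first lines of the generated FoLiA XML.
$ xmllint --format input.tscan.xml | head -n 20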
Alternatively, use the web application/webservice, which you can start with:
$ cd tscan/webservice/tscanservice
$ clamservice tscanservice.tscan  # starts the CLAM service for T-Scan
And then navigate to the host and port specified.
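To confirm that the local service is up, you can request the service root; a sketch, where 8080 is a placeholder for the port that clamservice reports on startup:

# Expect an HTTP 200 response from CLAM once the service is running.
$ curl -I http://localhost:8080/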
Tests can be run using make check. This requires running the Frog services:
$ cd webservice
$ ./startfrog.sh &
Pre-parsed Alpino files are included. It is also possible to remove these and update them for a newer version of Alpino:
$ ./startalpino.sh &
$ cd ../tests/
$ rm alpino_lookup.data
$ rm *.alpino
$ ./testall
$ ./merge_alpino_output.py
Note: the output can change when a different version of Alpino or Frog is used.
Word prevalence values (in data/prevalence_nl.data and data/prevalence_be.data) courtesy of Keuleers et al., Center for Reading Research, Ghent University.
Certain parts of T-Scan use data from the Referentiebestand Nederlands, which we cannot distribute due to licensing restrictions, so this functionality will not be available.
Certain other data is too large for GitHub, but will be downloaded for you automatically by the ./downloaddata.sh script.
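A usage sketch, assuming the script is run from the repository root (which its path suggests):

# Download the data files that are too large for GitHub.
$ ./downloaddata.sh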