OCR-D wrapper for the Kraken OCR engine
This package offers OCR-D compliant workspace processors for (some of) the functionality of Kraken.
(Each processor is a parameterizable step in a configurable workflow of the OCR-D functional model. There are usually various alternative processor implementations for each step. Data is represented with METS and PAGE.)
It includes image preprocessing (binarization), layout analysis (region and line+baseline segmentation), and text recognition.
This is the best option if you want to run the software in a Docker container. You need to have Docker installed. Then pull the prebuilt image:
docker pull ocrd/kraken
To run with Docker:
docker run --rm \
-v path/to/workspaces:/data \
-v path/to/models:/usr/local/share/ocrd-resources \
ocrd/kraken ocrd-kraken-recognize ...
# or ocrd-kraken-segment or ocrd-kraken-binarize
This is the best option if you want to use the stable, released version from PyPI:
pip install ocrd_kraken
Use this option if you want to change the source code or install the latest, unpublished changes from git.
We strongly recommend using a venv.
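For example, a minimal venv setup before installing (the directory name venv is arbitrary):

```shell
# Create and activate an isolated Python environment for the install below
python3 -m venv venv
. venv/bin/activate
pip install -U pip   # optional: make sure pip is recent
```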
git clone https://github.com/OCR-D/ocrd_kraken
cd ocrd_kraken
sudo make deps-ubuntu # or manually from git or via ocrd_all
make deps # or pip install -r requirements.txt
make install # or pip install .
Kraken uses data-driven (neural) models for segmentation and recognition, but comes with no pretrained "official" models.
There is a public repository of community-provided models, which can also be queried and downloaded via the kraken standalone CLI. (See the Kraken docs for details.)
For the OCR-D wrapper, since all OCR-D processors must resolve file/data resources in a standardized way, there is a general mechanism for managing models, i.e. installing and using them by name. We currently manage our own list of recommended models (without delegating to the above repo).
Models always use the filename suffix .mlmodel, but are loaded just by their basename.
See the OCR-D model guide and ocrd resmgr --help for more information.
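As a sketch, models can be listed and installed by name via the resource manager (the model name below is a placeholder, not a recommendation; check the list output for what is actually available):

```shell
# List the resources known for the Kraken processors,
# then install one by name for recognition:
ocrd resmgr list-available | grep -A5 ocrd-kraken
ocrd resmgr download ocrd-kraken-recognize NAME.mlmodel   # NAME is a placeholder
```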
For details, see docstrings in the individual processors and ocrd-tool.json descriptions, or simply --help.
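For example, the processors can be chained on an OCR-D workspace via the standard OCR-D CLI options; a sketch (the fileGrp names are illustrative, and a suitable recognition model must be installed first):

```shell
cd path/to/workspace   # must contain a mets.xml
# binarize page images, then segment, then recognize
ocrd-kraken-binarize -I OCR-D-IMG -O OCR-D-BIN
ocrd-kraken-segment -I OCR-D-BIN -O OCR-D-SEG
ocrd-kraken-recognize -I OCR-D-SEG -O OCR-D-OCR -P model NAME.mlmodel   # NAME is a placeholder
```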
Available OCR-D processors are:

- ocrd-kraken-binarize (nlbin thresholding)
  - adds AlternativeImage files (per page, region or line) to the output fileGrp
- ocrd-kraken-segment (blla neural segmentation)
  - adds TextRegions to Page (if level-of-operation=page) or TableRegions (if table)
  - adds TextLines (with Baseline) to TextRegions (for all level-of-operation)
  - (optionally removes existing segmentation, when overwrite_segments)
- ocrd-kraken-recognize (neural recognition)
  - (currently only works on segmentation with Baselines, falls back to center-normalized bboxes)
  - adds Words to TextLines
  - adds Glyphs to Words
  - adds text results to TextEquiv (removing existing TextEquiv if overwrite_text)

To run the tests:

make test
This downloads test data from https://github.com/OCR-D/assets under repo/assets, and runs some basic tests of the Python API.
Set PYTEST_ARGS="-s --verbose" to see log output (-s) and individual test results (--verbose).
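For example:

```shell
# Run the test suite with log output and per-test results
make test PYTEST_ARGS="-s --verbose"
```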