monniert / docExtractor

(ICFHR 2020 oral) Code for "docExtractor: An off-the-shelf historical document element extraction" paper
https://www.tmonnier.com/docExtractor
MIT License

docExtractor

PyTorch implementation of the paper "docExtractor: An off-the-shelf historical document element extraction" (accepted as an oral at ICFHR 2020)

Check out our paper and webpage for details!

teaser.jpg

If you find this code useful, don't forget to star the repo ⭐ and cite the paper:

@inproceedings{monnier2020docExtractor,
  title={{docExtractor: An off-the-shelf historical document element extraction}},
  author={Monnier, Tom and Aubry, Mathieu},
  booktitle={ICFHR},
  year={2020},
}

Installation :construction_worker:

Prerequisites

Make sure you have Anaconda installed (version 4.7.10 or later; older versions may fail to install the correct dependencies). If not, follow the installation instructions provided at https://docs.anaconda.com/anaconda/install/.

1. Create conda environment

conda env create -f environment.yml
conda activate docExtractor

2. Download resources and models

The following command downloads the resources and trained models:

./download.sh

NB: gdown may occasionally hang; if so, download the files by hand, then unzip and move them to the appropriate folders (see the corresponding scripts):
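If you do download the archives manually, the unzip-and-move step can be sketched as below. The file and folder names in the example are placeholders; check the download script for the real ones.

```python
import zipfile
from pathlib import Path

def unzip_and_move(archive: str, dest_dir: str) -> Path:
    """Extract a downloaded .zip archive into dest_dir (created if missing)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    return dest

# Hypothetical usage (see download.sh for the actual names/destinations):
# unzip_and_move("syndoc.zip", "datasets/syndoc")
```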

How to use :rocket:

There are three main use cases you may be interested in:

  1. perform element extraction (off-the-shelf using our trained network or a fine-tuned one)
  2. build our segmentation method from scratch
  3. fine-tune network on custom datasets

Demo: in the demo folder, we provide a jupyter notebook and its html version detailing a step-by-step pipeline to predict segmentation maps for a given image.

preview.png

1. Element extraction

CUDA_VISIBLE_DEVICES=gpu_id python src/extractor.py --input_dir inp --output_dir out

Main args

Additional args

NB: check src/utils/constant.py for labels mapping
NB: the code will automatically run on CPU if no GPU is provided or found
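Conceptually, the labels mapping turns an integer label map (as predicted by the segmentation network) into a colored segmentation image. A minimal sketch of that idea follows; the palette values here are made up, and the real mapping lives in src/utils/constant.py.

```python
# Illustrative label -> RGB palette (NOT the repo's actual mapping; see
# src/utils/constant.py for the real label definitions).
PALETTE = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0)}

def colorize(label_map):
    """Map an H x W integer label map to nested lists of RGB tuples.
    Unknown labels fall back to white."""
    return [[PALETTE.get(label, (255, 255, 255)) for label in row]
            for row in label_map]
```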

2. Build our segmentation method from scratch

This would result in a model similar to the one located in models/default/model.pkl.

a) Generate SynDoc dataset

You can skip this step if you have already downloaded SynDoc using the script above.

python src/syndoc_generator.py -d dataset_name -n nb_train --merged_labels

b) Train neural network on SynDoc

CUDA_VISIBLE_DEVICES=gpu_id python src/trainer.py --tag tag --config syndoc.yml

3. Fine-tune network on custom datasets

To fine-tune on a custom dataset, you have to:

  1. annotate a dataset pixel-wise: we recommend using VGG Image Annotator (link) and our ViaJson2Image tool
  2. split the dataset into train, val, and test splits and move it to the datasets folder
  3. create a configs/example.yml config with corresponding dataset name and a pretrained model name (e.g. default)
  4. train segmentation network with python src/trainer.py --tag tag --config example.yml

Then you can perform extraction with the fine-tuned network by specifying the model tag.

NB: the val and test splits can be empty; you just won't get evaluation metrics
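The split step (2) can be sketched as a simple random partition of files. The train/val/test directory layout below is an assumption; adapt it to whatever layout the configs expect.

```python
import random
import shutil
from pathlib import Path

def split_dataset(src_dir, dst_dir, val_ratio=0.1, test_ratio=0.1, seed=0):
    """Randomly partition the files in src_dir into dst_dir/{train,val,test}.
    Returns the number of files placed in each split."""
    files = sorted(Path(src_dir).iterdir())
    random.Random(seed).shuffle(files)
    n_val = int(len(files) * val_ratio)
    n_test = int(len(files) * test_ratio)
    splits = {
        "val": files[:n_val],
        "test": files[n_val:n_val + n_test],
        "train": files[n_val + n_test:],
    }
    for name, subset in splits.items():
        out = Path(dst_dir) / name
        out.mkdir(parents=True, exist_ok=True)
        for f in subset:
            shutil.copy(f, out / f.name)
    return {name: len(subset) for name, subset in splits.items()}
```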

Annex tools :hammer_and_wrench:

1. ViaJson2Image - Convert VIA annotations into segmentation map images

python src/via_converter.py --input_dir inp --output_dir out --file via_region_data.json

NB: by default, regions are converted using the illustration label
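For context, a VIA 2.x via_region_data.json file stores polygon regions per image under shape_attributes entries. A minimal sketch of pulling the polygon vertices out of such a file (this is generic VIA parsing, not the repo's converter) could look like:

```python
import json

def load_via_polygons(via_json_path):
    """Return {image_filename: [list of (x, y) polygon vertices]} from a
    VIA 2.x via_region_data.json file. 'regions' may be a list or, in older
    VIA exports, a dict keyed by region index."""
    with open(via_json_path) as f:
        data = json.load(f)
    polygons = {}
    for entry in data.values():
        regions = entry["regions"]
        if isinstance(regions, dict):  # older VIA versions use a dict
            regions = list(regions.values())
        pts = []
        for region in regions:
            shape = region["shape_attributes"]
            if shape.get("name") == "polygon":
                pts.append(list(zip(shape["all_points_x"],
                                    shape["all_points_y"])))
        polygons[entry["filename"]] = pts
    return polygons
```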

2. SyntheticLineGenerator - Generate a synthetic line dataset for OCR training

python src/synline_generator.py -d dataset_name -n nb_doc

NB: you may want to remove text translation augmentations by modifying synthetic.element.TEXT_FONT_TYPE_RATIO.

3. IIIFDownloader - Download IIIF images data from manifest urls

python src/iiif_downloader.py -f filename.txt -o output_dir --width W --height H

NB: be aware that if both the width and height arguments are specified, the aspect ratio won't be kept. Common usage is to specify a fixed height only.
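This behavior follows the IIIF Image API size parameter: `W,H` forces exact dimensions (distorting the image), while `,H` or `W,` preserves the aspect ratio. A sketch of building such an image URL, with a placeholder base URL and IIIF Image API 2.x conventions assumed:

```python
def iiif_image_url(base, width=None, height=None):
    """Build a IIIF Image API 2.x URL: {base}/{region}/{size}/{rotation}/
    {quality}.{format}. 'W,H' distorts; ',H' or 'W,' keeps the ratio."""
    if width and height:
        size = f"{width},{height}"   # exact size, aspect ratio NOT kept
    elif height:
        size = f",{height}"          # fixed height, ratio preserved
    elif width:
        size = f"{width},"           # fixed width, ratio preserved
    else:
        size = "full"                # original size ("max" in IIIF 3.0)
    return f"{base}/full/{size}/0/default.jpg"

# Hypothetical usage:
# iiif_image_url("https://example.org/iiif/img", height=800)
```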