omni-us / research-GANwriting

Source code for ECCV20 "GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images"

License: MIT Python 3.7

GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images

A novel method that produces credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content.

Architecture

GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
Lei Kang, Pau Riba, Yaxing Wang, Marçal Rusiñol, Alicia Fornés, and Mauricio Villegas
Accepted to ECCV2020.

Software environment: Python 3.7; the required Python packages are listed in requirements.txt.

Setup

To install the required dependencies, run the following command in the root directory of the project:

pip install -r requirements.txt
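
If you want to confirm the installation before training, a quick hedged sanity check like the one below can help; the PyTorch import is an assumption about the contents of requirements.txt, not something this README states explicitly:

```python
import sys

# Warn (rather than fail) if the interpreter differs from the Python 3.7
# badge above.
if sys.version_info[:2] != (3, 7):
    print(f'warning: this project targets Python 3.7, '
          f'found {sys.version.split()[0]}')

import torch  # assumed to be in requirements.txt for this GAN codebase
print('torch', torch.__version__, 'imported successfully')
```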

Dataset preparation

The main experiments are run on IAM, since it is a multi-writer dataset. Furthermore, once you have obtained a model pretrained on IAM, you can evaluate it on other datasets such as GW, RIMES, Esposalles, and CVL.

How to train it?

First download the IAM word level dataset, then execute prepare_dataset.sh [folder of iamdb dataset] to prepare the dataset for training.
Afterwards, set your dataset folder in load_data.py (search for img_base).
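
Concretely, the edit in load_data.py amounts to pointing one path variable at your prepared data; the variable name img_base comes from this README, while the path below is just a placeholder:

```python
# In load_data.py: point img_base at the folder produced by
# prepare_dataset.sh. The path below is a placeholder; replace it
# with your own IAM location.
img_base = '/path/to/prepared/iamdb/'
```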

Then run the training with:

./run_train_scratch.sh

Note: during training, two folders will be created: imgs/ contains the intermediate results of one batch (see the function write_image in modules_tro.py for details), and save_weights/ contains the saved weights, whose filenames end with .model.
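
For intuition, here is a minimal sketch of what a write_image-style snapshot helper typically does; the actual write_image in modules_tro.py may differ in signature and layout:

```python
import os

import numpy as np
from PIL import Image

def write_image_sketch(batch, step, out_dir='imgs'):
    # batch: (N, H, W) array of generated word images in [-1, 1],
    # a common output range for GAN generators (an assumption here).
    os.makedirs(out_dir, exist_ok=True)
    imgs = ((np.asarray(batch) + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    tiled = np.concatenate(list(imgs), axis=0)  # stack the words vertically
    Image.fromarray(tiled).save(os.path.join(out_dir, f'step-{step}.png'))
```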

If you have already trained a model, you can use that model for further training by running:

./run_train_pretrain.sh [id]

In this case, [id] should be the id of the model in the save_weights directory, e.g. 1000 if you have a model named contran-1000.model.
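
For illustration, resuming from such a checkpoint in plain PyTorch might look like the hedged sketch below; the naming pattern save_weights/contran-&lt;id&gt;.model comes from this README, while treating the file as a loadable state dict is an assumption:

```python
import torch

# Hedged sketch of resuming from a saved checkpoint; run_train_pretrain.sh
# handles this for you, this only illustrates the file naming.
model_id = 1000
state = torch.load(f'save_weights/contran-{model_id}.model',
                   map_location='cpu')
# model.load_state_dict(state)  # `model` being your instantiated network
```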

How to test it?

We provide two test scripts, identifiable by the tt. prefix in their filenames.

Citation

If you use the code for your research, please cite our paper:

To be updated...