GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images
Lei Kang, Pau Riba, Yaxing Wang, Marçal Rusiñol, Alicia Fornés, and Mauricio Villegas
Accepted to ECCV 2020.
A novel method that produces credible handwritten word images by conditioning the generative process on both calligraphic style features and textual content.
To install the required dependencies, run the following command in the root directory of the project:
pip install -r requirements.txt
The main experiments are run on IAM, since it is a multi-writer dataset. Once you have obtained a model pretrained on IAM, you can also evaluate it on other datasets such as GW, RIMES, Esposalles, and CVL.
First download the IAM word-level dataset, then run
prepare_dataset.sh [folder of iamdb dataset]
to prepare the dataset for training. Afterwards, point load_data.py to your prepared folder (search for img_base).
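The exact layout of load_data.py may differ, but the edit amounts to setting the image base path to the folder produced by prepare_dataset.sh. As a minimal sketch (the path below is a placeholder, not the real location):
img_base = '/path/to/prepared/iam/words/'  # hypothetical path; point this to your prepared IAM folder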
Then run the training with:
./run_train_scratch.sh
Note: during training, two folders will be created: imgs/ contains intermediate results for one batch (see the write_image function in modules_tro.py for details), and save_weights/ contains the saved weights, which end with .model.
If you have already trained a model, you can use that model for further training by running:
./run_train_pretrain.sh [id]
In this case, [id] should be the id of the model in the save_weights directory, e.g. 1000 if you have a model named contran-1000.model.
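For example, to continue training from the checkpoint contran-1000.model stored in save_weights/, run:
./run_train_pretrain.sh 1000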
We provide two test scripts whose names start with tt.:
tt.test_single_writer.4_scenarios.py: please refer to Figure 4 of our paper for details. At the beginning of this file, uncomment the relevant blocks in turn to run the four scenario experiments one by one.
tt.word_ladder.py: please refer to Figure 7 of our paper for details. It's fun :-P
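Assuming the scripts are invoked directly with Python (this invocation is an assumption; adjust it if your setup differs):
python tt.test_single_writer.4_scenarios.py
python tt.word_ladder.py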
If you use the code for your research, please cite our paper:
To be updated...