
[SIGGRAPH Asia 2021] DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning
MIT License

DeepVecFont

This is the official PyTorch implementation of the paper:

Yizhi Wang and Zhouhui Lian. DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning. SIGGRAPH Asia. 2021.

Paper: arxiv Supplementary: Link Homepage: DeepVecFont

Updates

2022-07-03

(Three figure pairs: input glyphs and the corresponding glyphs synthesized by DeepVecFont.)

Vector font interpolation

(Figures: interpolation results, shown rendered and as outlines.)

Random generation (New fonts)

(Figure: rendered glyphs of randomly generated new fonts.)

Installation

Requirement

Please use Anaconda to build the environment:

conda create -n dvf python=3.9
source activate dvf

Install PyTorch following the official instructions.

Install diffvg

We use diffvg to refine the generated vector glyphs in the testing phase. Please go to https://github.com/BachiLi/diffvg to see how to install it.

Important (updated 2021.10.19): you first need to replace the original diffvg/pydiffvg/save_svg.py with this one, and then install diffvg.

Data and Pretrained-model

Dataset

The Vector Font dataset

(This dataset is a subset of the dataset used in SVG-VAE, ICCV 2019.) There are two modes to access the dataset:

The Image Super-resolution dataset

This dataset is very large, so we suggest creating it yourself (see the details below about creating your own dataset).

Pretrained models

The Neural Rasterizer

Download link: image size 64x64, image size 128x128. Please download the dvf_neural_raster dir and put it under ./experiments/.

The Image Super-resolution model

Download link: image size 64x64 -> 256x256, image size 128x128 -> 256x256. Please download the image_sr dir and put it under ./experiments/.

The Main model

Download link: image size 64x64, image size 128x128. Please download the dvf_main_model dir and put it under ./experiments/.

Note that we recently switched from TensorFlow to PyTorch; we may update the models with better-performing versions.

Training and Testing

To train our main model, run

python main.py --mode train --experiment_name dvf --model_name main_model

The configurations can be found in options.py.
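The flags shown in this README can be sketched with argparse. This is only an illustration of the interface used above; the real options.py defines many more hyperparameters, and the defaults here are assumptions.

```python
import argparse

# Minimal sketch of the command-line interface shown in this README.
# The actual options.py in the repository defines many more options.
parser = argparse.ArgumentParser(description="DeepVecFont (interface sketch)")
parser.add_argument("--mode", choices=["train", "test"], default="train")
parser.add_argument("--experiment_name", default="dvf")
parser.add_argument("--model_name", default="main_model")

args = parser.parse_args(
    ["--mode", "train", "--experiment_name", "dvf", "--model_name", "main_model"]
)
print(args.mode, args.experiment_name, args.model_name)
```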

To test our main model, run

python test_sf.py --mode test --experiment_name dvf --model_name main_model --test_epoch 1200 --batch_size 1 --mix_temperature 0.0001 --gauss_temperature 0.01

This will output the synthesized fonts without refinements. Note that batch_size must be set to 1. The results will be written in ./experiments/dvf_main_model/results/.
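The mix_temperature and gauss_temperature flags control how deterministically the model samples from its mixture-density output: low temperatures sharpen the component choice and shrink the sampling noise. The sketch below shows generic temperature-scaled sampling from a 1-D Gaussian mixture; it illustrates the idea only and is not the repository's actual sampling code.

```python
import math
import random

def sample_gmm(weights, means, stds, mix_temp, gauss_temp, rng):
    """Temperature-scaled sampling from a 1-D Gaussian mixture.

    Low mix_temp sharpens the component choice toward the argmax weight;
    low gauss_temp shrinks the noise around the chosen component's mean.
    """
    # Re-sharpen mixture weights: w_i^(1/T), renormalized (in log space
    # for numerical stability).
    logits = [math.log(w) / mix_temp for w in weights]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pick a component, then sample with a scaled standard deviation.
    k = rng.choices(range(len(probs)), weights=probs)[0]
    return rng.gauss(means[k], stds[k] * math.sqrt(gauss_temp))

rng = random.Random(0)
# With near-zero temperatures the sample collapses toward the mean of
# the highest-weight component (here, 0.0).
x = sample_gmm([0.7, 0.3], [0.0, 5.0], [1.0, 1.0], 0.0001, 0.01, rng)
print(x)
```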

To refine the vector glyphs, run

python refinement_mp.py --experiment_name dvf --fontid 14 --candidate_nums 20 --num_processes 4

where fontid denotes the index of the testing font. The results will be written to ./experiments/dvf_main_model/results/0014/svgs_refined/. Set num_processes according to your GPU's compute capacity. Setting init_svgbbox_align2img to True can give better results when the initial SVG and the raster image are poorly aligned.

We have pretrained the neural rasterizer and image super-resolution model. If you want to train them yourself:

To train the neural rasterizer:

python train_nr.py --mode train --experiment_name dvf --model_name neural_raster

To train the image super-resolution model:

python train_sr.py --mode train --name image_sr

Customize your own dataset

Put the ttf/otf files in ./data_utils/font_ttfs/train and ./data_utils/font_ttfs/test, and organize them as 0000.ttf, 0001.ttf, 0002.ttf... The ttf/otf files in our dataset can be found in Google Drive.
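Renaming a folder of fonts into the zero-padded scheme above can be scripted. The helper below is hypothetical (not part of the repository); it sorts the ttf/otf files and renames them in place.

```python
import os
import tempfile

def normalize_font_names(font_dir):
    """Hypothetical helper: rename ttf/otf files in font_dir to the
    zero-padded scheme (0000.ttf, 0001.ttf, ...) expected by the
    data-preparation scripts. Other files are left untouched."""
    fonts = sorted(f for f in os.listdir(font_dir)
                   if f.lower().endswith((".ttf", ".otf")))
    for i, name in enumerate(fonts):
        ext = os.path.splitext(name)[1].lower()
        os.rename(os.path.join(font_dir, name),
                  os.path.join(font_dir, f"{i:04d}{ext}"))

# Demo on dummy files in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    for name in ("B.ttf", "A.otf", "notes.txt"):
        open(os.path.join(d, name), "w").close()
    normalize_font_names(d)
    result = sorted(os.listdir(d))
print(result)  # → ['0000.otf', '0001.ttf', 'notes.txt']
```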

For Python 3:

conda deactivate
apt install python3-fontforge

This works on Ubuntu 20.04.1; older versions may fail when importing fontforge.

To write the data to dirs (recommended):

python write_data_to_dirs.py --split train
python write_data_to_dirs.py --split test

To write the data to pkl:

python write_data_to_pkl.py --split train
python write_data_to_pkl.py --split test
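The pkl files written above can be inspected with Python's pickle module. The record layout below is purely hypothetical (the real field names and shapes may differ); the sketch only shows the round-trip.

```python
import os
import pickle
import tempfile

# Hypothetical record layout for one glyph; the actual pkl files produced
# by write_data_to_pkl.py may use different field names and shapes.
record = {
    "font_id": 0,
    "char": "A",
    "rendered_64": [[0.0] * 64 for _ in range(64)],  # 64x64 raster
    "seq": [(0, 0.0, 0.0), (1, 0.5, 0.5)],           # (cmd, x, y) draw ops
}

path = os.path.join(tempfile.mkdtemp(), "sample.pkl")
with open(path, "wb") as f:
    pickle.dump([record], f)
with open(path, "rb") as f:
    records = pickle.load(f)
print(len(records), records[0]["char"])  # → 1 A
```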

Note:

(1) If you use mean and stddev files calculated from your own data, you need to retrain the neural rasterizer. For English fonts, just use the mean and stddev files we provide.
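The mean and stddev files standardize the glyph coordinate values. A minimal sketch of computing per-dimension statistics and applying (x - mean) / stddev is shown below; the toy coordinates and two-dimensional layout are assumptions, not the repository's actual data format.

```python
import math

# Toy (x, y) coordinate samples; the real statistics are computed over
# all drawing commands in the training fonts.
coords = [(0.0, 10.0), (2.0, 14.0), (4.0, 18.0)]

n = len(coords)
mean = [sum(c[i] for c in coords) / n for i in range(2)]
std = [math.sqrt(sum((c[i] - mean[i]) ** 2 for c in coords) / n)
       for i in range(2)]

# Standardize each coordinate dimension: (value - mean) / stddev.
normed = [tuple((c[i] - mean[i]) / std[i] for i in range(2)) for c in coords]
print(mean)  # → [2.0, 14.0]
```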

Acknowledgment

Citation

If you use this code or find our work helpful, please consider citing our work:

@article{wang2021deepvecfont,
  author   = {Wang, Yizhi and Lian, Zhouhui},
  title    = {DeepVecFont: Synthesizing High-quality Vector Fonts via Dual-modality Learning},
  journal  = {ACM Transactions on Graphics},
  volume   = {40},
  number   = {6},
  numpages = {15},
  month    = dec,
  year     = {2021},
  doi      = {10.1145/3478513.3480488}
}