Lingjun Zhang, [Xinyuan Chen](https://scholar.google.com/citations?user=3fWSC8YAAAAJ&hl=zh-CN), Yaohui Wang, Yue Lu and Yu Qiao (* indicates equal contribution)
This is the official PyTorch implementation of the AAAI 2024 paper "Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model".
```shell
git clone https://github.com/ecnuljzhang/brush-your-text.git
cd brush-your-text
conda create -n difftext python=3.8.5
conda activate difftext
pip install -r requirements.txt
```
The pre-trained checkpoints can be downloaded from Hugging Face. Please download the "runwayml/stable-diffusion-v1-5" and "lllyasviel/sd-controlnet-canny" models and place them in the "checkpoints" folder. The file structure should be as follows:
```
Diff-text
|--- checkpoints
|    |--- stable-diffusion-v1-5/
|    |--- sd-controlnet-canny/
|--- ...
```
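Before running inference, it can save time to verify that both checkpoint folders are actually in place. A minimal sketch (the `missing_checkpoints` helper is ours for illustration, not part of the repo):

```python
from pathlib import Path

# The two model folders the README asks you to place under "checkpoints".
REQUIRED = ["stable-diffusion-v1-5", "sd-controlnet-canny"]

def missing_checkpoints(root="checkpoints"):
    """Return the names of required model folders that are absent."""
    root = Path(root)
    return [name for name in REQUIRED if not (root / name).is_dir()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing model folders:", ", ".join(missing))
    else:
        print("All checkpoints found.")
```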
You may prepare the sketch image yourself (as shown in the first row of the image below) along with the corresponding bounding box text file (please use the ICDAR2013 or ICDAR2015 dataset annotation format), or utilize our sketch image synthesis code. The synthesis code can be executed with the following command:
```shell
python controlnet_util/synthtext.py
```
Before running the synthesis code, please make the necessary modifications to the "controlnet_util/Textgen/text_cfg.yaml" file, following the instructions provided in the file comments.
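For reference, each line of an ICDAR2015-style annotation file gives the four corner points of a text quadrilateral followed by the transcription, comma-separated (`x1,y1,x2,y2,x3,y3,x4,y4,transcription`). A minimal parser sketch, assuming that format (the helper name is ours, not part of the repo):

```python
def parse_icdar2015_line(line):
    """Parse one ICDAR2015-style annotation line into corner points and text.

    Expected format: x1,y1,x2,y2,x3,y3,x4,y4,transcription
    """
    parts = line.strip().split(",")
    coords = list(map(int, parts[:8]))
    # Rejoin the tail in case the transcription itself contains commas.
    text = ",".join(parts[8:])
    points = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    return points, text
```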
Create a text file (with the ".txt" extension) containing the prompts.
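As an illustration only (the exact prompt style is up to you), such a file might contain scene descriptions like:

```
a neon sign on a brick wall at night
a wooden signboard in front of a cafe
```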
Remember to make the necessary modifications to "configs/control_gen.yaml", following the instructions provided in the file comments. Then run:
```shell
python predict.py
```
If you find this code useful in your research, please consider citing:
```bibtex
@article{zhang2023brush,
  title={Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model},
  author={Zhang, Lingjun and Chen, Xinyuan and Wang, Yaohui and Lu, Yue and Qiao, Yu},
  year={2023},
  eprint={2312.12232},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```