This repository contains the code for our NeurIPS'23 paper "DiffUTE: Universal Text Editing Diffusion Model". Unfortunately, we cannot release the pre-trained models due to Ant Group's licensing policy. However, you can easily reproduce our method with diffusers and transformers.
Our codebase is built on top of diffusers. Many thanks to its contributors.
Prepare datasets

Due to data sensitivity, our dataset cannot be released publicly at this time. You can reproduce our method on your own data; any images that contain text can be used for training. Because our data is stored on Aliyun OSS, we read it through pcache; you can replace the data-loading code with whatever fits your own storage, e.g. the local-disk sketch below.
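A minimal sketch of loading training images from a local folder, in place of the OSS/pcache reader. The folder layout, resolution, and transforms are illustrative assumptions; the full DiffUTE pipeline additionally needs OCR annotations, masks, and rendered glyph images, which are omitted here.

```python
# Hypothetical local-disk dataset for images that contain text.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class TextImageDataset(Dataset):
    """Loads images containing text for VAE / DiffUTE training."""

    def __init__(self, image_dir: str, resolution: int = 512):
        self.paths = sorted(
            p for p in Path(image_dir).rglob("*")
            if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
        )
        self.transform = transforms.Compose([
            transforms.Resize(resolution),
            transforms.CenterCrop(resolution),
            transforms.ToTensor(),
            transforms.Normalize([0.5], [0.5]),  # scale pixels to [-1, 1] for the VAE
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        image = Image.open(self.paths[idx]).convert("RGB")
        return {"pixel_values": self.transform(image)}
```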
Train VAE
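A minimal sketch of fine-tuning the Stable Diffusion VAE on text images with a pixel reconstruction loss, assuming a dataset like the `TextImageDataset` sketched above and the `runwayml/stable-diffusion-v1-5` weights. The exact losses, schedules, and hyper-parameters used in the paper may differ.

```python
# Hedged sketch: fine-tune AutoencoderKL so it reconstructs small text more faithfully.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to(device)
vae.train()

dataset = TextImageDataset("path/to/your/images")  # hypothetical local path
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)
optimizer = torch.optim.AdamW(vae.parameters(), lr=1e-5)

for epoch in range(10):
    for batch in loader:
        pixel_values = batch["pixel_values"].to(device)
        latents = vae.encode(pixel_values).latent_dist.sample()
        reconstruction = vae.decode(latents).sample
        loss = F.mse_loss(reconstruction, pixel_values)  # pixel reconstruction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```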
Train DiffUTE
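A minimal sketch of one DiffUTE-style training step under these assumptions: a Stable Diffusion 1.5 UNet and VAE, a frozen TrOCR encoder providing the glyph condition in place of the text encoder, and the standard noise-prediction loss. The full model also conditions on the masked-image latents and mask (inpainting-style input channels), which is omitted here for brevity; this is not the exact released training script.

```python
# Hedged sketch of a single optimization step; model IDs and shapes are assumptions.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"

vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)  # ideally your fine-tuned VAE
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-printed")
ocr_encoder = VisionEncoderDecoderModel.from_pretrained(
    "microsoft/trocr-base-printed"
).encoder.to(device)

vae.requires_grad_(False)
ocr_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)


def training_step(pixel_values, glyph_images):
    """pixel_values: target images in [-1, 1], shape (B, 3, H, W).
    glyph_images: rendered target-text images (list of B PIL images)."""
    # 1. Encode target images into latents with the frozen VAE.
    latents = vae.encode(pixel_values.to(device)).latent_dist.sample()
    latents = latents * vae.config.scaling_factor

    # 2. Sample noise and a random timestep, then add noise to the latents.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=device
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # 3. Encode the rendered glyph image with the frozen OCR encoder; its hidden
    #    states replace the usual text-encoder embeddings as the cross-attention condition.
    glyph_inputs = processor(images=glyph_images, return_tensors="pt").pixel_values.to(device)
    glyph_embeds = ocr_encoder(glyph_inputs).last_hidden_state  # (B, seq_len, 768)

    # 4. Predict the noise and take a gradient step on the MSE loss.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=glyph_embeds).sample
    loss = F.mse_loss(noise_pred, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```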
If you use DiffUTE in your research or wish to refer to the baseline results published here, please use the following BibTeX entry.
@inproceedings{DiffUTE,
title={DiffUTE: Universal Text Editing Diffusion Model},
author={Chen, Haoxing and Xu, Zhuoer and Gu, Zhangxuan and Lan, Jun and Zheng, Xing and Li, Yaohui and Meng, Changhua and Zhu, Huijia and Wang, Weiqiang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS)},
year={2023}
}
Please feel free to contact us if you have any problems.
Email: hx.chen@hotmail.com or zhuoerxu.xzr@antgroup.com