TL;DR: We propose DEADiff, a generic method for synthesizing novel images that embody the style of a given reference image while adhering to text prompts.
Stylized text-to-image results. Resolution: 512 x 512. (Compressed)
conda create -n deadiff python=3.9.2
conda activate deadiff
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install git+https://github.com/salesforce/LAVIS.git@20230801-blip-diffusion-edit
pip install -r requirements.txt
pip install -e .
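After installation, a quick sanity check can confirm that the expected PyTorch build is in place and that a CUDA device is visible. This is a minimal sketch using only standard PyTorch calls:

```python
# check_env.py -- quick sanity check for the installed environment
import torch

print(f"PyTorch version: {torch.__version__}")          # expected: 2.0.0
print(f"CUDA available:  {torch.cuda.is_available()}")  # should be True on a GPU machine
if torch.cuda.is_available():
    print(f"GPU device:     {torch.cuda.get_device_name(0)}")
```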
1) Download the pretrained model from Hugging Face and place it under ./pretrained/.
2) Run the following command in a terminal:
python3 scripts/app.py
The Gradio app lets you transfer the style of a reference image onto new generations guided by your text prompt. Try it out to explore the available options.
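For readers curious how such a demo is typically wired up, below is a minimal sketch of a Gradio interface around a stylization function. The `run_deadiff` body is a hypothetical placeholder; the repository's actual implementation lives in scripts/app.py.

```python
# Minimal sketch of a Gradio app wrapping a stylization function.
# `run_deadiff` is a hypothetical placeholder -- see scripts/app.py for the real app.
import gradio as gr
from PIL import Image

def run_deadiff(reference: Image.Image, prompt: str) -> Image.Image:
    # Placeholder: the actual model call (loading the ./pretrained/ checkpoint,
    # encoding the reference style, denoising with the text prompt) is in scripts/app.py.
    return reference

demo = gr.Interface(
    fn=run_deadiff,
    inputs=[gr.Image(type="pil", label="Reference style image"),
            gr.Textbox(label="Text prompt")],
    outputs=gr.Image(type="pil", label="Stylized result"),
    title="DEADiff style transfer (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```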
Prompt: "A curly-haired boy"
Prompt: "A robot"
Prompt: "A motorcycle"
We developed this repository for RESEARCH purposes, so it may only be used for personal, research, and other non-commercial purposes.
@article{qi2024deadiff,
title={DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations},
author={Qi, Tianhao and Fang, Shancheng and Wu, Yanze and Xie, Hongtao and Liu, Jiawei and Chen, Lang and He, Qian and Zhang, Yongdong},
journal={arXiv preprint arXiv:2403.06951},
year={2024}
}
If you have any comments or questions, feel free to contact qth@mail.ustc.edu.cn.