OPPO-Mente-Lab / attention-mask-control

Code for paper: "Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models"
MIT License

[Project Page] [Paper]

Requirements

A suitable conda environment named AMC can be created and activated with:

conda env create -f environment.yaml
conda activate AMC

Data Preparation

First, download the COCO dataset from here. We use COCO2014 in the paper. Then, process the data with this script:

python coco_preprocess.py \
    --coco_image_path /YOUR/COCO/PATH/train2014 \
    --coco_caption_file /YOUR/COCO/PATH/annotations/captions_train2014.json \
    --coco_instance_file /YOUR/COCO/PATH/annotations/instances_train2014.json \
    --output_dir /YOUR/DATA/PATH
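
For reference, the sketch below shows the kind of caption/box pairing such a preprocessing step typically produces, using pycocotools. The output format and field names here are illustrative assumptions; what coco_preprocess.py actually writes may differ.

# A minimal sketch of COCO caption/box pairing; coco_preprocess.py may differ.
import json
from pycocotools.coco import COCO

captions = COCO("/YOUR/COCO/PATH/annotations/captions_train2014.json")
instances = COCO("/YOUR/COCO/PATH/annotations/instances_train2014.json")

records = []
for img_id in instances.getImgIds():
    img_info = instances.loadImgs(img_id)[0]
    # Every caption written for this image.
    caps = [a["caption"] for a in captions.loadAnns(captions.getAnnIds(imgIds=img_id))]
    # Every annotated object: category name plus [x, y, w, h] box.
    anns = instances.loadAnns(instances.getAnnIds(imgIds=img_id))
    objects = [
        {"category": instances.loadCats(a["category_id"])[0]["name"], "bbox": a["bbox"]}
        for a in anns
    ]
    records.append({"file_name": img_info["file_name"], "captions": caps, "objects": objects})

with open("/YOUR/DATA/PATH/train_pairs.json", "w") as f:
    json.dump(records, f)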

Training

Before training, update the configuration in train_boxnet.sh.

You can then train the BoxNet with this script:

sh train_boxnet.sh $NODE_NUM $CURRENT_NODE_RANK $GPUS_PER_NODE
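
The three arguments are the number of nodes, the rank of the current node, and the number of GPUs per node; for example, sh train_boxnet.sh 1 0 8 would launch single-node training on 8 GPUs.

Since the acknowledgements credit the DETR codebase, a plausible mental model of BoxNet is a DETR-style predictor: learned object queries cross-attend to the prompt's text embeddings, and each query regresses one box. The sketch below is a conceptual illustration under that assumption only; it is not the repository's actual model, and all module names and sizes are invented.

# Conceptual DETR-style box predictor (assumption, not the repo's BoxNet).
import torch
import torch.nn as nn

class BoxNetSketch(nn.Module):
    def __init__(self, d_model=768, num_queries=30, num_layers=6):
        super().__init__()
        # Learned object queries, one per candidate box (DETR-style).
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # Predict one (cx, cy, w, h) box per query, normalized to [0, 1].
        self.box_head = nn.Linear(d_model, 4)

    def forward(self, text_embeds):
        # text_embeds: (batch, seq_len, d_model) text-encoder hidden states.
        q = self.queries.unsqueeze(0).expand(text_embeds.size(0), -1, -1)
        hs = self.decoder(q, text_embeds)   # queries cross-attend to the prompt
        return self.box_head(hs).sigmoid()  # (batch, num_queries, 4)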

Text-to-Image Synthesis

With a trained BoxNet, you can run text-to-image synthesis with:

python test_pipeline_onestage.py \
    --stable_model_path /stable-diffusion-v1-5/checkpoint \
    --boxnet_model_path /TRAINED/BOXNET/CKPT \
    --output_dir /YOUR/SAVE/DIR

All the test prompts are stored in test_prompts.json.
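
For intuition, the paper's core mechanism constrains each object token's cross-attention map to the box predicted for that object. The sketch below illustrates one way such a constraint can be applied inside a diffusion U-Net's cross-attention; the shapes, function names, and the exact masking/renormalization rule are assumptions for illustration, not the repository's implementation.

# Hedged sketch of box-constrained cross-attention (illustrative only).
import torch

def box_to_mask(box, h, w):
    # box: normalized (cx, cy, bw, bh); returns a binary (h, w) mask.
    cx, cy, bw, bh = box
    x0, x1 = int((cx - bw / 2) * w), int((cx + bw / 2) * w)
    y0, y1 = int((cy - bh / 2) * h), int((cy + bh / 2) * h)
    mask = torch.zeros(h, w)
    mask[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)] = 1.0
    return mask

def apply_box_control(attn, token_boxes, h, w):
    # attn: (heads, h*w, seq_len) cross-attention probabilities.
    # token_boxes: {token_index: normalized box} for object tokens.
    for tok, box in token_boxes.items():
        mask = box_to_mask(box, h, w).flatten()  # (h*w,)
        attn[:, :, tok] = attn[:, :, tok] * mask # zero attention outside the box
    # Renormalize so each spatial location still sums to 1 over tokens.
    return attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)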

TODOs

Acknowledgements

This implementation is based on the diffusers library, the Fengshenbang-LM codebase, and the DETR codebase.