
Diffusion-Based Scene Graph to Image Generation with Masked Contrastive Pre-Training

Official implementation for "Diffusion-Based Scene Graph to Image Generation with Masked Contrastive Pre-Training" (https://arxiv.org/abs/2211.11138).

Overview of the Proposed SGDiff

(Figure: overview of the proposed SGDiff framework)

Environment

git clone https://github.com/YangLing0818/SGDiff.git
cd SGDiff

conda env create -f sgdiff.yaml
conda activate sgdiff
mkdir pretrained
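
After creating the environment, an optional sanity check verifies that PyTorch and CUDA are available (the exact versions are determined by sgdiff.yaml):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"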

Data and Model Preparation

Instructions for data pre-processing can be found here.

Our masked contrastive pre-trained models on SG-image pairs for the COCO and VG datasets are provided here; please download them and place them in the 'pretrained' directory.

The pretrained VQ-VAE used to encode images into the latent space can be obtained from https://ommer-lab.com/files/latent-diffusion/vq-f8.zip.
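
For example, the archive can be fetched and unpacked into the 'pretrained' directory (a minimal sketch; the target sub-directory name 'vq-f8' is our own choice, adjust it to match your config):

wget https://ommer-lab.com/files/latent-diffusion/vq-f8.zip -P pretrained/
unzip pretrained/vq-f8.zip -d pretrained/vq-f8/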

Masked Contrastive Pre-Training

Instructions for SG-image pre-training can be found in the folder "sg_image_pretraining/".

Diffusion Training

Kindly note that one should not skip the training stage and test directly. For a single GPU, one can use:

python trainer.py --base CONFIG_PATH -t --gpus 0,
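
Multi-GPU training should work analogously, assuming trainer.py follows the usual PyTorch Lightning / latent-diffusion command-line interface; the device list below is illustrative (note the trailing comma):

# hypothetical multi-GPU run, assuming a Lightning-style --gpus device list
python trainer.py --base CONFIG_PATH -t --gpus 0,1,2,3,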

NOT OFFICIAL: Alternatively, if you do not want to train the model from scratch, you can download trained weights from the following link: sgdiff_epoch_335.ckpt

This checkpoint was trained for 335 epochs and achieves FID = 23.54 and IS = 18.02 on the Visual Genome test set.

Sampling

python testset_ddim_sampler.py
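
FID between real and generated images can then be computed with an external tool such as pytorch-fid (not part of this repo; the directory names below are placeholders for the real and generated image folders):

python -m pytorch_fid path/to/real_images path/to/generated_images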

Citation

If you find our code useful, please cite our paper:

@article{yang2022diffusionsg,
  title={Diffusion-based scene graph to image generation with masked contrastive pre-training},
  author={Yang, Ling and Huang, Zhilin and Song, Yang and Hong, Shenda and Li, Guohao and Zhang, Wentao and Cui, Bin and Ghanem, Bernard and Yang, Ming-Hsuan},
  journal={arXiv preprint arXiv:2211.11138},
  year={2022}
}