This repository is the official code for the paper "Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting" by Haipeng Liu (hpliu_hfut@hotmail.com), Yang Wang (corresponding author: yangwang@hfut.edu.cn), Biao Qian, Meng Wang, and Yong Rui. CVPR 2024, Seattle, USA.
## Introduction
In this paper, we propose a novel structure-guided diffusion model for image inpainting (namely StrDiffusion), which reformulates the conventional texture denoising process under the guidance of the structure to derive a simplified denoising objective (Eq. 11) for inpainting, while revealing: 1) the semantically sparse structure is beneficial for tackling the semantic discrepancy in the early stage, while the dense texture generates reasonable semantics in the late stage; 2) the semantics from the unmasked regions essentially offer time-dependent guidance for the texture denoising process, benefiting from the time-dependent sparsity of the structure semantics. For the denoising process, a structure-guided neural network is trained to estimate the simplified denoising objective by exploiting the consistency of the denoised structure between masked and unmasked regions. Besides, we devise an adaptive resampling strategy as a formal criterion for whether the structure is competent to guide the texture denoising process, while regulating their semantic correlation.
Figure 1. Illustration of the proposed StrDiffusion pipeline.
Figure 2. Illustration of the adaptive resampling strategy.
In summary, our StrDiffusion reveals that:
- the semantically sparse structure is beneficial for tackling the semantic discrepancy in the early stage, while the dense texture generates reasonable semantics in the late stage;
- the semantics from the unmasked regions essentially offer time-dependent guidance for the texture denoising process, owing to the time-dependent sparsity of the structure semantics.
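To make the pipeline above concrete, here is a minimal, purely conceptual sketch of how structure-guided texture denoising with adaptive resampling could be organized at test time. The function and argument names (`structure_step`, `texture_step`, `competence_score`, etc.) are illustrative placeholders, not the interfaces used in this repository; see the paper (and Figures 1-2) for the actual formulation.

```python
def structure_guided_sampling(texture_step, structure_step, competence_score,
                              y_t, s_t, mask, num_steps, threshold=0.5,
                              max_retries=5):
    """Conceptual sketch only (hypothetical interfaces, not the repo's API).

    The structure is denoised alongside the texture and supplies time-dependent
    guidance; a discriminator-style score decides whether the current structure
    is competent to guide the texture, otherwise the timestep is resampled.
    """
    for t in reversed(range(1, num_steps + 1)):
        for _ in range(max_retries):
            s_next = structure_step(s_t, t, mask)        # denoise the sparse structure
            y_next = texture_step(y_t, s_next, t, mask)  # texture step guided by the structure
            if competence_score(y_next, s_next) >= threshold:
                break                                    # structure deemed competent: accept
            # otherwise redo this timestep (adaptive resampling)
        y_t, s_t = y_next, s_next
    return y_t
```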
## Dependencies
`pip install -r requirements.txt`
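Before training, it can be worth confirming that PyTorch sees your GPU (this assumes PyTorch is among the pinned requirements; adjust to your environment):

```python
# Quick post-installation sanity check (assumes PyTorch is installed via requirements.txt).
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```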
## Training
### Structure denoising network
1. Dataset preparation: download the mask and image datasets, then go to the `StrDiffusion/train/structure` directory and modify the dataset paths in the option file `/config/inpainting/options/train/ir-sde.yml`.
2. Run the following command:
   `python3 ./train/structure/config/inpainting/train.py`
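If you prefer to set the dataset paths from a script rather than editing the option file by hand, a sketch along the following lines works for any YAML option file. The key names (`datasets`, `train`, `dataroot_*`) are assumptions for illustration only; check the actual `ir-sde.yml` for the real keys.

```python
# Minimal sketch: update dataset paths in ir-sde.yml programmatically.
# The nested key names below are hypothetical placeholders.
import yaml

cfg_path = "./train/structure/config/inpainting/options/train/ir-sde.yml"

with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

cfg["datasets"]["train"]["dataroot_GT"] = "/path/to/images"    # hypothetical key
cfg["datasets"]["train"]["dataroot_mask"] = "/path/to/masks"   # hypothetical key

with open(cfg_path, "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```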
### Texture denoising network
1. Dataset preparation: download the mask and image datasets, then go to the `StrDiffusion/train/texture` directory and modify the dataset paths in the option file `/config/inpainting/options/train/ir-sde.yml`.
2. Run the following command:
   `python3 ./train/texture/config/inpainting/train.py`
### Discriminator network
1. Dataset preparation: download the mask and image datasets, then go to the `StrDiffusion/train/discriminator` directory and modify the dataset paths in the option file `/config/inpainting/options/train/ir-sde.yml`.
2. Run the following command:
   `python3 ./train/discriminator/config/inpainting/train.py`
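If you want to run the three training stages back to back in the order listed above (structure, texture, discriminator), a small launcher such as the following is one option. It simply shells out to the same commands shown above, so each `ir-sde.yml` must already be configured.

```python
# Run the three training stages sequentially, in the order the README lists them.
import subprocess
import sys

STAGES = [
    "./train/structure/config/inpainting/train.py",
    "./train/texture/config/inpainting/train.py",
    "./train/discriminator/config/inpainting/train.py",
]

for script in STAGES:
    print(f"=== Running {script} ===")
    subprocess.run([sys.executable, script], check=True)
```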
## Testing
1. Dataset preparation: download the mask and image datasets, then go to the `StrDiffusion/test/texture` directory and modify the dataset paths in the option file `/config/inpainting/options/test/ir-sde.yml`.
2. Pre-trained models: download the pre-trained models (Places2, T=400; PSV, T=100), then go to the `StrDiffusion/test/texture` directory and modify the model paths in the option file `/config/inpainting/options/test/ir-sde.yml`.
3. For a different T, set the corresponding hyperparameters of the adaptive resampling strategy here.
4. Run the following command:
   `python3 ./test/texture/config/inpainting/test.py`
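To score the inpainted outputs against the ground truth independently of the repository's own evaluation, a standard PSNR/SSIM sweep with scikit-image looks like the sketch below. The directory layout is hypothetical; point it at wherever `test.py` writes its results.

```python
# Repo-independent PSNR/SSIM evaluation of inpainted results (scikit-image).
from pathlib import Path

import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

result_dir = Path("./results/inpainted")    # hypothetical output folder
gt_dir = Path("./results/ground_truth")     # hypothetical ground-truth folder

psnrs, ssims = [], []
for out_path in sorted(result_dir.glob("*.png")):
    gt = imread(gt_dir / out_path.name)
    out = imread(out_path)
    psnrs.append(peak_signal_noise_ratio(gt, out, data_range=255))
    ssims.append(structural_similarity(gt, out, channel_axis=-1, data_range=255))

print(f"PSNR: {np.mean(psnrs):.2f} dB  SSIM: {np.mean(ssims):.4f}")
```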
## Citation
If any part of our paper or this repository is helpful to your work, please cite:
@InProceedings{Liu_2024_CVPR,
    author    = {Liu, Haipeng and Wang, Yang and Qian, Biao and Wang, Meng and Rui, Yong},
    title     = {Structure Matters: Tackling the Semantic Discrepancy in Diffusion Models for Image Inpainting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8038-8047}
}
## Acknowledgements
This implementation is based on / inspired by: