Lintao Zhang, Xiangcheng Du, LeoWu TomyEnrique, Yiqun Wang, Yingbin Zheng, Cheng Jin
Fudan University, Videt Technology
Clone our repository
git clone https://github.com/linghuyuhangyuan/M2S.git
cd M2S
Create a conda environment
conda create -n M2S python=3.8
conda activate M2S
pip install -r requirements.txt
The inputs for image inpainting consist of original images and binary masks.
Image
We conduct experiments on two datasets, CelebA-HQ and ImageNet, at 256×256 resolution.
Mask
We use the mask test sets from RePaint, which include 6 types: Wide, Narrow, Half, Expand, Alternating Lines, and Super-Resolve 2×. You can download these datasets from their provided Google Drive link.
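A binary mask simply marks which pixels are known and which are to be inpainted. The exact value convention depends on the RePaint data; assuming here that 1 marks known pixels and 0 marks the missing region, a minimal NumPy sketch of pairing an image with a Half mask looks like this (the array contents are toy data, not the actual dataset):

```python
import numpy as np

# Toy 256x256 RGB "image" standing in for a dataset sample.
image = np.random.rand(256, 256, 3)

# Half mask (assumed convention: 1 = known pixel, 0 = region to inpaint).
mask = np.zeros((256, 256, 1))
mask[:, :128] = 1.0  # left half is known, right half will be inpainted

# The generative model only observes the known pixels:
masked_input = image * mask
```

The same pairing applies to all six mask types; only the pattern of zeros changes.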
We employ a pretrained Denoising Diffusion Probabilistic Model (DDPM) as the generative prior. To speed up inference, we use a lightweight diffusion model from P2-weighting in place of the large-parameter DDPM from guided-diffusion.
Training code can be found in the P2-weighting repository. Our trained models, at 64×64 resolution for the coarse stage and 256×256 resolution for the refinement stage, are available via this Google Drive link.
Download the pretrained models from Google Drive and place them in the models directory.
First, set the PYTHONPATH variable to point to the root of the repository.
export PYTHONPATH=$PYTHONPATH:$(pwd)
Run the demo.
sh run.sh
The visualized outputs will be generated in results/celebahq/thick.
The quantitative metric results are written to results/celebahq/thick/metrics_log.txt.
If you want to try other images and different mask types, modify --base_samples and --mask_path in run.sh.
Note: for the special mask types Alternating Lines and Super-Resolve 2×, make sure to set --special_mask True in the run.sh script.
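The two special mask types are regular grid patterns rather than free-form holes, which is why they need separate handling. As an illustration only (the assumed convention is again 1 = known pixel, 0 = to inpaint; the actual RePaint masks ship as image files), they can be sketched in NumPy like this:

```python
import numpy as np

size = 256

# Alternating Lines: every other row is known.
alt_lines = np.zeros((size, size))
alt_lines[::2, :] = 1.0

# Super-Resolve 2x: one known pixel per 2x2 block, i.e. the
# low-resolution grid embedded in the high-resolution image.
superres = np.zeros((size, size))
superres[::2, ::2] = 1.0
```

Both patterns leave the known pixels finely interleaved with the missing ones, unlike the contiguous holes of the Wide, Narrow, Half, and Expand masks.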
This code is based on RePaint, P2-weighting, and guided-diffusion. Thanks for their excellent work.
If you have any questions or suggestions, please contact ltzhang21@m.fudan.edu.cn.