Image Inpainting With Local and Global Refinement (Paper)
To train the model, run:
python train.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --netD snpatch --gan_mode lsgan --input_nc 4 --no_dropout --direction AtoB --display_id 0 --gpu_ids 0
To test the model and save the results, run:
python test_and_save.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --gan_mode nogan --input_nc 4 --no_dropout --direction AtoB --gpu_ids 0
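The `--input_nc 4` flag indicates that the generator receives a 4-channel input. A minimal sketch of how such an input could be assembled, assuming the common convention of concatenating the masked RGB image with the binary mask along the channel axis (the exact layout used by the repo is an assumption here):

```python
import numpy as np

def make_generator_input(image, mask):
    """Build a 4-channel generator input as implied by --input_nc 4.

    image: (3, H, W) float array, the RGB image.
    mask:  (1, H, W) float array, 1 marking the hole region.
    The channel layout (masked image + mask) is an assumption,
    not taken from the paper or the repository code.
    """
    masked = image * (1.0 - mask)              # zero out the hole pixels
    return np.concatenate([masked, mask], 0)   # (4, H, W)

x = make_generator_input(np.ones((3, 8, 8), np.float32),
                         np.zeros((1, 8, 8), np.float32))
```

With an empty mask, as above, the first three channels are the unchanged image and the fourth channel is all zeros.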
We use the Places2, CelebA-HQ, and Paris StreetView datasets. Liu et al. provide 12,000 irregular masks, which we use as the testing masks.
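The irregular masks are typically distributed as grayscale images, so they need to be binarized before use. A minimal sketch, assuming the usual convention that bright pixels mark the hole region (the threshold and polarity are assumptions, not taken from the mask release):

```python
import numpy as np

def binarize_mask(gray):
    """Convert a grayscale mask array (values 0-255) to a binary {0, 1}
    hole mask. Thresholding at 127 with 1 marking the missing region is
    an assumed convention, not specified by the dataset."""
    return (np.asarray(gray) > 127).astype(np.float32)

m = binarize_mask(np.array([[0, 255], [200, 50]]))
# m == [[0., 1.], [1., 0.]]
```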
You can download the pre-trained models for CelebA-HQ and Places2_20cat. Note that our pre-trained Places2 model uses only 20 categories, as described in our paper. Put the downloaded models into ./checkpoints/celebahq_LGNet/.
If you find this work useful for your research, please cite our paper:
@ARTICLE{9730792,
author={Quan, Weize and Zhang, Ruisong and Zhang, Yong and Li, Zhifeng and Wang, Jue and Yan, Dong-Ming},
journal={IEEE Transactions on Image Processing},
title={Image Inpainting With Local and Global Refinement},
year={2022},
volume={31},
pages={2405--2420}
}
This code borrows from the pytorch-CycleGAN-and-pix2pix and RFR repositories.