
LGNet

Image Inpainting With Local and Global Refinement (Paper)

Prerequisites

Run

  1. Train the model:

    python train.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --netD snpatch --gan_mode lsgan --input_nc 4 --no_dropout --direction AtoB --display_id 0 --gpu_ids 0

  2. Test the model:

    python test_and_save.py --dataroot no_use --name celebahq_LGNet --model pix2pixglg --netG1 unet_256 --netG2 resnet_4blocks --netG3 unet256 --gan_mode nogan --input_nc 4 --no_dropout --direction AtoB --gpu_ids 0
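The `--input_nc 4` flag indicates the first generator consumes a 4-channel input. A minimal NumPy sketch of one common way to assemble such an input, concatenating a masked RGB image with its binary mask (an illustration only, not the repository's actual data pipeline):

```python
import numpy as np

def make_generator_input(image, mask):
    """Assemble a 4-channel inpainting input (hypothetical pipeline).

    image: (H, W, 3) float array in [0, 1]
    mask:  (H, W) binary array, 1 = known pixel, 0 = hole
    """
    masked = image * mask[..., None]  # zero out the hole region
    return np.concatenate([masked, mask[..., None]], axis=-1)  # (H, W, 4)

img = np.random.rand(256, 256, 3)
mask = np.ones((256, 256))
mask[64:192, 64:192] = 0  # square hole in the center
x = make_generator_input(img, mask)
print(x.shape)  # (256, 256, 4)
```

The fourth channel lets the network distinguish true black pixels from hole pixels.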

Download Datasets

We use the Places2, CelebA-HQ, and Paris Street-View datasets. Liu et al. provide 12k irregular masks as the testing masks.
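Irregular test masks such as Liu et al.'s are commonly grouped by their hole-to-image area ratio. A small sketch of computing that ratio and assigning a mask to a 10%-wide interval (the bucketing step is an illustrative assumption, not part of this repository):

```python
import numpy as np

def hole_ratio(mask):
    """Fraction of pixels belonging to the hole (mask == 0)."""
    return 1.0 - mask.mean()

def ratio_bucket(mask, step=0.1):
    """Assign a mask to a 10%-wide hole-ratio interval, e.g. (0.2, 0.3]."""
    lo = int(hole_ratio(mask) / step) * step
    return (round(lo, 1), round(lo + step, 1))

m = np.ones((256, 256))
m[:64, :] = 0  # top quarter of the image is the hole
print(hole_ratio(m))   # 0.25
print(ratio_bucket(m)) # (0.2, 0.3)
```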

Pretrained Models

You can download the pre-trained models from CelebA-HQ and Places2_20cat. Note that our pre-trained model on Places2 uses only 20 categories, as described in our paper. After downloading, put the models into `./checkpoints/celebahq_LGNet/`.
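For example, assuming the downloaded checkpoints are `.pth` files sitting in the current directory (the exact filenames depend on the download):

```shell
# Create the checkpoint directory expected by the test command above
mkdir -p ./checkpoints/celebahq_LGNet
# Move the downloaded weight files into it
mv ./*.pth ./checkpoints/celebahq_LGNet/
```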

Citation

If you find this work useful for your research, please cite:

@ARTICLE{9730792,
  author={Quan, Weize and Zhang, Ruisong and Zhang, Yong and Li, Zhifeng and Wang, Jue and Yan, Dong-Ming},
  journal={IEEE Transactions on Image Processing}, 
  title={Image Inpainting With Local and Global Refinement}, 
  year={2022},
  volume={31},
  pages={2405-2420}
}

Acknowledgments

This code borrows from pytorch-CycleGAN-and-pix2pix and RFR.