nipponjo / deepfillv2-pytorch

A PyTorch reimplementation of the paper Free-Form Image Inpainting with Gated Convolution (DeepFill v2) (https://arxiv.org/abs/1806.03589)

Pretrained models #22

Closed srayan00 closed 1 year ago

srayan00 commented 1 year ago

Which datasets were used to train the Places and CelebA-HQ models?

nipponjo commented 1 year ago

Hello, the paper cites:

[53] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.

for the Places dataset, and

[18] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.

for the CelebA-HQ dataset.

Therefore, I assume http://places2.csail.mit.edu/ and https://www.kaggle.com/datasets/lamsimon/celebahq were used. I also used these datasets for fine-tuning the weights.