knazeri / edge-connect

EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019 https://arxiv.org/abs/1901.00212
http://openaccess.thecvf.com/content_ICCVW_2019/html/AIM/Nazeri_EdgeConnect_Structure_Guided_Image_Inpainting_using_Edge_Prediction_ICCVW_2019_paper.html

Hello, after reading your paper, may I ask why you chose 178 as the crop size for the CelebA dataset? #178

Open FavorMylikes opened 2 years ago

FavorMylikes commented 2 years ago

Here is what the paper describes:

> With CelebA, we cropped the center 178 x 178 of the images, then resized them to 256 x 256 using bilinear interpolation. For Paris StreetView, since the images in the dataset are elongated (936 x 537), we separate each image into three: 1) left 537 x 537, 2) middle 537 x 537, 3) right 537 x 537 of the image. These images are scaled down to 256 x 256 for our model, totaling 44,700 images.
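
For reference, this is a minimal sketch of that preprocessing as I understand it, written with Pillow. The function names and the PIL-based approach are my own assumption, not the repo's actual data loader:

```python
from PIL import Image


def preprocess_celeba(path, crop=178, size=256):
    """Center-crop to crop x crop, then resize to size x size with bilinear interpolation."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    left = (w - crop) // 2
    top = (h - crop) // 2
    img = img.crop((left, top, left + crop, top + crop))
    return img.resize((size, size), Image.BILINEAR)


def preprocess_paris_streetview(path, size=256):
    """Split an elongated image (e.g. 936 x 537) into left / middle / right
    height x height squares, then scale each down to size x size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size                              # expected 936 x 537
    side = h                                     # square side = image height
    offsets = [0, (w - side) // 2, w - side]     # left, centered, right crops
    squares = [img.crop((x, 0, x + side, side)) for x in offsets]
    return [s.resize((size, size), Image.BILINEAR) for s in squares]
```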

And after a little testing, I feel this number has a big impact on the results.

So maybe you have some experience with this choice.

Could you share it? I would really appreciate it.