knazeri / edge-connect

EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCVW 2019 https://arxiv.org/abs/1901.00212
http://openaccess.thecvf.com/content_ICCVW_2019/html/AIM/Nazeri_EdgeConnect_Structure_Guided_Image_Inpainting_using_Edge_Prediction_ICCVW_2019_paper.html

How to get the training set on Places2? #50

Open ljjcoder opened 5 years ago

ljjcoder commented 5 years ago

Thank you for your excellent work. In your paper, you describe how to obtain 256x256 images from the original images for CelebA and Paris StreetView, but how do you obtain the 256x256 images for Places2?

knazeri commented 5 years ago

@ljjcoder If you have already downloaded the high-resolution Places2 dataset, you can set INPUT_SIZE: 256 in your configuration file, then change the following line in the code and pass centerCrop=False to the resize method to prevent center cropping: https://github.com/knazeri/edge-connect/blob/97c28c62ac54a59212cc9db4e78f36c5436c0b72/src/dataset.py#L141

If you don't have the high-resolution dataset, you can download the 256x256 version from the Places2 website under the Data of Places-Extra69 section. You can also find 256x256 versions of the validation and test sets on the same page.
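
A minimal, self-contained sketch of that call-site change (the resize body below is only an approximation of the repository's helper in src/dataset.py, with cv2 standing in for its actual resize backend, so treat the details as illustrative):

```python
import cv2
import numpy as np

def resize(img, height, width, centerCrop=True):
    # approximation of the resize helper in src/dataset.py
    imgh, imgw = img.shape[:2]
    if centerCrop and imgh != imgw:
        # keep only the central square before scaling
        side = min(imgh, imgw)
        j, i = (imgh - side) // 2, (imgw - side) // 2
        img = img[j:j + side, i:i + side]
    return cv2.resize(img, (width, height))

img = np.zeros((384, 512, 3), dtype=np.uint8)   # dummy non-square image
out = resize(img, 256, 256, centerCrop=False)   # whole frame kept, no crop
assert out.shape == (256, 256, 3)
```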

ljjcoder commented 5 years ago

@knazeri Thanks for your reply! Do you mean that it only needs changing mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False)? If only that is done, the mask is resized to 256 directly, but the image still uses the center crop. I guess the definition def resize(self, img, height, width, centerCrop=True): also needs to become def resize(self, img, height, width, centerCrop=False):. Is that right?

ljjcoder commented 5 years ago

@knazeri I also want to ask: what is the difference between Data of Places-Extra69 and Data of Places365-Challenge 2016? Which one did you use, or did you use both?

knazeri commented 5 years ago

@ljjcoder You don't need to change the method definition; only change the method call. Of course, that only applies if you have already downloaded the high-resolution version of the Places2 dataset. We used the 256x256 version of the Places365-Challenge 2016 full dataset for training!
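
In other words, the default centerCrop=True in the signature stays, so any other call sites keep cropping; only the Places2 call opts out. A toy illustration of that default-argument pattern (a hypothetical stand-in function, not the repository's code):

```python
def resize(img, height, width, centerCrop=True):
    # stand-in body: report which path a caller gets
    return "cropped" if centerCrop else "full"

print(resize("img", 256, 256))                    # cropped -- untouched callers keep the default
print(resize("img", 256, 256, centerCrop=False))  # full    -- only this call opts out
```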

ljjcoder commented 5 years ago

@knazeri Yes, I downloaded the high-resolution version of Places2. I am still confused: if I just change mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False), the original image still uses the center crop. Is that the same as your training data?

knazeri commented 5 years ago

@ljjcoder Honestly, it doesn't really make any difference. You can either center-crop an image or resize it to a fixed size. In either scenario, the mask hides some part of the image and your network learns to inpaint the missing part! As I mentioned before, our training dataset was the 256x256 version of the Places2 dataset.
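
Concretely, whichever way the 256x256 input was produced, training proceeds the same way: the mask removes a region and the network reconstructs it. A toy sketch of that masking step (shapes and channel conventions here are assumptions, not the repository's exact pipeline):

```python
import numpy as np

img = np.random.rand(256, 256, 3).astype(np.float32)   # preprocessed image, values in [0, 1]
mask = np.zeros((256, 256, 1), dtype=np.float32)
mask[96:160, 96:160] = 1.0                              # 1 marks the hole to inpaint

masked_img = img * (1.0 - mask)                         # hole contents removed
net_input = np.concatenate([masked_img, mask], axis=2)  # image + mask channels
assert net_input.shape == (256, 256, 4)
```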

anshen666 commented 4 years ago

Hello, I have also been running this code recently. Could I add you to discuss it? My WeChat: loveanshen, my QQ: 519838354, my email: 519838354@qq.com. I look forward to your reply whenever you have time.

napohou commented 2 years ago

The files in the Data of Places-Extra69 section are only 1.4 GB (256x256) and contain just 98,721 training images. Is that enough to train the model?