ljjcoder opened this issue 5 years ago
@ljjcoder If you have already downloaded the high-resolution Places2 dataset, you can set INPUT_SIZE: 256 in your configuration file, change the following line in the code, and pass the centerCrop=False argument to the resize method to prevent center cropping:
https://github.com/knazeri/edge-connect/blob/97c28c62ac54a59212cc9db4e78f36c5436c0b72/src/dataset.py#L141
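To make the suggested edit concrete, here is a minimal sketch of the one-line change (it uses the call quoted later in this thread as an example; whichever resize call line 141 points to in your checkout, the edit is the same):

```python
# before: resize() is called with its default centerCrop=True
mask = self.resize(mask, imgh, imgw)

# after: pass centerCrop=False so the input is resized directly
# to the target size without being center-cropped first
mask = self.resize(mask, imgh, imgw, centerCrop=False)
```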
If you don't have the high-resolution dataset, you can download the 256x256 version from the Places2 website under the Data of Places-Extra69 section. You can also find 256x256 versions of the validation and test sets on the same page.
@knazeri Thanks for your reply! Do you mean that I only need to change mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False)? If I only do this, the mask is resized to 256 directly, but the image still goes through the center crop. I guess I also need to change def resize(self, img, height, width, centerCrop=True): to def resize(self, img, height, width, centerCrop=False):. Is that right?
@knazeri I would also like to ask: what is the difference between Data of Places-Extra69 and Data of Places365-Challenge 2016? Which one did you use, or are both of them used?
@ljjcoder You don't need to change the method definition; only change the method call. Of course, that only applies if you have already downloaded the high-resolution version of the Places2 dataset. We used the 256x256 version of the full Places2-Challenge 2016 dataset for training!
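To illustrate why changing only the call is enough, here is a standalone toy example (not the repo's actual code): a keyword argument supplied at the call site overrides the default declared in the definition.

```python
# Toy illustration of default-argument semantics; the names mirror the
# ones quoted above, but this is not the code from dataset.py.
def resize(img, height, width, centerCrop=True):
    if centerCrop:
        return f"{img}: center-cropped, then resized to {height}x{width}"
    return f"{img}: resized directly to {height}x{width}"

print(resize("sample", 256, 256))                    # default -> center crop
print(resize("sample", 256, 256, centerCrop=False))  # override -> no crop
```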
@knazeri Yes, I downloaded the high-resolution version of Places2. I'm still confused: if I only change mask = self.resize(mask, imgh, imgw) to mask = self.resize(mask, imgh, imgw, centerCrop=False), the original image still uses the center crop. Is that the same as your training data?
@ljjcoder Honestly it doesn't really make any difference. You can either center crop an image or resize it to a fixed size. In either of these scenarios, the mask hides some part of the image and your network learns to inpaint the missing part! Like I mentioned before, our training dataset was the 256x256 version of the Places2 dataset.
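For anyone unsure what the two scenarios look like in practice, here is a rough standalone sketch using PIL (an assumption for illustration only, not the loader in dataset.py): either crop the largest centered square and then resize it, or resize the whole image to 256x256 directly.

```python
import numpy as np
from PIL import Image

def preprocess(path, size=256, center_crop=True):
    # 'path', 'size' and 'center_crop' are illustrative names, not the repo's API.
    img = Image.open(path).convert('RGB')
    if center_crop:
        # crop the largest centered square so the aspect ratio is preserved
        w, h = img.size
        side = min(w, h)
        left, top = (w - side) // 2, (h - side) // 2
        img = img.crop((left, top, left + side, top + side))
    # resize to the target resolution; without the crop this may stretch the
    # image, which the reply above says makes little difference for training
    img = img.resize((size, size), Image.BILINEAR)
    return np.array(img)
```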
Hello, I have also been running this code recently. Could I add you so we can discuss it? My WeChat: loveanshen, my QQ: 519838354, my email: 519838354@qq.com. I look forward to your reply whenever you have time.
The files in the Data of Places-Extra69 section are only 1.4 GB (256x256) and contain just 98,721 training images. Is that enough to train the model?
Thank you for your excellent work. In your paper, you describe how to obtain 256x256 images from the original images for CelebA and Paris StreetView, but for Places2, how do you get the 256x256 images?