daa233 / generative-inpainting-pytorch

A PyTorch reimplementation of the paper Generative Image Inpainting with Contextual Attention (https://arxiv.org/abs/1801.07892)
MIT License
472 stars 97 forks

question about test #35

Open zhengbowei opened 4 years ago

zhengbowei commented 4 years ago

Dear author: Thanks for your re-implementation, it's helpful! I have a small question. In the training phase, the training image is scaled to 256×256; the code in dataset.py is:

```python
if self.random_crop:
    imgw, imgh = img.size
    if imgh < self.image_shape[0] or imgw < self.image_shape[1]:
        img = transforms.Resize(min(self.image_shape))(img)
    img = transforms.RandomCrop(self.image_shape)(img)
else:
    img = transforms.Resize(self.image_shape)(img)
    img = transforms.RandomCrop(self.image_shape)(img)
```

In the testing phase, the testing image is scaled to 256×256; the code in test_single.py is:

```python
x = transforms.Resize(config['image_shape'][:-1])(x)
x = transforms.CenterCrop(config['image_shape'][:-1])(x)
mask = transforms.Resize(config['image_shape'][:-1])(mask)
mask = transforms.CenterCrop(config['image_shape'][:-1])(mask)
```

Are the scaling standards the same between them? Thank you for your answer.

daa233 commented 4 years ago

You could try using the same scaling standard.

It should support arbitrary input sizes during the testing phase, but there may be some bugs at the moment.