Dear author:
Thanks for your re-implementation, it's helpful! I have a small question for you:
In the training phase, the training image is scaled to 256×256. The code in dataset.py is:
if self.random_crop:
    imgw, imgh = img.size
    if imgh < self.image_shape[0] or imgw < self.image_shape[1]:
        img = transforms.Resize(min(self.image_shape))(img)
    img = transforms.RandomCrop(self.image_shape)(img)
else:
    img = transforms.Resize(self.image_shape)(img)
    img = transforms.RandomCrop(self.image_shape)(img)
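To make sure I understand that training path, here is a small standalone sketch of what I believe those transforms do; the 500×300 input and the (256, 256) shape are just example values I picked, not taken from the repo:

from PIL import Image
from torchvision import transforms

image_shape = (256, 256)             # my stand-in for self.image_shape
img = Image.new('RGB', (500, 300))   # example image: width=500, height=300

imgw, imgh = img.size
if imgh < image_shape[0] or imgw < image_shape[1]:
    # Resize with a single int scales the shorter side to 256 and keeps the aspect ratio
    img = transforms.Resize(min(image_shape))(img)
img = transforms.RandomCrop(image_shape)(img)

print(img.size)  # (256, 256): a random 256x256 patch; this example image is never rescaled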
In the testing phase, the test image is scaled to 256×256. The code in test_single.py is:
x = transforms.Resize(config['image_shape'][:-1])(x)
x = transforms.CenterCrop(config['image_shape'][:-1])(x)
mask = transforms.Resize(config['image_shape'][:-1])(mask)
mask = transforms.CenterCrop(config['image_shape'][:-1])(mask)
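For comparison, my understanding of the test path on the same example image (again, the 500×300 size and the shape tuple are only illustrative values of mine):

from PIL import Image
from torchvision import transforms

image_shape = (256, 256)             # my stand-in for config['image_shape'][:-1]
x = Image.new('RGB', (500, 300))     # same example image: width=500, height=300

# Resize with a (height, width) pair rescales both sides to exactly 256x256,
# so the aspect ratio is not preserved here
x = transforms.Resize(image_shape)(x)
# CenterCrop of an image that is already 256x256 leaves it unchanged
x = transforms.CenterCrop(image_shape)(x)

print(x.size)  # (256, 256): the whole image squeezed into 256x256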
Are the scaling standards the same between the two phases?
Thank you for your answer.