Closed spot92 closed 4 years ago
The content image and the output image are the same size during the stylization process. The style image can be a different size, and you can increase or decrease its size with the -style_scale parameter.
I think you should be able to use the content image's original size for the output image by changing these lines.
Add a True argument on this line:
content_image = preprocess(params.content_image, params.image_size, True).type(dtype)
https://github.com/ProGamerGov/neural-style-pt/blob/master/neural_style.py#L62
And add a short if statement to the preprocess function here:
def preprocess(image_name, image_size, is_content_image=False):
    image = Image.open(image_name).convert('RGB')
    if is_content_image:
        image_size = (image.height, image.width)
    if type(image_size) is not tuple:
        image_size = tuple([int((float(image_size) / max(image.size))*x) for x in (image.height, image.width)])
    Loader = transforms.Compose([transforms.Resize(image_size), transforms.ToTensor()])
https://github.com/ProGamerGov/neural-style-pt/blob/master/neural_style.py#L332-L343
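To see what the added branch does in isolation, here is a minimal sketch of just the size-selection logic (the function name is illustrative, not part of neural-style-pt; the real preprocess also builds the torchvision Resize/ToTensor loader):

```python
def resolve_target_size(width, height, image_size, is_content_image=False):
    # With the patch, a content image keeps its original (height, width).
    if is_content_image:
        return (height, width)
    # Otherwise a scalar image_size is rescaled to preserve aspect ratio,
    # mirroring the list comprehension in preprocess().
    if type(image_size) is not tuple:
        return tuple(int((float(image_size) / max(width, height)) * x)
                     for x in (height, width))
    return image_size

# A 1024x768 content image is left at its original size:
print(resolve_target_size(1024, 768, 512, is_content_image=True))  # (768, 1024)
# A style image is rescaled so its longest side becomes 512:
print(resolve_target_size(1024, 768, 512))  # (384, 512)
```

Because the is_content_image branch returns before the scalar check, the -image_size value is simply ignored for the content image, which is what makes the output match the content image's dimensions.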
Awesome, thanks
Would it be possible to have the default output image size be the same size as the content image?