titu1994 / Neural-Style-Transfer

Keras Implementation of Neural Style Transfer from the paper "A Neural Algorithm of Artistic Style" (http://arxiv.org/abs/1508.06576) in Keras 2.0+
Apache License 2.0

aspect ratio #22

Closed. ink1 closed this issue 7 years ago.

ink1 commented 7 years ago

Hi, nice code and thank you for sharing it! Why do you say (and require) that the Gram matrix is square? In fact, your code seems to work even when this requirement is dropped (thanks for keeping widths and heights separate).

titu1994 commented 7 years ago

The requirement is not dropped. The Gram matrix is always a square matrix.

Therefore, to compute the Gram matrix, I resize the image to a square shape. I then compute the VGG content loss, the style loss, and so on, and optimize with L-BFGS. The final output is a square image of the same size as the resized input.

I then resize this output to preserve the aspect ratio of the original image. If you wish to see a square image, set --maintain_aspect_ratio="False"
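Roughly, the optimization loop looks like this (a simplified sketch in the style of the standard Keras neural-style example, not the exact code in this repo; `eval_loss_and_grads` is a placeholder for the function that evaluates the combined VGG content/style loss and its gradients):

```python
# Simplified sketch of the L-BFGS loop (placeholder names, not the repo's exact code).
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

img_size = 400  # square working resolution

class Evaluator:
    """Caches loss and gradients so L-BFGS can request them in separate calls."""
    def __init__(self):
        self.loss_value = None
        self.grad_values = None

    def loss(self, x):
        # eval_loss_and_grads(x) is assumed to run the VGG content/style losses
        # on the flattened image x and return (loss, gradients).
        loss_value, grad_values = eval_loss_and_grads(x)
        self.loss_value = loss_value
        self.grad_values = grad_values
        return self.loss_value

    def grads(self, x):
        grad_values = np.copy(self.grad_values)
        self.loss_value = None
        self.grad_values = None
        return grad_values

evaluator = Evaluator()
x = np.random.uniform(0, 255, (1, img_size, img_size, 3)) - 128.0  # random square start

for i in range(10):
    x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
                                     fprime=evaluator.grads, maxfun=20)
    print('Iteration %d, loss: %f' % (i + 1, min_val))
```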

ink1 commented 7 years ago

Sorry, that was not quite what I wanted to ask. I'm asking about the image aspect ratio; I think it can be arbitrary throughout all the processing steps.

Of course the Gram matrix is square, because it is a cross-correlation matrix. What I don't understand is why you require the width and height of a processed image to be equal. Line 130: `assert img_height == img_width, 'Due to the use of the Gram matrix, width and height must match.'`

When I remove this requirement, the code still works (as it should!).
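To illustrate the point: the Gram matrix of a feature map has shape channels x channels, so it is square for any image width and height. A minimal sketch (illustrative, using the Keras backend; not the exact code from Network.py):

```python
# Illustrative only (not the exact code from Network.py): the Gram matrix
# is channels x channels regardless of the spatial dimensions.
from keras import backend as K

def gram_matrix(x):
    # x: feature map of shape (height, width, channels), channels_last ordering
    features = K.batch_flatten(K.permute_dimensions(x, (2, 0, 1)))  # (channels, height*width)
    return K.dot(features, K.transpose(features))                   # (channels, channels)
```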

titu1994 commented 7 years ago

That's a redundant check. I resize the image to the Gram matrix size (400x400 by default) and then perform this check. The check can safely be removed, since the image is rescaled to that size just above it.

ink1 commented 7 years ago

Yes, I can see that. What I'm asking is why you are doing `img_width = img_height = args.img_size` instead of, for example, something like `img_width = args.img_width` and `img_height = args.img_height` (I also realise that the script does not currently expose those two options).

What does the Gram matrix have to do with the aspect ratio of your image?
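Something like this is what I have in mind (a rough sketch; the --img_width / --img_height options are hypothetical and do not exist in the current script):

```python
# Hypothetical sketch: separate width/height arguments instead of a single --img_size.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--img_width', type=int, default=400,
                    help='Width the content image is resized to (hypothetical option)')
parser.add_argument('--img_height', type=int, default=400,
                    help='Height the content image is resized to (hypothetical option)')
args = parser.parse_args()

# Instead of: img_width = img_height = args.img_size
img_width = args.img_width
img_height = args.img_height
```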

titu1994 commented 7 years ago

You are correct in the fact that the Gram matrix has nothing to do with the aspect ratio of the image.

In the original Keras example script, which this script is based on, the author asserted that the width and height of the imported content and style images were exactly the same (the comment about the image needing to match the Gram matrix size has since been removed there, so I will do the same).

Therefore I removed that check and performed style transfer with a 400x640 image as both content and style. The result was worse, even after several hundred iterations.

For comparison, the first image uses a 400x400 content and style image:

moon lake - 400x400

whereas the second image uses a 400x640 input:

moon lake - 400x640

Compare the upper image, which has sharp features similar to the turbulence pattern from The Starry Night, with the bottom image, which shows less distinct patterns and patches of poor style transfer, especially in the lower-left portion of the image.

All things considered, I don't think I will preserve the aspect ratio of the loaded content and style images.

ink1 commented 7 years ago

I don't know how you are getting these results. I observe a lot less colour difference between these two resolutions using the default settings over 10 iterations (VGG16). I manually rescaled both the input and style images to 640x400 and to 400x400 for the two tests, in order to avoid any rescaling inside the code. Can you try the same?

640x400: out night_at_iteration_10

400x400 upscaled to 640x400: out 400_at_iteration_10 640

titu1994 commented 7 years ago

The results seem close now. I'm travelling for a few days and won't have access to my laptop.

The 640x400 result seems to preserve the text at the top right far better, along with similar style transfer in other regions. My earlier results must have been due to some other error.

Feel free to add a PR

titu1994 commented 7 years ago

@ink1, I found the mistake. I used the older Network.py to test my image (since it is faster), but it sacrifices quality for speed. When I switched to INetwork.py I was able to replicate your results.

As of commit 6a08eaa51d0f169607c2b5e21d40b25f92078576, the content and style images are scaled to the content image's aspect ratio before being passed to the VGG network. This drastically increases execution time (INetwork used to take 14 seconds per epoch; it now takes 23), but delivers more precise results.
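Conceptually, the preprocessing now does something like the following (a simplified sketch, not the exact code from that commit; file names are placeholders):

```python
# Simplified sketch of the idea: both images are resized to the content image's
# dimensions, so the content aspect ratio is preserved through the VGG network.
import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import preprocess_input

def preprocess_image(path, target_height, target_width):
    img = load_img(path, target_size=(target_height, target_width))
    img = np.expand_dims(img_to_array(img), axis=0)  # add batch dimension
    return preprocess_input(img)                     # RGB->BGR and mean subtraction for VGG

content_width, content_height = load_img('content.jpg').size  # PIL reports (width, height)
content = preprocess_image('content.jpg', content_height, content_width)
style = preprocess_image('style.jpg', content_height, content_width)  # style follows content's aspect ratio
```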

Thanks for raising the issue. The results now seem to be closer to the DeepArt.io results.