Closed FantasyJXF closed 5 years ago
I used STYLE_WEIGHTS to set weights for loss from different layers. You can incorporate the size of the feature maps into those layers to achieve the same effect.
In this way, the layer weights become part of hyper-parameters and can be picked empirically.
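To make the idea concrete, here is a minimal numpy sketch of folding the feature-map size into per-layer style weights. The layer names and weight values are illustrative assumptions, not taken from the repo:

```python
import numpy as np

# Hypothetical per-layer style weights (names and values are illustrative).
STYLE_WEIGHTS = {"conv1_1": 1.0, "conv2_1": 0.8, "conv3_1": 0.5}

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) feature map, flattened to (C, H*W)."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T

def style_loss(feats, target_feats, weights):
    """Weighted sum of per-layer Gram-matrix squared errors.

    Dividing each layer's term by (C*H*W)**2 folds the feature-map
    size into the effective weight, so the hand-picked weights stay
    comparable across layers of different resolution.
    """
    loss = 0.0
    for name, w in weights.items():
        f, t = feats[name], target_feats[name]
        c, h, wd = f.shape
        g_diff = gram_matrix(f) - gram_matrix(t)
        loss += w * np.sum(g_diff ** 2) / (c * h * wd) ** 2
    return loss
```

With this normalization, tuning `STYLE_WEIGHTS` empirically has the same effect as baking the map sizes into the loss directly.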
That's smart: using different STYLE_WEIGHTS for the style layers can achieve a similar result to some degree.
You also added a TV loss, though not in the standard form.
One more thing: the transferred style image in your notebook doesn't look great. Do you have a pretrained model that produces nicer results?
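For reference, the standard (anisotropic, squared-difference) form of total-variation loss that the comment alludes to can be sketched like this; the `weight` value is an illustrative assumption:

```python
import numpy as np

def tv_loss(img, weight=1e-4):
    """Standard total-variation loss for a (C, H, W) image:
    weighted sum of squared differences between neighboring pixels,
    along the height and width axes."""
    dh = img[:, 1:, :] - img[:, :-1, :]   # vertical neighbor differences
    dw = img[:, :, 1:] - img[:, :, :-1]   # horizontal neighbor differences
    return weight * (np.sum(dh ** 2) + np.sum(dw ** 2))
```

A constant image gives zero TV loss; high-frequency noise is penalized, which is why TV loss is commonly added as a smoothness regularizer in style transfer.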
I don't think I still have any pretrained models in my computer. There are already a lot of newer and better style transfer algorithms around. Here's one that I've re-implemented in PyTorch: PyTorch Implementation of Style Transfer as Optimal Transport.
That's very nice!
Does the Wasserstein distance model work like the original style transfer, i.e., as an iterative update procedure that takes a lot of time?
It's just another loss function. See this notebook for explanation.
I used a Tesla P40 (24 GB of graphics memory) to run inference on the style image, and surprisingly the model still threw an out-of-memory error.
That didn't happen in my case. Maybe the content image you used was too big?
In the original paper and the most-starred TF implementation, the content loss and style loss are divided by the feature-map size (C, H, W).
Why don't you do this?
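The normalization the question describes can be sketched as follows; this is a minimal numpy illustration of dividing the content loss by the feature-map size, not code from either implementation:

```python
import numpy as np

def content_loss(feat, target):
    """Content loss between two (C, H, W) feature maps, normalized
    by the feature-map size C*H*W so the magnitude is independent
    of the layer's resolution and channel count."""
    c, h, w = feat.shape
    return np.sum((feat - target) ** 2) / (c * h * w)
```

Without this division, deeper layers with more channels (or shallower layers with larger spatial maps) dominate the total loss, which changes how the content/style trade-off behaves as you move the content layer around.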