Closed engharat closed 6 years ago
@engharat Thanks for your interest. That does look weird; I think your modification is right. Let me describe how I did it in my Torch code.
In the `feature_wct` function (not `feature_swap_whiten`) of the `test_wct.lua` file, just return `whiten_contentFeature` (as you did):

```lua
local tFeature = whiten_contentFeature:resize(sg[1], sg[2], sg[3])
return tFeature
```
In the `styleTransfer` function of the `test_wct.lua` file (the whitened results in the paper are obtained by using encoder4-decoder4 only):

```lua
local cF4 = vgg4:forward(content):clone()
local sF4 = vgg4:forward(style):clone()  -- you can remove the style feature input accordingly
local csF4 = feature_wct(cF4, sF4)
-- Originally we have an "alpha" here to blend csF4 with the content feature cF4;
-- you can remove this line or set alpha = 1:
-- csF4 = opt.alpha * csF4 + (1.0 - opt.alpha) * cF4
local Im4 = decoder4:forward(csF4)
return Im4
```
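As a side note for the PyTorch port being discussed: stripped of the coloring step, the whitening that `feature_wct` performs reduces to centering the content feature and rescaling it along the eigenvectors of its channel covariance. Here is a minimal sketch; the function name, the `eps` cutoff, and the use of `torch.linalg.eigh` are my own choices for illustration, not the repo's actual code:

```python
import torch

def whiten(cF, eps=1e-5):
    """Whiten a content feature map cF of shape (C, H*W):
    subtract the channel mean, then rescale along the eigenvectors
    of the channel covariance so the result has (approximately)
    identity covariance."""
    C, HW = cF.shape
    f = cF - cF.mean(dim=1, keepdim=True)      # center each channel
    cov = f @ f.t() / (HW - 1)                 # channel covariance, C x C
    e, v = torch.linalg.eigh(cov)              # eigenvalues in ascending order
    k = int((e > eps).sum())                   # drop near-zero eigenvalues
    d = e[-k:].pow(-0.5)                       # D^{-1/2}
    E = v[:, -k:]
    return E @ torch.diag(d) @ E.t() @ f       # whitened feature, C x (H*W)
```

Feeding this result (reshaped back to C x H x W) straight into the decoder should reproduce the whitened-image effect from the paper.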
I did a quick experiment and here is my test result:
Input: [image]
Output: [image]
Hope you can fix it :)
Thanks for the very quick reply! Going through the code another time, I caught a stupid mistake I made. Using all encoder-decoder pairs from 5 to 1, I'm getting: [image] which I suppose is the correct result. I'll try using only encoder/decoder 4 to see if the whitened image keeps more information about the original image; using the encoder-decoder pairs from 5 to 1 seems to keep very few features of the original image.
@engharat Great! I use encoder/decoder 4 only just to illustrate the whitening idea. Multi-level is for matching style feature statistics at different VGG layers :)
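The multi-level idea mentioned above can be sketched roughly as follows. This is a hedged pseudocode-style sketch: `encoders`, `decoders`, and `wct` are placeholders for the per-level VGG encoders, the trained decoders, and the whitening-and-coloring transform, not the repo's actual API:

```python
# Hypothetical multi-level pipeline: each level re-encodes the
# previous level's output and applies WCT at that depth.
def multi_level_transfer(content, style, encoders, decoders, wct, alpha=1.0):
    img = content
    for enc, dec in zip(encoders, decoders):   # e.g. levels 5 -> 1
        cF = enc(img)                          # encode current image
        sF = enc(style)                        # encode style at same depth
        csF = wct(cF, sF)                      # match style statistics
        csF = alpha * csF + (1 - alpha) * cF   # blend with content feature
        img = dec(csF)                         # decode back to image space
    return img
```

Each level matches the style statistics at a different depth of the VGG feature hierarchy, which is why the 5-to-1 cascade transfers much more style than encoder/decoder 4 alone.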
Hi, I'm trying to get the whitened content images without applying any target style, as you have shown as an example in the paper. I'm working on the PyTorch version, and to keep things simple I have simply removed the code portion that I thought applies the style transfer, returning `whiten_cF` instead of `targetFeature`:

```python
......
whiten_cF = torch.mm(step2, cF)
```
That should correspond, in your original Torch code, to removing these lines:

```lua
local swap_latent = swap:forward(whiten_contentFeature:resize(sg[1], sg[2], sg[3])):clone()
local swap_latent1 = swap_latent:view(sg[1], sg[2]*sg[3])
targetFeature = (s_v[{{},{1,k_s}}]:cuda()) * (torch.diag(s_d1:cuda())) * (s_v[{{},{1,k_s}}]:t():cuda()) * swap_latent1
```
and leaving something like this as the last lines of the function:

```lua
whiten_contentFeature = (c_v[{{},{1,k_c}}]:cuda()) * torch.diag(c_d:cuda()) * (c_v[{{},{1,k_c}}]:t():cuda()) * contentFeature1
local whiten_contentFeature = whiten_contentFeature:resize(sg[1], sg[2], sg[3])
```

I tried this modification in the PyTorch version, but the output image is not whitened; it comes out the same as the content image. Could you suggest how to fix it?
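One way to narrow this down is to check, before decoding, whether the feature being returned is actually whitened: its channel covariance should be close to the identity. If it still matches the content feature's covariance, the whitening branch was never reached. This is a hypothetical sanity check of my own, not part of either codebase:

```python
import torch

def is_whitened(feat2d, tol=1e-2):
    """Return True if feat2d (shape (C, H*W)) has per-channel
    covariance close to the identity after centering."""
    C, HW = feat2d.shape
    f = feat2d - feat2d.mean(dim=1, keepdim=True)
    cov = f @ f.t() / (HW - 1)
    return (cov - torch.eye(C)).abs().max().item() < tol
```

Calling this on the tensor you return from the modified function (flattened to two dimensions) tells you immediately whether the problem is in the whitening math or elsewhere in the pipeline.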