pacifinapacific / StyleGAN_LatentEditor

240 stars 52 forks

About noise optimize in image2styleGAN++ #1

Open rainsoulsrx opened 4 years ago

rainsoulsrx commented 4 years ago

Hi, thank you for your good work. I have a question: in the Image2StyleGAN++ paper, the authors mention that they optimize both w and n (noise), but in your code I only find w, and nothing about the noise optimization process.

pacifinapacific commented 4 years ago

Thanks for your question. As you say, optimizing n should produce better images. However, I was satisfied with the image quality from optimizing w alone. Also, passing n to the optimizer required slightly modifying the StyleGAN implementation, so I skipped that step. Sorry.

yosefyehoshua commented 4 years ago

Hi, you said that to optimize n (noise) you need to slightly modify the StyleGAN implementation. I'm trying to add this optimization but can't see why and where I need to modify the StyleGAN code. I would be happy for some advice :)

pacifinapacific commented 4 years ago

Noise is generated dynamically inside the StyleGAN layers. To pass it to the optimizer, you need to keep it as a parameter of the class, but I don't have a good idea for doing that. https://github.com/pacifinapacific/StyleGAN_LatentEditor/blob/b4eb124f9abb7478120078e0f9e5888db65d34fe/stylegan_layers.py#L118
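For reference, the modification described above could look roughly like this. This is a sketch, not the repo's actual code: a noise layer that stores its noise map as an `nn.Parameter` instead of sampling it on every forward pass, so an external optimizer can update it. All names here (`NoiseLayer`, `channels`, `resolution`) are illustrative.

```python
import torch
import torch.nn as nn

class NoiseLayer(nn.Module):
    """Per-pixel noise scaled by a learned per-channel weight.

    Unlike the stock StyleGAN layer, the noise map is stored as an
    nn.Parameter rather than sampled each forward pass, so an external
    optimizer can update it (the n optimization in Image2StyleGAN++).
    """
    def __init__(self, channels, resolution):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(channels))
        # Fixed-but-trainable noise map, one per layer resolution.
        self.noise = nn.Parameter(torch.randn(1, 1, resolution, resolution))

    def forward(self, x):
        return x + self.weight.view(1, -1, 1, 1) * self.noise
```

Because `self.noise` is a parameter, it shows up in `module.parameters()` and can be handed to an optimizer like any other tensor.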

yosefyehoshua commented 4 years ago

Thanks for the answer!

Maybe I got this wrong, but in StyleGAN the generated noise is constant, so passing it to the optimizer as in: https://github.com/pacifinapacific/StyleGAN_LatentEditor/blob/b4eb124f9abb7478120078e0f9e5888db65d34fe/image_crossover.py#L69-L70

so that `optimizer = optim.Adam({self.noise}, lr=0.01, betas=(0.9, 0.999), eps=1e-8)` feels weird.

I would be happy if you could shed some light :)
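As a side note on the optimizer call quoted above: `optim.Adam` accepts any iterable of tensors (or parameter-group dicts) whose elements have `requires_grad=True`, so a set literal like `{self.noise}` does run, but a plain list is the idiomatic form. A minimal, self-contained sketch with stand-in noise tensors (no StyleGAN involved) showing that Adam does update such tensors:

```python
import torch
from torch import optim

torch.manual_seed(0)

# Stand-in noise maps; Adam takes an iterable of trainable tensors.
noise = [torch.randn(1, 1, 4, 4, requires_grad=True) for _ in range(3)]
opt = optim.Adam(noise, lr=0.01, betas=(0.9, 0.999), eps=1e-8)

# Drive the noise toward zero just to show the tensors get updated.
losses = []
for _ in range(5):
    opt.zero_grad()
    loss = sum((n ** 2).mean() for n in noise)
    losses.append(loss.item())
    loss.backward()
    opt.step()
```

The loss drops over the five steps, confirming the "constant" tensors are trainable once they carry `requires_grad=True` and are handed to the optimizer.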

GreenLimeSia commented 4 years ago

@yosefyehoshua maybe you should do this:

```python
import torch
from torch import optim

# G is the StyleGAN generator; static_noise(trainable=True) is assumed
# to return the per-layer noise tensors as a list with requires_grad set.
noise_params = G.static_noise(trainable=True)
dlatent = torch.zeros((1, 18, 512), requires_grad=True, device=device)
optimizer_dlatent = optim.Adam([dlatent], lr=0.01, betas=(0.9, 0.999), eps=1e-8)
optimizer_noise = optim.Adam(noise_params, lr=0.01, betas=(0.9, 0.999), eps=1e-8)
```

The Image2StyleGAN++ paper recommends alternating optimization, but each set of variables is only optimized once: first optimize w, then n. We should adopt this recommendation. The point is that we can optimize the latent while keeping the noise trainable. noise_params is a list of noise tensors.
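A minimal sketch of that two-phase schedule. The real generator and perceptual loss are replaced by a toy differentiable stand-in (`synth`), since `static_noise` is specific to the suggested modification above; only the structure (optimize w first, then n, each once) is the point.

```python
import torch
from torch import optim

torch.manual_seed(0)

# Toy stand-in for the generator: any differentiable function of both
# variable sets is enough to demonstrate the two-phase schedule.
def synth(w, noise):
    return w.mean() + sum(n.mean() for n in noise)

target = torch.tensor(0.0)
dlatent = torch.zeros((1, 18, 512), requires_grad=True)
noise_params = [torch.randn(1, 1, 4, 4, requires_grad=True) for _ in range(2)]

optimizer_dlatent = optim.Adam([dlatent], lr=0.01, betas=(0.9, 0.999), eps=1e-8)
optimizer_noise = optim.Adam(noise_params, lr=0.01, betas=(0.9, 0.999), eps=1e-8)

loss_before = (synth(dlatent, noise_params) - target).pow(2).item()

# Phase 1: optimize w only (noise frozen, since its optimizer never steps).
for _ in range(10):
    optimizer_dlatent.zero_grad()
    loss = (synth(dlatent, noise_params) - target) ** 2
    loss.backward()
    optimizer_dlatent.step()

# Phase 2: optimize n only (w frozen).
for _ in range(10):
    optimizer_noise.zero_grad()
    loss = (synth(dlatent, noise_params) - target) ** 2
    loss.backward()
    optimizer_noise.step()

loss_after = (synth(dlatent, noise_params) - target).pow(2).item()
```

In the real setting the two loops would run for many more iterations with the paper's perceptual and pixel losses, but the separation of the two optimizers is the same.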

wold21 commented 4 years ago


Can this be applied directly to Image2StyleGAN too? I don't know how to add it.