Puzer / stylegan-encoder

StyleGAN Encoder - converts real images to latent space

why optimize dlatent rather than qlatent? #9

Open jcpeterson opened 5 years ago

jcpeterson commented 5 years ago

The dimensionality of dlatent is 18x larger, sampling isn't as simple, interpolations are worse, and mapping to smile directions, for example, needs more data.

Definiter commented 5 years ago

+1. Tried to interpolate dlatent and the results didn't seem natural at all.
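One well-known wrinkle when interpolating Gaussian latents like qlatent: linear interpolation pulls midpoints toward the origin, off the typical shell where the prior actually puts its mass, which is one reason spherical interpolation (slerp) is often used instead. A minimal numpy sketch (helper names are mine, not from the repo):

```python
import numpy as np

def lerp(a, b, t):
    """Plain linear interpolation."""
    return (1.0 - t) * a + t * b

def slerp(a, b, t):
    """Spherical interpolation: keeps intermediate points near the
    'typical' norm of Gaussian samples instead of cutting through
    the origin."""
    omega = np.arccos(np.clip(
        np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b)), -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * a
            + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=512), rng.normal(size=512)  # two qlatent samples

typical = np.sqrt(512)  # expected norm of a 512-dim standard Gaussian
mid_lerp = np.linalg.norm(lerp(z0, z1, 0.5))    # noticeably smaller
mid_slerp = np.linalg.norm(slerp(z0, z1, 0.5))  # stays near `typical`
print(typical, mid_lerp, mid_slerp)
```

Note this only addresses interpolating in qlatent space; whether dlatent interpolations look natural depends on the mapping network, not on this trick.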

vu0tran2 commented 5 years ago

Linking to reddit comment by author.

So the StyleGAN generator actually contains 2 components:

Generator:

qlatent = normally distributed noise with shape (512,)

dlatent = mapping_network(qlatent), with shape (18, 512)

where mapping_network is a fully connected network that transforms qlatent to dlatent

generator(mapping_network(qlatent)) = image
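The decomposition above can be sketched at the shape level like this (the layer internals below are placeholder stand-ins, not StyleGAN's real architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# One toy "fully connected" layer standing in for the real 8-layer MLP.
W1 = rng.normal(size=(512, 512)) / np.sqrt(512)

def mapping_network(qlatent):
    """qlatent (512,) -> dlatent (18, 512): the same w vector is
    broadcast to all 18 synthesis layers (W space tiled into W+)."""
    w = np.tanh(qlatent @ W1)      # placeholder nonlinearity
    return np.tile(w, (18, 1))     # shape (18, 512)

def generator(dlatent):
    """dlatent (18, 512) -> fake 'image'; a placeholder reduction."""
    return dlatent.mean(axis=0).reshape(16, 32)  # pretend 16x32 image

qlatent = rng.normal(size=512)       # z ~ N(0, I)
dlatent = mapping_network(qlatent)   # shape (18, 512)
image = generator(dlatent)           # generator(mapping_network(qlatent))
```

Note that 18 * 512 = 9216 dlatent entries versus 512 qlatent entries, which is the dimensionality gap the thread is arguing about.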

So during encoding we optimize dlatent instead of qlatent. Optimizing qlatent leads to bad results (I can elaborate on that). dlatent is used for the feature-wise transformation of the generator's convolution layers: https://distill.pub/2018/feature-wise-transformations/
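The encoding being discussed is gradient descent on the latent with the generator frozen. A toy version with a linear stand-in generator (everything here is illustrative; it is not the repo's actual loss, optimizer, or network):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear stand-in for the frozen synthesis network: image = G @ dlatent.
D, P = 18 * 512, 256                      # flattened dlatent dims, "pixels"
G = rng.normal(size=(P, D)) / np.sqrt(D)
target = rng.normal(size=P)               # the image we want to invert

dlatent = np.zeros(D)                     # only the latent is trainable
losses = []
for step in range(200):
    residual = G @ dlatent - target
    losses.append(0.5 * residual @ residual)  # L2 reconstruction loss
    dlatent -= 0.1 * (G.T @ residual)         # gradient step on the latent

print(losses[0], losses[-1])  # the loss drops as dlatent is fitted
```

The real encoder does the same thing with a perceptual (VGG-feature) loss and the StyleGAN synthesis network in place of `G`.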

2) dlatent + multiplier * logreg_coeff; Yes, but I use the raw coefficients from logreg, so it doesn't matter whether they are positive or negative.

3) Yes. It somewhat works and we can generate relatively similar faces, but fewer details are preserved. It's still in progress.

jcpeterson commented 5 years ago

@vu0tran2 Yes, I've seen that, but the "elaboration" was never given. In principle I don't see why it should be worse.

ndahlquist commented 5 years ago

I've done some experiments with optimizing dlatent vs qlatent. I've observed that when optimizing qlatent against a real image (I tried a few images of celebrities), the result does not converge to the desired target image. However, when optimizing qlatent against an image generated by sampling from qlatent space, the reconstruction converges quickly.

My intuition is that the space of qlatent does not represent all human faces. Since qlatent has lower dimensionality than dlatent, it is intuitive to me (pigeonhole principle) that it is capable of representing fewer images.
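That pigeonhole intuition can be checked in a toy linear setting: with the same frozen "generator", fit a target using all 18x512 entries free (W+ style) versus a single 512-dim w tiled across the layers (W style). Everything below is a made-up linear stand-in, not StyleGAN itself:

```python
import numpy as np

rng = np.random.default_rng(2)

P, L, C = 2048, 18, 512                    # "pixels", layers, channels
G = rng.normal(size=(P, L * C)) / np.sqrt(L * C)
target = rng.normal(size=P)

# W+ space: all 18x512 entries free -> 9216 parameters.
d_plus, *_ = np.linalg.lstsq(G, target, rcond=None)

# W space: one 512-dim w tiled across layers. The generator then sees
# G @ tile(w), i.e. an effective (P, 512) matrix (sum of layer blocks).
G_w = G.reshape(P, L, C).sum(axis=1)
w, *_ = np.linalg.lstsq(G_w, target, rcond=None)

err_plus = np.linalg.norm(G @ d_plus - target)
err_w = np.linalg.norm(G_w @ w - target)
print(err_plus, err_w)  # the lower-dimensional space fits worse
```

This only shows a capacity gap, not that qlatent optimization must fail; in the nonlinear case optimization difficulty matters too.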

danielkaifeng commented 4 years ago

> I've done some experiments with optimizing dlatent vs qlatent. I've observed that when optimizing qlatent against a real image (I tried a few images of celebrities), the result does not converge to the desired target image. However, when optimizing qlatent against an image generated by sampling from qlatent space, the reconstruction converges quickly.
>
> My intuition is that the space of qlatent does not represent all human faces. Since qlatent has lower dimensionality than dlatent, it is intuitive to me (pigeonhole principle) that it is capable of representing fewer images.

I tried the same encoding process and ran into the same problem. Did you align the celebrity images? The generated images have standardized face landmarks, meaning the eyes and mouths of all faces are in exactly the same place across all pictures.

I applied lots of augmentation to the generated images; the encoded results for real images got better, but are still far from the same face.
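For reference, the alignment being asked about is the FFHQ-style preprocessing: detect facial landmarks (dlib in the FFHQ reference code) and warp each photo so the landmarks land at fixed template positions. A dependency-free sketch of the core idea, a similarity transform pinned to the two eye centers, with template coordinates that are made up for illustration:

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye,
                            target_left=(192.0, 256.0),
                            target_right=(320.0, 256.0)):
    """Similarity transform (rotation + uniform scale + translation)
    mapping detected eye centers onto fixed template positions.
    The template coordinates are placeholders; real pipelines use
    many landmarks, not just the eyes."""
    src = np.asarray([left_eye, right_eye], dtype=float)
    dst = np.asarray([target_left, target_right], dtype=float)

    # As complex numbers, a 2-D similarity transform is z -> a*z + b,
    # with complex a (scale + rotation) and b (translation).
    zs = src[:, 0] + 1j * src[:, 1]
    zd = dst[:, 0] + 1j * dst[:, 1]
    a = (zd[1] - zd[0]) / (zs[1] - zs[0])
    b = zd[0] - a * zs[0]

    def apply(points):
        p = np.asarray(points, dtype=float)
        z = p[:, 0] + 1j * p[:, 1]
        out = a * z + b
        return np.stack([out.real, out.imag], axis=1)

    return apply

align = eye_alignment_transform(left_eye=(100, 120), right_eye=(180, 110))
corners = align([(100, 120), (180, 110), (0, 0)])
```

Without this step, a real photo sits off the distribution the generator was trained on, so poor reconstructions are expected regardless of which latent space is optimized.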