meowcakes opened this issue 6 years ago
Thanks for the comment.
Can you point out where the paper describes fixing the encoder while updating cLR-GAN? I can't find it, and updating both the encoder and the generator makes more sense to me.
Hi, thanks for your reply. It is in Section 4, under the subheading "Training details":
We only update G for the L1 loss L1latent(G, E) on the latent code (Equation 7), while keeping E fixed. We found optimizing G and E simultaneously for the loss would encourage G and E to hide the information of the latent code without learning meaningful modes.
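To make the quoted detail concrete, here is a minimal PyTorch sketch of one way to keep E fixed for the latent L1 loss while still letting gradients flow through E's computation back to G (the `nn.Linear` modules and tensor shapes are made-up stand-ins for the real networks, not the repo's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal stand-ins for the generator G and encoder E; the real models
# are conv nets, these names and shapes are just for illustration.
G = nn.Linear(8, 8)
E = nn.Linear(8, 8)

# Freeze E *before* the forward pass so its parameters accumulate no
# gradients from this loss term. Gradients still flow through E's ops
# back into G, which is exactly what the paper describes.
for p in E.parameters():
    p.requires_grad_(False)

z = torch.randn(4, 8)              # sampled latent code
z_rec = E(G(z))                    # E(G(z)): reconstructed latent code
loss_latent = F.l1_loss(z_rec, z)  # latent L1 loss (Equation 7)
loss_latent.backward()

# Only G received gradients; E stayed fixed.
assert all(p.grad is not None for p in G.parameters())
assert all(p.grad is None for p in E.parameters())
```

An equivalent alternative is to compute the loss normally but only step G's optimizer; freezing before the forward pass just makes the gradient flow explicit.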
I also just noticed another difference. In the same section they write:
For the encoder, only the predicted mean is used in cLR-GAN.
But in your code it seems you are sampling from the approximate posterior instead.
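For reference, the difference would look roughly like this (a sketch assuming a VAE-style encoder head that predicts a mean and log-variance; the names and shapes here are illustrative, not the repo's):

```python
import torch

# Hypothetical encoder outputs: a VAE-style E predicts a mean and a
# log-variance for the latent posterior (values here are placeholders).
mu = torch.zeros(4, 8)
logvar = torch.zeros(4, 8)

def reparameterize(mu, logvar):
    # Stochastic sample from the approximate posterior N(mu, sigma^2),
    # as used on the cVAE-GAN path.
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

z_vae = reparameterize(mu, logvar)  # cVAE-GAN path: sample
z_clr = mu                          # cLR-GAN path: only the predicted mean
```

Using only the mean makes the cLR-GAN latent reconstruction deterministic, which is what the quoted training detail calls for.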
Thanks
I couldn't find it because I followed the old version of the paper.
I'll test these two changes and update the repo later.
Thanks so much.
Hello, in the original paper the authors state that they keep the encoder fixed when performing the update step for cLR-GAN. However, your code seems to update it as well. Are you aware of this?
Thanks