sarmientoj24 opened this issue 3 years ago
Hi @sarmientoj24 ,
Which StyleCLIP method are you trying to use?
Hi @orpatashnik I am using all three methods. It seems like I need something similar to ArcFace for non-faces in order to get IdentityLoss.
Hi @sarmientoj24 ,
Regarding the ID loss: I think that using only the L2 should be enough for both the optimization and the mapper. Humans are usually most sensitive to human faces, so in that domain we take extra effort to preserve identity.
Regarding no observed changes with the optimization: did you try playing with the hyperparameters? This method is sometimes sensitive to them.
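The suggestion above can be sketched as a loss that keeps only the CLIP term and an L2 penalty on the latent offset, with no ArcFace identity term. This is a minimal illustrative sketch, not StyleCLIP's actual code; the function names and the `lambda_l2` value are assumptions.

```python
def l2_latent(w_edit, w_orig):
    """Squared L2 distance between the edited and original latent vectors."""
    return sum((a - b) ** 2 for a, b in zip(w_edit, w_orig))

def total_loss(clip_loss, w_edit, w_orig, lambda_l2=0.008):
    # For non-face domains: no identity term. The CLIP loss drives the edit
    # toward the text prompt, while the L2 term keeps the edited latent
    # close to the original so the image does not drift.
    return clip_loss + lambda_l2 * l2_latent(w_edit, w_orig)
```

The `lambda_l2` weight is exactly the kind of hyperparameter worth sweeping when the optimization produces no visible change.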
Actually, I didn't try the mapper method on other domains, but I don't think it should be hard to adapt the code. You need to change the StyleGAN resolution and cancel the identity loss. Did any further problems occur?
If the image was generated by StyleGAN you can save the latent code and give it as the "--latent_path" argument.
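A minimal sketch of the save/load round trip for a latent code, assuming PyTorch serialization. The shape `(1, 18, 512)` corresponds to StyleGAN2's W+ space at 1024x1024 and may differ for other resolutions; the generator forward pass that produces the latent is omitted here, so the random tensor below is only a stand-in.

```python
import torch

# Stand-in for the latent code returned by the StyleGAN generator
# (replace with the actual tensor from your model's mapping network).
w_plus = torch.randn(1, 18, 512)

# Persist it so it can be reused later via the --latent_path argument.
torch.save(w_plus, "latent.pt")

# StyleCLIP can then load it back at edit time.
w_loaded = torch.load("latent.pt")
```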
I had this with "a bus with square wheels" with the default parameters and 300 epochs. Nothing really changed.
Also, how do I save the latent code?
May I ask whether your problem has been solved? I have also been studying the combination of Clip and non-face generation model recently.
Questions: