orpatashnik / StyleCLIP

Official Implementation for "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" (ICCV 2021 Oral)
MIT License

Would you be able to share the training params for the style space latent mapper? #70

Open asrielh opened 3 years ago

asrielh commented 3 years ago

Hi developers,

I saw that you updated the repo in August with style space optimization and a style space latent mapper. However, in the sample code and notebooks, only the optimization part exposes the option of choosing style space. I tried training a style space latent mapper with the code provided in coach.py, but the results from style space are far worse than from W+, even when using the sample text prompts given in the repo ("purple hair", "amazed", etc.). I think this is mainly because:

1) I don't know how to set the weight for each loss term; I assume the defaults are tuned for W+ space.
2) There seems to be some inconsistency in the training code regarding style space. For example, I had to modify the config_datasets() function to correctly build the style space latent dataset loader (see the sketch below).

Is it because this part of the code is not fully ready yet?
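For context, this is roughly the kind of dataset wrapper I mean. It is a minimal sketch under my own assumptions: the class name, the file layout, and the idea that the S-space latents are precomputed and saved with torch.save() are all mine, not from the repo.

```python
import torch
from torch.utils.data import Dataset, DataLoader


class StyleSpaceLatentsDataset(Dataset):
    """Wraps precomputed S-space latents for the mapper's data loader.

    Unlike a W+ latent (a single [18, 512] tensor per sample), a style
    space code is a list of per-layer tensors with different widths, so
    the existing W+ LatentsDataset cannot be reused as-is.
    """

    def __init__(self, latents_path, device="cpu"):
        # Expects a file saved with torch.save() containing a list of
        # samples, each sample being a list of per-layer style tensors.
        self.latents = torch.load(latents_path, map_location=device)

    def __len__(self):
        return len(self.latents)

    def __getitem__(self, index):
        return self.latents[index]


# Hypothetical usage in place of the W+ loader built by config_datasets():
# train_loader = DataLoader(StyleSpaceLatentsDataset("train_s.pt"),
#                           batch_size=1, shuffle=True)
```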

asrielh commented 3 years ago

Actually, even when I played with the official colab (http://colab.research.google.com/github/orpatashnik/StyleCLIP/blob/main/notebooks/optimization_playground.ipynb) using the default parameters with style space on, I could not replicate the results, even with 400 steps. The output image looks literally identical to the input image for "A person with purple hair." I think l2_lambda and id_lambda must need different values for style space.
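In case it helps anyone reproduce this, here is the kind of sweep I would try in the notebook. This is only a sketch: the work_in_stylespace flag name is my reading of the August update, main() stands in for whatever optimization entry point the notebook calls, and the specific values are guesses, not settings confirmed by the authors.

```python
from argparse import Namespace
from itertools import product

base_args = {
    "description": "A person with purple hair",
    "work_in_stylespace": True,  # flag name assumed from the August update
    "step": 400,
}

# S-space edits are far more localized than W+ edits, so the default
# l2/id penalties may simply pin the output to the input; sweep smaller
# values to see when the edit starts to appear.
for l2_lambda, id_lambda in product([0.008, 0.002, 0.0005], [0.005, 0.0]):
    args = Namespace(**base_args, l2_lambda=l2_lambda, id_lambda=id_lambda)
    # result_image = main(args)  # hypothetical optimization entry point
    print(args)
```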

jweihe commented 2 years ago

Hi guys, have you found out why this happens? I met the same problem.