williamyang1991 / StyleGANEX

[ICCV 2023] StyleGANEX: StyleGAN-Based Manipulation Beyond Cropped Aligned Faces

How did you find the editing_w styles for style transfer? #6

Closed. hcl14 closed this issue 1 year ago.

hcl14 commented 1 year ago

I tried to apply my own styles, found through StyleCLIP with shape [18, 512], to the codes variable in the pSp forward function, but they don't seem to work with the hair/age or inversion (after optimization) networks, even though the generator is standard StyleGAN. It seems like first_layer_feats from the encoder suppresses my StyleCLIP edit. However, I see that random styles obtained through the mapping network from 512-dimensional random vectors do work in your example. Can I use StyleCLIP, or somehow obtain my own styles?
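For context, the attempted edit looks roughly like the sketch below. This is a paraphrase, not the repo's actual pSp forward code; the names codes, editing_w, and first_layer_feats follow the question above, and the decoder call is assumed to match the standard StyleGAN2 generator interface.

```python
# Rough sketch of the attempted edit (hypothetical, not the repo's exact code).
# codes:             W+ latent from the encoder, shape [batch, 18, 512]
# editing_w:         StyleCLIP direction, shape [18, 512]
# first_layer_feats: spatial features from the StyleGANEX encoder
codes = codes + editing_w.unsqueeze(0).to(codes.device)  # broadcast over batch

images, _ = self.decoder([codes],
                         input_is_latent=True,
                         first_layer_feature=first_layer_feats)
```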

williamyang1991 commented 1 year ago

The fast-forward hair/age network is trained with the specific hair/age editing_w, so you can only use the provided editing_w with it. For optimization, I tested with editing_w from InterFaceGAN and LowRankGAN, and both work. I think editing_w from StyleCLIP should also work. I don't think it is a problem with first_layer_feats. You can:

  1. apply a higher weight to editing_w, or
  2. feed first_layer_feature=None to pspex.decoder to switch StyleGANEX back to the original StyleGAN, to test whether your StyleCLIP editing_w works on its own (see the sketch after the code link below).

https://github.com/williamyang1991/StyleGANEX/blob/80e9e45f985c14f0ca79a4150da33cfb2e27246b/models/stylegan2/model.py#L592-L596
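Putting the two suggestions together, a minimal sketch might look like the following. Here pspex, w_plus, and editing_w are assumed names, and the decoder arguments other than first_layer_feature follow the usual StyleGAN2 generator interface rather than being confirmed against the linked code.

```python
import torch

# Hypothetical names: pspex is a loaded StyleGANEX model, w_plus is the
# inverted W+ code [1, 18, 512], editing_w is the StyleCLIP direction [18, 512].
edit_strength = 2.0  # suggestion 1: try a larger weight on editing_w
edited_codes = w_plus + edit_strength * editing_w.unsqueeze(0)

# Suggestion 2: pass first_layer_feature=None so the decoder ignores the
# encoder features and behaves like the original StyleGAN; the remaining
# keyword arguments are assumptions based on the standard StyleGAN2 generator.
with torch.no_grad():
    image, _ = pspex.decoder([edited_codes],
                             input_is_latent=True,
                             first_layer_feature=None)
```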