danielroich / PTI

Official Implementation for "Pivotal Tuning for Latent-based editing of Real Images" (ACM TOG 2022) https://arxiv.org/abs/2106.05744
MIT License

how to get more editing directions #15

Closed JK737353 closed 3 years ago

JK737353 commented 3 years ago

Hi, thanks for your great work. I notice that you said the editing directions you uploaded are trained on the pretrained StyleGAN. If I want more editing directions, what should I do? Thank you.

danielroich commented 3 years ago

Hey @JK737353, there are several ways to edit images using StyleGAN after the initial inversion done by PTI. You can use directions from InterfaceGAN, GANSpace, SeFa, StyleCLIP, and many more amazing works.

Creating an editing direction on your own is feasible, but not easy. I would therefore suggest searching the frameworks mentioned above for one that manipulates the latent code the way you need. If you don't find one, you can create a direction yourself, for example with the InterfaceGAN framework.

See https://github.com/orpatashnik/StyleCLIP or https://github.com/genforce/interfacegan for more information about editing frameworks
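Once PTI has inverted an image, all of these frameworks boil down to moving the latent code along a direction vector. A minimal sketch of that step, with illustrative names (`w`, `direction`, `strength` are not part of PTI's actual API):

```python
import numpy as np

def apply_direction(w, direction, strength):
    """Move a latent code along a semantic editing direction.

    w:         latent code, e.g. shape (1, 18, 512) in W+
    direction: editing direction, broadcastable to w's shape
    strength:  signed scalar; larger magnitude means a stronger edit
    """
    return w + strength * direction

# Dummy data standing in for a real inversion and a real direction:
w = np.zeros((1, 18, 512), dtype=np.float32)
direction = np.random.randn(1, 512).astype(np.float32)
direction /= np.linalg.norm(direction)        # unit-normalize
w_edited = apply_direction(w, direction, 3.0)  # broadcasts over the 18 layers
print(w_edited.shape)  # (1, 18, 512)
```

The edited `w_edited` would then be fed through the PTI-tuned generator instead of the original code.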

molo32 commented 3 years ago

Hi @danielroich, can you tell me where to find the InterfaceGAN latents for gender and glasses? The repository only has age, smile, and pose.

danielroich commented 3 years ago

Hey @molo32 , I personally do not possess any more editing directions for InterfaceGAN. You can try to create ones on your own. See https://github.com/genforce/interfacegan for more details.

A different option would be to use other editing techniques, such as GANSpace, SeFa, StyleFlow, and StyleCLIP. Although I did not upload support for SeFa and StyleFlow, I have received messages from different users who combined those methods successfully.
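For reference, InterfaceGAN finds a boundary by fitting a linear classifier on attribute-labeled latent codes and taking the unit normal of the separating hyperplane as the editing direction. A hedged sketch of that idea with toy stand-in data (the labeling step and variable names are illustrative; see the InterfaceGAN repo for the real pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for real data: latent codes (N, 512) plus binary attribute
# labels from an off-the-shelf classifier (e.g. glasses / no glasses).
latents = rng.normal(size=(1000, 512))
labels = (latents[:, 0] > 0).astype(int)  # toy labels for the demo

# Fit a linear SVM; its weight vector is normal to the decision hyperplane.
svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(latents, labels)

boundary = svm.coef_.reshape(1, 512)
boundary /= np.linalg.norm(boundary)  # unit-normalize the direction
print(boundary.shape)  # (1, 512)
```

The resulting `boundary` plays the same role as the `.npy` files shipped in the InterfaceGAN repository.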

shankarsivarajan commented 3 years ago

These (also used here) seem to work well.

molo32 commented 3 years ago

I tried to use the FFHQ boundaries from https://github.com/genforce/interfacegan/tree/master/boundaries. I converted them from .npy to .pt, but it doesn't seem to work: it modifies the face, but very badly. To convert from .npy to .pt I just used np.load and torch.load. Am I missing something?
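In case the file conversion itself is part of the problem: `torch.load` can only read files written by `torch.save`, so a `.npy` boundary has to go through `np.load` and `torch.from_numpy` first. A minimal self-contained sketch (using a dummy array in place of one of the InterfaceGAN files):

```python
import numpy as np
import torch

# Dummy (1, 512) boundary standing in for an InterfaceGAN .npy file.
np.save("boundary.npy", np.random.randn(1, 512).astype(np.float32))

# np.load reads the .npy; torch.load cannot, so convert explicitly:
boundary_np = np.load("boundary.npy")
boundary_pt = torch.from_numpy(boundary_np)
torch.save(boundary_pt, "boundary.pt")

# Later, load it back the PyTorch way:
direction = torch.load("boundary.pt")
print(tuple(direction.shape))  # (1, 512)
```

Note this only fixes the serialization format, not any mismatch between the boundaries and the generator they were trained for.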

danielroich commented 3 years ago

Hey @molo32, the reason this does not work is that those boundaries were found for StyleGAN1. This project uses StyleGAN2, which has a completely different latent space than StyleGAN1, and hence different boundaries.

The boundaries I have uploaded were trained on StyleGAN2, which is why they work.