yuval-alaluf / restyle-encoder

Official Implementation for "ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement" (ICCV 2021) https://arxiv.org/abs/2104.02699
https://yuval-alaluf.github.io/restyle-encoder/
MIT License

Edit ReStyle’s inversion by InterFaceGAN #28

Closed onefish51 closed 3 years ago

onefish51 commented 3 years ago

In Figure 8 of the ReStyle paper, you show ReStyle inversions edited with InterFaceGAN.

Did you edit the latent code in the w+ latent space using the age boundary that InterFaceGAN trained in the w latent space?

For example:

# Load InterFaceGAN's age boundary (trained in the w space)
latent_codes = np.load('boundaries/stylegan_ffhq_age_w_boundary.npy')
ws = torch.from_numpy(latent_codes).type(torch.FloatTensor)
ws = ws.to(self.run_device)
# Map w to w+ through the model's truncation module
wps = self.model.truncation(ws)
results['wp'] = self.get_value(wps)

Or did you train a new age boundary in the w+ latent space with InterFaceGAN?

I want to edit real images with ReStyle and InterFaceGAN. I think your inversion method is better than their method!
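For context, applying a single w-space direction to a w+ inversion amounts to adding the direction to every style layer. A minimal sketch in NumPy (the function name and the random toy data are illustrative, not code from either repo):

```python
import numpy as np

def edit_wplus_with_w_boundary(w_plus, boundary, strength):
    """Shift every style layer of a w+ code along a w-space boundary.

    w_plus:   (num_layers, 512) latent code, e.g. 18 layers for 1024px StyleGAN
    boundary: (1, 512) direction from InterFaceGAN
    strength: scalar edit magnitude (e.g. -3 .. +3 for age)
    """
    boundary = boundary / np.linalg.norm(boundary)  # unit norm keeps strength meaningful
    return w_plus + strength * boundary             # broadcasts over the layer axis

# toy stand-ins for a real inversion and a real boundary file
rng = np.random.default_rng(0)
w_plus = rng.standard_normal((18, 512)).astype(np.float32)
boundary = rng.standard_normal((1, 512)).astype(np.float32)
edited = edit_wplus_with_w_boundary(w_plus, boundary, strength=-2.6)
print(edited.shape)  # (18, 512)
```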

onefish51 commented 3 years ago

Yesterday I ran a test editing a latent code with InterFaceGAN, both in the InterFaceGAN code and in ReStyle.

InterFaceGAN:

the image InterFaceGAN generates without editing: images

the image InterFaceGAN generates with the age edit from 'boundaries/stylegan_ffhq_age_w_boundary.npy': new_images

ReStyle:

the input image (generated by InterFaceGAN, without editing): images

the ReStyle inversion using restyle_psp_ffhq_encode.pt: inversion

the output ReStyle generates with the age edit: age

the method used to edit age in ReStyle with InterFaceGAN (in psp.py, class pSp):

# Load and scale InterFaceGAN's age boundary (w space)
bound_dir = 'boundaries/stylegan_ffhq_age_w_boundary.npy'
boundaries = np.load(bound_dir)
age_attr = -2.6  #@param {type:"slider", min:-3.0, max:3.0, step:0.1}
boundaries = boundaries * age_attr
latent_codes = torch.from_numpy(boundaries).type(torch.FloatTensor)
ws = latent_codes.to(self.opts.device)
print('ws.size =', ws.size())

# Lift the w direction to w+ through a truncation module
truncation = TruncationModule(resolution=1024,
                              w_space_dim=512,
                              truncation_psi=0.7,
                              truncation_layers=8)
truncation = truncation.to(self.opts.device)
wps = truncation(ws)

# codes is the w+ output of the ReStyle encoder
print('codes.size =', codes.size())
print('wps.size =', wps.size())

codes = codes + wps

Is there anything wrong?

yuval-alaluf commented 3 years ago

To answer your first question: to edit the images, we used the boundary direction in w. However, the official InterFaceGAN implementation finds directions for StyleGAN1, whereas we use StyleGAN2, so the directions provided in InterFaceGAN's repo will not be useful for editing with ReStyle (I think this may be why you don't see meaningful edits here). Instead, we used their implementation to find new directions for StyleGAN2. You can find some directions in the following repo: https://github.com/omertov/encoder4editing/tree/main/editings
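Once you have a ReStyle w+ inversion and a StyleGAN2 direction from that repo, the edit itself is a linear shift, typically swept over several strengths. A sketch (`sweep_edit` and the toy data are illustrative, not code from either repo):

```python
import numpy as np

def sweep_edit(latent, direction, factors):
    """Apply a latent direction at several strengths, as in an editing strip.

    latent:    (num_layers, 512) inverted w+ code
    direction: (1, 512) or (num_layers, 512) editing direction
    factors:   iterable of scalar edit strengths
    """
    return np.stack([latent + f * direction for f in factors])

# toy stand-ins for a real inversion and a real direction file
rng = np.random.default_rng(1)
latent = rng.standard_normal((18, 512))
direction = rng.standard_normal((1, 512))
edits = sweep_edit(latent, direction, factors=[-3, 0, 3])
print(edits.shape)  # (3, 18, 512)
```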

Then, to edit the image, we used the following class, which you may find useful as well: https://github.com/omertov/encoder4editing/blob/main/editings/latent_editor.py For example, you will notice the following function: https://github.com/omertov/encoder4editing/blob/2795aa93714e03ce4d8b70d4d803e4718f5c63d4/editings/latent_editor.py#L18
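The idea behind that function can be sketched in a few lines. This is a self-contained re-implementation of the concept only; the signature and the strip-of-edits behavior are assumptions based on the thread, not the actual code from the linked file:

```python
import numpy as np

def apply_interfacegan(latent, direction, factor=1, factor_range=None):
    """Shift a latent along a direction (sketch of the linked helper's idea).

    With factor_range set, return one edited latent per integer strength in
    the range, mirroring how an editor produces an editing strip.
    """
    if factor_range is not None:
        lo, hi = factor_range
        return [latent + f * direction for f in range(lo, hi)]
    return latent + factor * direction

# toy stand-ins for a real inversion and a real direction
rng = np.random.default_rng(0)
latent = rng.standard_normal((18, 512))
age_direction = rng.standard_normal((1, 512))
strip = apply_interfacegan(latent, age_direction, factor_range=(-3, 3))
print(len(strip))  # 6 edited latents, strengths -3 .. +2
```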

Let me know if you have any additional questions

onefish51 commented 3 years ago

I see, pretty good! How did you get the boundary (e.g., age.pt)? Did you train boundaries for StyleGAN2 with InterFaceGAN?

yuval-alaluf commented 3 years ago

Did you train StyleGAN2 by InterFaceGAN ?

Exactly. We used the official InterFaceGAN repo and made some small changes to support running on StyleGAN2.
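For reference, InterFaceGAN obtains a boundary by fitting a linear SVM to attribute-labeled latent codes and taking the unit normal of the separating hyperplane. A dependency-free sketch, with a least-squares linear classifier standing in for the SVM and synthetic data in place of real labeled latents (not the repo's code):

```python
import numpy as np

def train_boundary(latents, labels):
    """Fit a linear separator in latent space and return its unit normal.

    InterFaceGAN does this with a linear SVM on attribute-labeled latents;
    a least-squares linear classifier stands in here so the sketch needs
    only NumPy. latents: (N, 512), labels: (N,) in {-1, +1}.
    """
    coef, *_ = np.linalg.lstsq(latents, labels.astype(np.float64), rcond=None)
    return (coef / np.linalg.norm(coef)).reshape(1, -1)

# synthetic demo: labels determined by a known hidden direction
rng = np.random.default_rng(2)
true_dir = rng.standard_normal(512)
true_dir /= np.linalg.norm(true_dir)
X = rng.standard_normal((2000, 512))
y = np.sign(X @ true_dir)
boundary = train_boundary(X, y)
print((boundary @ true_dir).item())  # cosine similarity, close to 1
```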

onefish51 commented 3 years ago

Did you train StyleGAN2 by InterFaceGAN ?

Exactly. We used the official InterFaceGAN repo and made some small changes to support running on StyleGAN2.

OK, thank you! I will try it!