onefish51 closed this issue 3 years ago
Yesterday I ran a test editing latent codes with InterFaceGAN (using the InterFaceGAN code) and ReStyle.
The image InterFaceGAN generates without editing:
The image InterFaceGAN generates with an age edit using `boundaries/stylegan_ffhq_age_w_boundary.npy`:
The input image that InterFaceGAN generated without editing:
The ReStyle inversion produced with `restyle_psp_ffhq_encode.pt`:
The output ReStyle generates with the age edit:
The method I used to edit age in ReStyle with InterFaceGAN (added in `psp.py`, class `pSp`):
```python
# inside pSp.forward(), after the encoder has produced `codes`
bound_dir = 'boundaries/stylegan_ffhq_age_w_boundary.npy'
boundaries = np.load(bound_dir)            # age boundary in W space
age_attr = -2.6  #@param {type:"slider", min:-3.0, max:3.0, step:0.1}
boundaries = boundaries * age_attr         # scale the edit strength
latent_codes = torch.from_numpy(boundaries).type(torch.FloatTensor)
ws = latent_codes.to(self.opts.device)
print('ws.size =', ws.size())

# lift the W-space direction to W+ via the truncation trick
truncation = TruncationModule(resolution=1024,
                              w_space_dim=512,
                              truncation_psi=0.7,
                              truncation_layers=8)
truncation = truncation.to(self.opts.device)
wps = truncation(ws)
print('codes.size =', codes.size())        # `codes` is the encoder output
print('wps.size =', wps.size())
codes = codes + wps                        # apply the edit in W+
```
Is there anything wrong with this?
To answer your first question: to edit the images, we used the boundary direction in W. However, the official InterFaceGAN implementation finds directions for StyleGAN1, whereas we use StyleGAN2, so the directions provided in InterFaceGAN's repo will not be useful if you want to edit with ReStyle (I think this may be why you don't see meaningful edits here). Instead, we used their implementation to find new directions for StyleGAN2. You can find some directions in the following repo: https://github.com/omertov/encoder4editing/tree/main/editings
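As a rough sketch of loading and scaling such a direction (the path below is an assumption about that repo's layout, so verify it locally; here a random unit vector stands in for the real boundary so the snippet is runnable):

```python
import torch

# In practice you would load the real StyleGAN2 boundary, e.g.:
#   direction = torch.load("editings/interfacegan_directions/age.pt")  # path is an assumption
# A random unit vector stands in here so the snippet runs without the file.
direction = torch.randn(1, 512)
direction = direction / direction.norm()

age_strength = -2.6  # same slider value as in the question above
delta = age_strength * direction  # offset to add to the inverted latent code
print(tuple(delta.shape))  # → (1, 512)
```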
Then to edit the image, we used the following class which you may find useful as well: https://github.com/omertov/encoder4editing/blob/main/editings/latent_editor.py For example, you will notice the following function: https://github.com/omertov/encoder4editing/blob/2795aa93714e03ce4d8b70d4d803e4718f5c63d4/editings/latent_editor.py#L18 where:
- `latent` is the W+ latent code you wish to edit
- `direction` is the direction of the edit (e.g., `direction = np.load("path/to/boundary")`)
- `factor_range` is the range of edit strengths you want to apply (e.g., `(-5, 5)`)

Let me know if you have any additional questions.
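The edit those parameters describe is just a linear walk in latent space. A minimal sketch (the function name and shapes are illustrative, not the repo's exact API; it assumes a W+ latent of shape `(1, 18, 512)` and a W-space direction of shape `(1, 512)` that broadcasts over the 18 styles):

```python
import torch

def edit_latent(latent, direction, factor_range=(-5, 5), steps=5):
    """Apply the linear edit `latent + f * direction` for several strengths f.

    latent:    W+ code, shape (1, 18, 512)
    direction: edit direction, shape (1, 512); broadcasts over the 18 styles
    Returns one edited latent per strength in the sweep.
    """
    factors = torch.linspace(float(factor_range[0]), float(factor_range[1]), steps)
    return [latent + f * direction for f in factors]

# usage: sweep the edit strength across the range
latent = torch.zeros(1, 18, 512)
direction = torch.randn(1, 512)
edits = edit_latent(latent, direction)
print(len(edits), tuple(edits[0].shape))  # → 5 (1, 18, 512)
```

Each edited latent would then be fed to the StyleGAN2 generator to render the edited image.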
I see, pretty good! How did you get the boundary (e.g., `age.pt`)? Did you train StyleGAN2 boundaries with InterFaceGAN?
Did you train StyleGAN2 by InterFaceGAN ?
Exactly. We used the official InterFaceGAN repo and made some small changes to support running on StyleGAN2.
OK, thank you! I'll give it a try!
In Figure 8 of the ReStyle paper, you showed edits of ReStyle's inversions made with InterFaceGAN.
Did you edit the latent code in the W+ latent space using the age boundary that InterFaceGAN trained in the W latent space? For example:
Or did you train a new age boundary in the W+ latent space with InterFaceGAN?
I want to edit real images with ReStyle and InterFaceGAN. I think your inversion method is better than theirs!