eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

How can I get only the latent vector? #228

Closed dongyun-kim-arch closed 2 years ago

dongyun-kim-arch commented 3 years ago

Hello!

I would like to ask how I can get only the latent vector of an input image when running the StyleGAN encoder. Is there a way to run just the encoder part to extract the latent vector?

Thanks!

yuval-alaluf commented 3 years ago

This is probably easiest to do in `inference.py`. Currently, we only save the output image, but you can add a bit of code to save the latent code of each input image. For example, in `run_on_batch`, you can add:

```python
def run_on_batch(inputs, net, opts):
    # return_latents=True makes the network also return the predicted latent codes
    result_batch, result_latents = net(inputs, randomize_noise=False, return_latents=True, resize=opts.resize_outputs)
    return result_batch, result_latents
```

Then, we can add:

```python
# modify the call to `run_on_batch`
result_batch, result_latents = run_on_batch(input_cuda, net, opts)
...
# when going over each test sample in the batch, extract its latent code
for i in range(opts.test_batch_size):
    # some code
    latent = result_latents[i]
    # some code
    # save the latent code
```

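If it helps, here is a rough sketch of what that loop could look like with the latent codes written to disk as `.npy` files. This reuses the variables already available in the existing loop in `inference.py` (`input_cuda`, `dataset`, `global_i`, `test_opts`), while names such as `out_latents_dir` are just illustrative, not part of the current script:

```python
import os
import numpy as np

# hypothetical folder for the saved latent codes, next to the other inference outputs
out_latents_dir = os.path.join(test_opts.exp_dir, 'latents')
os.makedirs(out_latents_dir, exist_ok=True)

result_batch, result_latents = run_on_batch(input_cuda, net, opts)

for i in range(opts.test_batch_size):
    # path of the current test image, as used elsewhere in the inference loop
    im_path = dataset.paths[global_i]

    # latent code of this sample, e.g. shape [18, 512] for a 1024x1024 generator
    latent = result_latents[i].detach().cpu().numpy()

    # save the latent code keyed by the input image name
    im_name = os.path.splitext(os.path.basename(im_path))[0]
    np.save(os.path.join(out_latents_dir, f'{im_name}.npy'), latent)
```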
My inference script in ReStyle has something similar, so feel free to take a look there: https://github.com/yuval-alaluf/restyle-encoder/blob/main/scripts/inference_iterative.py
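And if you later want to turn a saved code back into an image without re-running the encoder, something along these lines should work. This is a minimal sketch, assuming a W+ code of shape [18, 512] saved as above (the file path is just an example); since `net.decoder` is the StyleGAN2 generator, you can feed it the code directly with `input_is_latent=True`:

```python
import numpy as np
import torch

# load a previously saved W+ latent code (example path) and add a batch dimension
latent = torch.from_numpy(np.load('latents/my_image.npy')).unsqueeze(0).float().cuda()

# decode it with the generator directly, skipping the encoder
with torch.no_grad():
    images, _ = net.decoder([latent], input_is_latent=True, randomize_noise=False)
```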