dongyun-kim-arch closed this issue 2 years ago
This is probably easiest to do in `inference.py`. Currently, we only save the output image, but you can add a bit of code for saving the latent code of each input image. For example, in `run_on_batch`, you can add:
```python
def run_on_batch(inputs, net, opts):
    result_batch, result_latent = net(inputs, randomize_noise=False, return_latents=True, resize=opts.resize_outputs)
    return result_batch, result_latent
```
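(Side note, not stated in the thread: for the standard 1024×1024 StyleGAN2 generator, `result_latent` will typically be a tensor of shape `[batch_size, 18, 512]`, i.e. one W+ code per input image; lower-resolution generators use fewer than 18 style vectors.)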
Then, we can add:
```python
# modify the call to `run_on_batch`
result_batch, result_latent = run_on_batch(input_cuda, net, opts)
...
# when going over each test sample in the batch, extract its latent code
for i in range(opts.test_batch_size):
    # some code
    latent = result_latent[i]
    # some code
    # save the latent code
```
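For the actual saving step, something like the following should work (a minimal sketch, not code from the repo: `latents_dir` and `im_name` are hypothetical names, it assumes `latent` is the per-image tensor extracted in the loop above, and that your test options expose an output directory such as `opts.exp_dir`):

```python
import os

import numpy as np

# hypothetical folder for the extracted codes, next to the other inference outputs
latents_dir = os.path.join(opts.exp_dir, 'latents')
os.makedirs(latents_dir, exist_ok=True)

# detach the per-image code from the graph, move it to the CPU,
# and store it as a .npy file named after the input image
np.save(os.path.join(latents_dir, f'{im_name}.npy'), latent.detach().cpu().numpy())
```

The saved codes can later be reloaded with `np.load` (and converted back to a tensor with `torch.from_numpy`) for editing or re-synthesis.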
My inference script in ReStyle has something similar, so feel free to take a look there: https://github.com/yuval-alaluf/restyle-encoder/blob/main/scripts/inference_iterative.py
Hello!
I would like to ask how I can get only the latent vector of an input image when running the StyleGAN encoder. Is there a way to run only the encoder part to extract the latent vector?
Thanks!