omertov / encoder4editing

Official implementation of "Designing an Encoder for StyleGAN Image Manipulation" (SIGGRAPH 2021) https://arxiv.org/abs/2102.02766
MIT License

Inversion doesn't look like the face of the source image #2

Closed molo32 closed 3 years ago

molo32 commented 3 years ago

The inversion doesn't look like the face of the source image. How can I make it look more like the source image?

omertov commented 3 years ago

Hi @molo32, can you provide further details? Have you performed the required face alignment?
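
For reference, the alignment step used by the inference notebook is dlib-based. Below is a minimal sketch, assuming the dlib 68-landmark predictor file has been downloaded locally and using the repo's utils/alignment.py helper; the predictor file path is an assumption and should point at your local copy.

import dlib
from utils.alignment import align_face

def run_alignment(image_path):
    # Path to the dlib landmark model is an assumption; point it at your local copy.
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    # Crop and align the face the same way the FFHQ training data was prepared
    aligned_image = align_face(filepath=image_path, predictor=predictor)
    print("Aligned image has shape: {}".format(aligned_image.size))
    return aligned_image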

molo32 commented 3 years ago

# Note: run_alignment, EXPERIMENT_ARGS, net, experiment_type, resize_dims
# and tensor2im are defined in the setup cells of the official inference notebook.
import time

import numpy as np
import torch
from PIL import Image

image_path = "/content/8.jpg"
original_image = Image.open(image_path)
original_image = original_image.convert("RGB")
# Align and crop the face as required by the FFHQ-trained encoder
input_image = run_alignment(image_path)

def run_on_batch(inputs, net):
    # Encode the batch and return both the reconstructions and the latent codes
    images, latents = net(inputs.to("cuda").float(), randomize_noise=False, return_latents=True)
    if experiment_type == 'cars_encode':
        images = images[:, :, 32:224, :]
    return images, latents

def display_alongside_source_image(result_image, source_image):
    # Concatenate source and reconstruction side by side for visual comparison
    res = np.concatenate([np.array(source_image.resize(resize_dims)),
                          np.array(result_image.resize(resize_dims))], axis=1)
    return Image.fromarray(res)

input_image.resize(resize_dims)  # in the Colab notebook this cell just displays the resized input
img_transforms = EXPERIMENT_ARGS['transform']
transformed_image = img_transforms(input_image)
with torch.no_grad():
    tic = time.time()
    images, latents = run_on_batch(transformed_image.unsqueeze(0), net)
    result_image, latent = images[0], latents[0]
    toc = time.time()
    print('Inference took {:.4f} seconds.'.format(toc - tic))
# Display inversion:
display_alongside_source_image(tensor2im(result_image), input_image)

(attached: screenshot of the inversion result)

omertov commented 3 years ago

It seems like you ran our encoder correctly.

Generally speaking, our pretrained e4e encoder is specifically designed to balance the tradeoffs that exist in StyleGAN's latent space (see our paper for further details and examples). By doing so, we give up some reconstruction accuracy in exchange for more editable latent codes (codes that can be better used by existing latent-space manipulation techniques, StyleFlow for example) compared to other inversion methods.

If exact reconstruction is what you seek, direct optimization will always yield the best results. Alternatively, you can control the tradeoff yourself according to your needs: for example, you can train the encoder to favor reconstruction over editability by not using the latent codes discriminator or by tuning the progressive training parameters.
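
To illustrate the direct optimization route, here is a minimal sketch that refines the e4e latent from the snippet above by optimizing it against the input image. It assumes net.decoder is the rosinality-style StyleGAN2 generator loaded with the model, that latent and transformed_image come from the snippet above, and that the lpips package is installed; the loss weights and step count are illustrative, not prescribed settings.

import lpips
import torch
import torch.nn.functional as F

percept = lpips.LPIPS(net='vgg').to("cuda")
target = transformed_image.unsqueeze(0).to("cuda").float()  # 256x256 input in [-1, 1]

w = latent.detach().clone().unsqueeze(0).to("cuda")  # start from the e4e latent code
w.requires_grad_(True)
optimizer = torch.optim.Adam([w], lr=0.01)

for step in range(500):
    optimizer.zero_grad()
    # Decode the current latent; the generator returns (image, latent)
    synth, _ = net.decoder([w], input_is_latent=True, randomize_noise=False)
    # Downsample the 1024x1024 output to the target resolution before comparing
    synth = F.interpolate(synth, size=target.shape[-2:], mode='bilinear', align_corners=False)
    loss = percept(synth, target).mean() + 0.1 * F.mse_loss(synth, target)
    loss.backward()
    optimizer.step()

# Decode the optimized latent and compare with the source image as before
with torch.no_grad():
    final_img, _ = net.decoder([w], input_is_latent=True, randomize_noise=False)
display_alongside_source_image(tensor2im(final_img[0]), input_image)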

woctezuma commented 3 years ago

Relevant: https://github.com/rolux/stylegan2encoder/issues/2#issuecomment-570191777 (posted in January 2020)

It took me a while to appreciate the fact that an encoder's output can have high visual quality but bad semantics.

That is the kind of idea you find in the paper: a good inversion is the result of a trade-off between i) perception (visual quality in terms of a realistic output), ii) distortion (visual quality in terms of an output close to the input), and iii) editability (semantics).

If you look at the projected face of Angelina Jolie, you can see that it looks like a human face (perception), it looks somewhat like Angelina Jolie (distortion), and it should hopefully change according to plan if you try to edit it (editability).

Closely related: if you want to get an idea of what to expect from projections as implemented, you can check the results shown in the README of my repository: https://github.com/woctezuma/stylegan2-projecting-images. Basically, the more constrained the projection, the higher the distortion, but the better the output behaves. With encoder4editing, one has access to a smart way to constrain the projection. Plus, the projection is fast.