genforce / interfacegan

[CVPR 2020] Interpreting the Latent Space of GANs for Semantic Face Editing
https://genforce.github.io/interfacegan/
MIT License

How to manipulate real faces? #44

Closed AddASecond closed 4 years ago

AddASecond commented 4 years ago

Dear author, after checking this repository, I found that it does not include the encoder-decoder model that the paper tests in Figure 11. Will this be released in the near future?

ShenYujun commented 4 years ago

Please refer to this repo.

ltnghia commented 4 years ago

Hello, I use LIA to extract latent codes from input images and then use them as inputs for interfacegan. However, I cannot reproduce the original input images from those codes. I used the pre-trained models from both LIA and interfacegan. Do I need to post-process the extracted latent codes? Or can you provide the code and models for Fig. 11? Thank you. My extraction code and editing commands are below:

```python
import numpy as np
import tensorflow as tf  # TF1.x API, as used by the LIA codebase

# load_pkl comes from the LIA codebase; E is the encoder, Gs the generator.
E, _, _, Gs, _ = load_pkl(args.restore_path)
real = tf.placeholder('float32', [None, 3, args.image_size, args.image_size], name='real_image')
encoder_w = E.get_output_for(real, phase=False)
sess = tf.get_default_session()
latent_code = sess.run(encoder_w, feed_dict={real: input_image})
np.save('%s/%s.npy' % (save_dir, im_name), latent_code)
```

```bash
# Edit with the PGGAN (CelebA-HQ) model and its smile boundary.
python edit.py \
    -m pggan_celebahq \
    -b boundaries/pggan_celebahq_smile_boundary.npy \
    -i $INPUT_LATENT_CODES_PATH \
    -o results/from_latent_code/pggan/celebahq/smile_editing

# Edit with the StyleGAN (FFHQ) model and its smile boundary.
python edit.py \
    -m stylegan_ffhq \
    -b boundaries/stylegan_ffhq_smile_boundary.npy \
    -i $INPUT_LATENT_CODES_PATH \
    -o results/from_latent_code/stylegan/ffhq/smile_editing
```
ShenYujun commented 4 years ago

You are using the official PGGAN (pggan_celebahq) and StyleGAN (stylegan_ffhq) models for editing, but LIA uses its own model for inversion. Please find the boundaries for the LIA model and use them for manipulation. This repo only contains the boundaries and the editing script for PGGAN and StyleGAN.
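
For reference, finding a boundary in the LIA latent space works the same way as for PGGAN and StyleGAN. A minimal sketch using `train_boundary()` from `utils/manipulator.py`, assuming you have sampled codes from the LIA latent space and scored the corresponding synthesized images with your own attribute classifier (the `.npy` file names here are hypothetical):

```python
import numpy as np

from utils.manipulator import train_boundary  # helper shipped in this repo

# Hypothetical inputs: codes sampled from the LIA latent space, plus smile
# scores predicted on the images synthesized from those codes.
latent_codes = np.load('lia_sampled_codes.npy')  # shape [N, latent_dim]
scores = np.load('lia_smile_scores.npy')         # shape [N, 1]

# Fit a linear SVM separating high-score from low-score samples; the unit
# normal of the decision hyperplane is the semantic boundary.
boundary = train_boundary(latent_codes=latent_codes, scores=scores)
np.save('boundaries/lia_smile_boundary.npy', boundary)
```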

ltnghia commented 4 years ago

Can you give some guidance on how to reproduce the results in Fig. 11? Thank you.

ShenYujun commented 4 years ago

1. Find your own boundaries with the LIA model (this process is exactly the same as for PGGAN and StyleGAN).
2. Invert real images with the LIA encoder to get the inverted codes.
3. Move the inverted codes towards the boundaries (use `utils/manipulator.py`; see the sketch after this list).
4. Use the LIA generator to produce images from the manipulated codes.
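
Putting steps 2–4 together: a minimal sketch, assuming the extraction snippet above has saved an inverted code and that `linear_interpolate()` in `utils/manipulator.py` keeps its `(latent_code, boundary, start_distance, end_distance, steps)` interface; step 4 is left as a comment because it depends on the exact LIA generator API:

```python
import numpy as np

from utils.manipulator import linear_interpolate  # helper shipped in this repo

# Step 2: the inverted code saved by the LIA encoder snippet above
# (hypothetical file name).
latent_code = np.load('inverted_code.npy')               # shape [1, latent_dim]
# Step 1's output: a boundary trained in the LIA latent space.
boundary = np.load('boundaries/lia_smile_boundary.npy')  # shape [1, latent_dim]

# Step 3: move the code along the boundary's normal direction.
manipulated_codes = linear_interpolate(latent_code,
                                       boundary,
                                       start_distance=-3.0,
                                       end_distance=3.0,
                                       steps=7)          # shape [7, latent_dim]

# Step 4: feed each row of manipulated_codes to the LIA generator (see the
# LIA repo for the exact generator interface) to synthesize the edited images.
```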