Hello, I use LIA to extract latent codes from input images and then feed them to InterFaceGAN as inputs. However, the generated results do not correspond to the same input images used with LIA. I used the pre-trained models from both LIA and InterFaceGAN. Do I need to further process the extracted latent codes? Or could you provide the code and models used for Fig. 11? Thank you.
import numpy as np
import tensorflow as tf

# load_pkl comes from the LIA codebase; it returns the pre-trained networks (encoder E, generator Gs, ...).
E, _, _, Gs, _ = load_pkl(args.restore_path)
real = tf.placeholder('float32', [None, 3, args.image_size, args.image_size], name='real_image')
encoder_w = E.get_output_for(real, phase=False)
sess = tf.get_default_session()  # assumes a session was created earlier (e.g. via LIA's tflib.init_tf())
# input_image: an NCHW float32 batch preprocessed the same way as in LIA training.
latent_code = sess.run(encoder_w, feed_dict={real: input_image})
np.save('%s/%s.npy' % (save_dir, im_name), latent_code)
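For reference, here is a minimal sketch of how an input image could be turned into the NCHW float32 batch that the real placeholder expects. The [-1, 1] scaling and the load_image_as_batch helper (and image_path) are assumptions for illustration; the actual normalization has to match whatever preprocessing LIA used during training.

from PIL import Image
import numpy as np

# Hypothetical helper: load one image as an NCHW float32 batch for the
# `real` placeholder. The [-1, 1] scaling is an assumption and must match
# LIA's own training-time preprocessing.
def load_image_as_batch(image_path, image_size):
    img = Image.open(image_path).convert('RGB').resize((image_size, image_size))
    arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0  # HWC in [-1, 1]
    arr = arr.transpose(2, 0, 1)                            # HWC -> CHW
    return arr[np.newaxis]                                   # [1, 3, H, W]

input_image = load_image_as_batch(image_path, args.image_size)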
python edit.py \
-m pggan_celebahq \
-b boundaries/pggan_celebahq_smile_boundary.npy \
-i $INPUT_LATENT_CODES_PATH \
-o results/from_latent_code/pggan/celebahq/smile_editing
python edit.py \
-m stylegan_ffhq \
-b boundaries/stylegan_ffhq_smile_boundary.npy \
-i $INPUT_LATENT_CODES_PATH \
-o results/from_latent_code/stylegan/ffhq/smile_editing
You are using the official PGGAN (pggan_celebahq) and StyleGAN (stylegan_ffhq) models for editing, but LIA uses its own model for inversion. Please find the boundaries for the LIA model and use them for manipulation. This repo only contains the boundaries and the editing script for PGGAN and StyleGAN.
Can you give guidance on how to reproduce the results in Fig. 11? Thank you.
utils/manipulator.py
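For what it's worth, here is a rough sketch of how boundaries for the LIA latent space might be trained and applied with the helpers in utils/manipulator.py, assuming train_boundary() accepts latent codes plus per-image attribute scores and linear_interpolate() accepts a single latent code plus a boundary (please check the actual signatures in the repo). The lia_latent_codes.npy and lia_smile_scores.npy files are assumptions, not something this repository provides.

import numpy as np
from utils.manipulator import train_boundary, linear_interpolate

# Latent codes produced by the LIA encoder (see the snippet above) and
# attribute scores predicted for the same images, e.g. by a smile classifier.
# Both files are hypothetical and have to be prepared by the user.
lia_latent_codes = np.load('lia_latent_codes.npy')   # shape: [N, latent_dim]
smile_scores = np.load('lia_smile_scores.npy')       # shape: [N, 1]

# Fit a separating hyperplane in the LIA latent space.
boundary = train_boundary(lia_latent_codes, smile_scores)
np.save('boundaries/lia_smile_boundary.npy', boundary)

# Move one inverted code along the boundary (default interpolation range/steps);
# the edited codes then need to be decoded with the LIA generator,
# not with pggan_celebahq / stylegan_ffhq.
edited_codes = linear_interpolate(lia_latent_codes[:1], boundary)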
Dear author, after checking this repository, I have found that it does not include the encoder-decoder model that the paper tests in Figure 11. Will this be released in the near future?