eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Generating images from random latent vectors #64

Closed · genawass closed this 3 years ago

genawass commented 3 years ago

Hi, I want to generate images from random latent vectors, as is done in StyleGAN2, without the encoding step. How can this be done with pSp? Thanks

yuval-alaluf commented 3 years ago

Please see issue https://github.com/eladrich/pixel2style2pixel/issues/12

genawass commented 3 years ago

> Please see issue #12

Yeah, I see. But if I want to use previously projected latent vectors, they are 8x512 and not 1x512 as appears in:

```python
import numpy as np
import torch

n_images_to_generate = 10
generated_images = []
for _ in range(n_images_to_generate):
    # Sample a random 1x512 z vector and let pSp map it to an image
    random_vec = np.random.randn(1, 512).astype('float32')
    random_image, _ = net(torch.from_numpy(random_vec).to("cuda"),
                          input_code=True, return_latents=True)
    generated_images.append(random_image)
```

yuval-alaluf commented 3 years ago

If you mean 18x512, then this is similar to simply calling the forward function of pSp without passing through self.encoder first. Then, the codes should simply be passed through self.decoder to generate the image.
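For reference, a minimal sketch of that path, assuming `net` is a loaded pSp model whose `decoder` is the standard StyleGAN2 generator, and that the previously projected W+ codes are stored in a hypothetical `projected_latents.pt` file with shape [N, 18, 512]:

```python
import torch

# Sketch only: decode previously projected W+ latents directly with pSp's
# StyleGAN2 decoder, skipping the encoder entirely.
# 'projected_latents.pt' is a hypothetical path; `net` is an already-loaded pSp model.
latents = torch.load('projected_latents.pt').to('cuda').float()  # [N, 18, 512]

with torch.no_grad():
    images, _ = net.decoder([latents],
                            input_is_latent=True,   # codes are already in W+ space
                            randomize_noise=False,
                            return_latents=False)
```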

genawass commented 3 years ago

OK, got it, thanks.