Closed genawass closed 3 years ago
Please see issue https://github.com/eladrich/pixel2style2pixel/issues/12
Yeah, I see. But if I want to use previously projected latent vectors, they are 18x512 and not 1x512 as they appear in:

```python
n_images_to_generate = 10
generated_images = []
for _ in range(n_images_to_generate):
    random_vec = np.random.randn(1, 512).astype('float32')
    random_image, _ = net(torch.from_numpy(random_vec).to("cuda"),
                          input_code=True, return_latents=True)
    generated_images.append(random_image)
```
If you mean 18x512, then this is similar to simply calling the `forward` function of pSp without passing through `self.encoder` first. Then, the codes should simply be passed through `self.decoder` to generate the image.
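A minimal sketch of the shape handling this implies (using random arrays as stand-ins for actual latents; the variable names here are illustrative, not from the pSp codebase). A Z-space sample is a single 512-dim vector per image, while a previously projected W+ latent holds one 512-dim code per style layer (18 layers for a 1024x1024 StyleGAN2 generator), so it needs a batch dimension before being fed back through the forward pass:

```python
import numpy as np

# Z-space input, as in the random-generation loop above: one vector per image
random_vec = np.random.randn(1, 512).astype('float32')      # shape (1, 512)

# A previously projected W+ latent: one code per style layer
wplus = np.random.randn(18, 512).astype('float32')          # shape (18, 512)

# Add a batch dimension before passing it as `codes` to the decoder path
codes = wplus[None, ...]                                    # shape (1, 18, 512)
```

With `input_code=True` and codes of this batched shape, the encoder step is skipped and the codes go straight to the StyleGAN2 decoder.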
OK, got it, thanks.
Hi, I want to generate images from random latent vectors as is done in stylegan2, without the encoding step. How can this be done with pSp? Thanks