Closed · loboere closed this issue 3 years ago
Hi @loboere. Yes, it is possible.
You missed the mapping network. The synthesis network does not accept the z vector sampled from the normal distribution. That vector is first passed through the mapping network, which outputs a vector in W space, and this W vector is then passed to the synthesis network.
The code you are interested in is:

import torch

z = torch.randn([1, new_G.z_dim]).cuda()                              # sample a latent code in Z space
w = new_G.mapping(z, None, truncation_psi=0.5, truncation_cutoff=8)   # map Z to W (one w per synthesis layer)
image = new_G.synthesis(w, noise_mode='const', force_fp32=True)       # run the synthesis network on W
plot_syn_images([image])
where new_G is the generator produced by PTI.
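If you also want the result to be reproducible from an integer seed, as in NVIDIA's generate.py script, you can seed the Z sample before the mapping step. Below is a minimal sketch; generate_from_seed is just an illustrative helper name, and it assumes the StyleGAN2-ADA-style generator interface that PTI uses:

import numpy as np
import torch

def generate_from_seed(G, seed, truncation_psi=0.5):
    # Draw Z deterministically from the seed (same convention as NVIDIA's generate.py)
    z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).float().cuda()
    # Map Z to W before calling the synthesis network
    w = G.mapping(z, None, truncation_psi=truncation_psi, truncation_cutoff=8)
    img = G.synthesis(w, noise_mode='const', force_fp32=True)
    # Convert the [-1, 1] float tensor to a uint8 HWC image array
    img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
    return img[0].cpu().numpy()

face = generate_from_seed(new_G, seed=42)

The same seed will always reproduce the same face, since only the deterministic mapping and synthesis networks run after the seeded draw.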
Hope this helps, Daniel
works! Thank you
Amazing work. I would like to know if it is possible to generate random faces from a seed, like NVIDIA's StyleGAN. I tried to do this, but the generated images are full of artifacts:
new_G.synthesis(torch.from_numpy(np.random.rand(1, 18, 512)).float().to("cuda"), noise_mode='const')