Open hyojk2001 opened 1 year ago
Hi, thanks for your interest! For the first question, we will integrate 3D cartoonization very soon. For the second, sure, you can drive your customized images as we show in the demos. You can first perform inversion to map the input image into the latent space of our model and then run the reenact scripts.
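For readers unsure what the inversion step means in principle: below is a minimal sketch of optimization-based inversion, using a toy linear "generator" in place of the real model (all shapes, names, and hyperparameters here are illustrative assumptions, not the repo's actual inversion script). The idea is to find the latent `z` whose generated output reproduces a target by gradient descent on a reconstruction loss; the real scripts do the same with a neural generator and richer losses.

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(64, 8)           # stand-in "generator": x = A @ z (hypothetical)
z_true = rng.randn(8)
x_target = A @ z_true          # the "input image" we want to invert

z = np.zeros(8)                # initial latent guess
lr = 1e-3
for _ in range(2000):
    residual = A @ z - x_target
    grad = 2 * A.T @ residual  # gradient of ||A z - x_target||^2 w.r.t. z
    z -= lr * grad

# After optimization, z should closely match the latent that produced the target.
print(np.allclose(z, z_true, atol=1e-3))
```

Once a latent like this is recovered for a real image, it can be fed to the generator in place of a randomly sampled code.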
Thank you for answering! If I map an input image into the latent space of the model, which modules should I change?
```python
# Generate images.
for seed_idx, seed in enumerate(seeds):
    print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))
    z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)  # <-- this line
```
I think this part has to be changed, right? But that variable is a 512-dimensional tensor and is normalized.
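The change the snippet above calls for can be sketched as swapping the seeded random sample for a latent saved by the inversion step. A plain numpy round trip illustrating this, assuming inversion produced a `(1, 512)` array matching `G.z_dim`; the file name `inverted_latent.npy` is hypothetical, and an in-memory buffer stands in for it here:

```python
import io
import numpy as np

# Stand-in for the output of an inversion run: a (1, 512) latent matching
# G.z_dim = 512 (an assumption; the real inversion script defines its own format).
z_inverted = np.random.RandomState(0).randn(1, 512).astype(np.float32)

# Persist and reload the latent (a BytesIO buffer substitutes for a file
# such as 'inverted_latent.npy').
buf = io.BytesIO()
np.save(buf, z_inverted)
buf.seek(0)
z = np.load(buf)

# In generate.py, this loaded array would then replace the seeded sampling:
#   z = torch.from_numpy(np.load('inverted_latent.npy')).to(device)
print(z.shape)
```

The rest of the generation loop can stay as-is, since the loaded latent has the same shape as the randomly sampled one.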
Hi, have you solved this problem?
+1. How do we map a specific image to the latent code z?
First, thank you for this excellent service.
I want to generate a cartoonized 3D view from a seed image, so please tell me how to adapt the cartoonization.
Second, I think the seeds in your code generate random images. Can I change this part to take my own custom images as input instead of randomly generated ones?
Thank you!