MrTornado24 / Next3D

[CVPR 2023 Highlight] Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars
https://mrtornado24.github.io/Next3D/

How can I perform 3D-aware domain adaptation and use a custom image as input? #9

Open hyojk2001 opened 1 year ago

hyojk2001 commented 1 year ago

First of all, thank you for this excellent work.

I want to generate a cartoonized 3D view from a seed image, so please tell me how to adapt the model to cartoonized images.

Second, I think the seeds in your code are used to generate random images. Can I change this part to take my own custom images as input rather than randomly generated ones?

Thank you!

MrTornado24 commented 1 year ago

Hi, thanks for your interest! For the first question, we will integrate 3D cartoonization very soon. For the second, sure, you can drive your custom images as we show in the demos. First perform inversion to map the input image into the latent space of our model, and then run the reenactment scripts.
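For readers wanting a concrete starting point: optimization-based inversion freezes the generator and optimizes a latent code until the generated output matches the target image. The snippet below is a minimal, hypothetical sketch of that principle using a toy linear generator as a stand-in; a real Next3D/EG3D inversion would optimize against the actual pretrained generator (typically in W/W+ space, PTI-style) with perceptual losses rather than plain MSE.

```python
# Sketch of optimization-based GAN inversion (toy stand-in, not the Next3D model):
# freeze the generator, then optimize a latent code w so that G(w) matches a target.
import torch

torch.manual_seed(0)

class ToyGenerator(torch.nn.Module):
    """Tiny linear 'generator' standing in for a real pretrained model."""
    def __init__(self, w_dim=16, img_dim=64):
        super().__init__()
        self.fc = torch.nn.Linear(w_dim, img_dim)

    def forward(self, w):
        return torch.tanh(self.fc(w))

def invert(G, target, steps=200, lr=0.05):
    """Optimize a latent w to minimize reconstruction error against target."""
    w = torch.zeros(1, G.fc.in_features, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(w), target)
        loss.backward()
        opt.step()
    return w.detach(), loss.item()

G = ToyGenerator()
for p in G.parameters():
    p.requires_grad_(False)  # generator stays fixed; only the latent is optimized

target = G(torch.randn(1, 16))  # a target known to lie in the generator's range
w_hat, final_loss = invert(G, target)
print(final_loss)
```

The recovered latent can then be fed back through the generator (here the reenactment scripts) in place of a randomly sampled one.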

hyojk2001 commented 1 year ago

> Hi, thanks for your interest! For the first question, we will integrate 3D cartoonization very soon. For the second, sure, you can drive your custom images as we show in the demos. First perform inversion to map the input image into the latent space of our model, and then run the reenactment scripts.

Thank you for answering! If I map an input image into the latent space of the model, which modules should I change?

```python
# Generate images.
for seed_idx, seed in enumerate(seeds):
    print('Generating image for seed %d (%d/%d) ...' % (seed, seed_idx, len(seeds)))
    z = torch.from_numpy(np.random.RandomState(seed).randn(1, G.z_dim)).to(device)  # <-- this line
```

I think this part has to be changed, right? But that variable is a 512-dimensional tensor, and it is normalized.
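Yes, that marked line is the one to swap out: instead of sampling z from a seeded Gaussian, load the latent produced by an inversion run. A hedged sketch follows (the file name `inverted_latent.pt` is hypothetical; also note that most StyleGAN/EG3D-style projectors return a w or w+ latent, which you would pass to `G.synthesis` directly rather than through `G.mapping`):

```python
import os
import numpy as np
import torch

z_dim = 512       # latent width used by EG3D-style generators such as Next3D
device = 'cpu'
latent_path = 'inverted_latent.pt'  # hypothetical output of an inversion run

if os.path.exists(latent_path):
    # Use the inverted latent corresponding to your custom image.
    z = torch.load(latent_path, map_location=device)
else:
    # Fall back to the original seeded random sample.
    z = torch.from_numpy(np.random.RandomState(42).randn(1, z_dim)).float().to(device)

print(tuple(z.shape))
```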

ZhouFangru commented 1 year ago

Hi, have you solved this problem?

aurelianocyp commented 7 months ago

+1. How can I map a specific image to the latent code z?