PMPBinZhang opened 1 year ago
Hi, thanks for your interest in our project.
Sure! Our method is basically a 3D GAN inversion method, any 3D GAN (trained on real or anime images) can be used for inversion.
For some 3D anime GAN methods, you can check this work DATID-3D
Thanks for your reply. Which shell script should I run? I ran demo_view_synthesis.sh but got a wrong result. The input image is , the mid data is , and the result is .
Can I get a 3D model that looks like the input image? What does the second image in the mid data mean, and how do I get that result?
Hi, since this method is trained on FFHQ (a real-world face dataset), the output will always be a real person. (The second image in the mid data is the 64x64 low-resolution 3D inversion result, and the right-most image is the final result.)
To meet your needs, you first need a 3D GAN trained on anime images (e.g., finetune the pre-trained FFHQ StyleSDF GAN on your own anime data, or train one from scratch), and then re-run (or finetune) this method on it.
By the way, I have previously finetuned the FFHQ 3D GAN on anime data while keeping the encoder fixed, which supports real image -> 3D anime. You can check this script [https://github.com/NIRVANALAN/E3DGE/blob/main/scripts/test/demo_toonify.sh] or the Colab demo. The one extra step you need here is to further finetune the encoder on the anime GAN to support anime -> 3D anime.
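For intuition, the "finetune the encoder against a frozen generator" step can be sketched as a toy optimization. This is only a minimal NumPy sketch of the inversion objective, not the repo's actual training code: the linear `G` stands in for the frozen anime GAN, `W_e` for the encoder, and the "images" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "anime GAN" generator: a fixed linear map from latent w to image x.
G = rng.normal(size=(16, 8))  # image_dim x latent_dim, kept fixed

# Encoder to finetune: predicts a latent from an image, E(x) = W_e @ x.
W_e = rng.normal(size=(8, 16)) * 0.01

# Synthetic "anime" training images: renders of random latents from the frozen GAN.
latents = rng.normal(size=(8, 100))
images = G @ latents  # image_dim x num_samples

def recon_loss(W_e):
    # Inversion objective: mean of ||G(E(x)) - x||^2 over the batch.
    recon = G @ (W_e @ images)
    return np.mean((recon - images) ** 2)

lr = 1e-3
loss_before = recon_loss(W_e)
for _ in range(200):
    # Analytic gradient of the reconstruction loss w.r.t. the encoder weights;
    # the generator G stays frozen, only W_e is updated.
    err = G @ (W_e @ images) - images
    grad = G.T @ err @ images.T * (2 / err.size)
    W_e -= lr * grad

loss_after = recon_loss(W_e)
print(loss_before, loss_after)  # reconstruction loss drops as the encoder adapts
```

The real pipeline optimizes a neural encoder with perceptual and 3D-aware losses, but the shape of the step is the same: generator frozen, encoder updated to invert it on the new domain.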
Hello, can this model directly reconstruct anime pictures?