BukuBukuChagma opened this issue 9 months ago
The inputs of Get3DHuman are shape and texture latent codes sampled from a Gaussian distribution. Its goal is to generate a 3D textured mesh, represented by implicit features together with a fixed PIFu decoder; it is not conditioned on an input image.
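As a minimal sketch of the sampling step described above: the shape and texture codes are just draws from a standard Gaussian. The 512-dimensional latent size below is an assumption for illustration only; the actual Get3DHuman latent dimensions may differ.

```python
import numpy as np

def sample_latents(shape_dim=512, tex_dim=512, seed=None):
    """Sample shape and texture latent codes from a standard Gaussian.

    Note: the latent dimensions here are illustrative assumptions,
    not values taken from the Get3DHuman codebase.
    """
    rng = np.random.default_rng(seed)
    z_shape = rng.standard_normal(shape_dim)  # shape latent code
    z_tex = rng.standard_normal(tex_dim)      # texture latent code
    return z_shape, z_tex

z_shape, z_tex = sample_latents(seed=0)
print(z_shape.shape, z_tex.shape)  # (512,) (512,)
```

These codes would then be decoded into implicit features and passed through the fixed PIFu decoder to produce the textured mesh.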
If I understand correctly: if I want a colored mesh for, say, my own RGB image, I would first need to convert it to a 3D mesh using another available model such as PiFuHD or EVA3D, and then feed that mesh into this framework using the re-texturing weights? Is this correct, or am I missing something?