GAP-LAB-CUHK-SZ / Get3DHuman


Confusion about input #4

Open BukuBukuChagma opened 9 months ago

BukuBukuChagma commented 9 months ago

If I understand correctly: if I want a colored mesh for, let's say, my own RGB image, I first need to convert the image to a 3D mesh using some other available model such as PIFuHD or EVA3D, and then feed that mesh into this project using the re-texturing weights? Is that correct, or am I missing something?

X-zhangyang commented 9 months ago

The inputs to Get3DHuman are shape and texture latent codes sampled from a Gaussian distribution. The goal of Get3DHuman is to generate a 3D textured mesh, represented by implicit features and a fixed PIFu decoder, not to reconstruct a mesh from an image.
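
To illustrate the reply, here is a minimal sketch of the sampling step it describes. The latent dimension and the generator/decoder call names are assumptions for illustration only, not the repo's actual API; only the Gaussian sampling itself is taken from the comment above.

```python
# Sketch only: sample shape/texture latent codes from a standard Gaussian,
# as described in the reply. Network calls below are hypothetical placeholders.
import torch

latent_dim = 512   # assumed latent size; check the repo's config for the real value
batch_size = 1

# Shape and texture codes are drawn independently from N(0, I)
z_shape = torch.randn(batch_size, latent_dim)
z_texture = torch.randn(batch_size, latent_dim)

# Hypothetical forward pass (illustrative names, not the real entry points):
# shape_feats = shape_branch(z_shape)                   # implicit geometry features
# tex_feats   = texture_branch(z_texture, shape_feats)  # texture features
# mesh        = pifu_decoder(shape_feats, tex_feats)    # fixed PIFu-style decoder
```

The actual entry points, latent sizes, and checkpoint loading are defined in the repository's inference scripts and configs, so the names above should be treated as placeholders.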