Closed JiatengLiu closed 8 months ago
Hi, it is great to hear that. We encode the 3D human with 3D Gaussians in a canonical space, and then convert the 3D Gaussians from the canonical space to the target space to perform the optimization.
Thank you for your reply @skhu101. But I still have a question: the 3D Gaussian distribution differs across poses, yet the optimization is carried out after converting the 3D Gaussians to the target pose. That is like using a static Gaussian to reconstruct a dynamic scene, so do you find this feasible? And could you tell me in which file of the project you implemented this? Best wishes
Hi, very good question. A 3D human has a specific structure, so we can articulate a static Gaussian to different target poses through a Linear Blend Skinning (LBS) transformation. The implementation is at line 69 of gaussian_renderer/__init__.py. This idea has also been explored in previous HumanNeRF methods.
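For other readers of this thread: the articulation described above (a static canonical representation warped to each target pose via LBS) can be sketched roughly as below. This is a minimal, hypothetical NumPy illustration of standard LBS applied to Gaussian centers, not the project's actual code; the function name, shapes, and variable names are assumptions for illustration only.

```python
import numpy as np

def lbs_transform(points, weights, joint_transforms):
    """Articulate canonical-space points (e.g. 3D Gaussian centers)
    to a target pose via Linear Blend Skinning (LBS).

    points:            (N, 3)    canonical-space positions
    weights:           (N, J)    per-point skinning weights (rows sum to 1)
    joint_transforms:  (J, 4, 4) rigid transform of each joint for the target pose
    """
    n = points.shape[0]
    # Lift points to homogeneous coordinates: (N, 4)
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)
    # Blend the per-joint rigid transforms with the skinning weights: (N, 4, 4)
    blended = np.einsum("nj,jab->nab", weights, joint_transforms)
    # Apply each point's blended transform, then drop the homogeneous coordinate
    posed = np.einsum("nab,nb->na", blended, homo)
    return posed[:, :3]
```

Note that for full 3D Gaussians, the rotation part of the blended transform would also be applied to each Gaussian's covariance (or rotation quaternion), not just its mean; the sketch above only shows the positional warp.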
Sorry for taking so long to get back to you. I see what you mean: the Gaussians in the canonical pose are transformed to the target pose and then optimized. Do I understand correctly?
Yes, you are right.
Hello! I've successfully reproduced your project, but I'm unsure about one detail: do you convert the SMPL vertices from the target space to the canonical space and then perform the Gaussian optimization?