wenbin-lin / RelightableAvatar

Relightable and Animatable Neural Avatars from Videos (AAAI 2024)
https://wenbin-lin.github.io/RelightableAvatar-page/
Apache License 2.0

question in 'lib/networks/render/mat_render.py' #3

Closed JiatengLiu closed 1 month ago

JiatengLiu commented 6 months ago

There is a method called 'get_mesh_v_tbw' in 'lib/networks/render/mat_render.py'. I don't understand this method: it seems to transform coordinates between canonical space and observation space again and again, and I am confused. Can you explain it? Also, is the physics-based rendering process the call at line 355 of that file? `color_sg_ret = self.color_sg_network(wpts, gradients, viewdir, posed_pts, j_transform, poses, rot_w2big)`

wenbin-lin commented 6 months ago

The 'get_mesh_v_tbw' function computes the skinning weights of the explicit mesh. These weights are computed in canonical space, with the neural non-rigid deformation applied. The explicit mesh is then transformed to world coordinates, so the skinning weights of the mesh vertices can be used to compute the inverse skinning weights of ray points in world coordinates.
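To illustrate the idea, here is a minimal NumPy sketch of the weight lookup and blending. This is not the repo's actual implementation: the function names (`inverse_skinning_weights`, `apply_lbs`) and the k-nearest-neighbor interpolation scheme are assumptions for illustration, standing in for looking up a ray point's skinning weights from nearby mesh vertices and applying linear blend skinning.

```python
import numpy as np

def inverse_skinning_weights(ray_pts, mesh_verts, mesh_weights, k=4):
    # Hypothetical sketch: give each world-space ray point the skinning
    # weights of its k nearest mesh vertices, blended by inverse distance.
    d = np.linalg.norm(ray_pts[:, None, :] - mesh_verts[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]                 # (N, k) nearest vertices
    nd = np.take_along_axis(d, idx, axis=1)            # (N, k) distances
    w = 1.0 / (nd + 1e-8)
    w = w / w.sum(axis=1, keepdims=True)               # normalized blend weights
    return np.einsum('nk,nkj->nj', w, mesh_weights[idx])  # (N, num_joints)

def apply_lbs(pts, weights, joint_transforms):
    # Blend per-joint 4x4 transforms and apply them to homogeneous points.
    T = np.einsum('nj,jab->nab', weights, joint_transforms)  # (N, 4, 4)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
    return np.einsum('nab,nb->na', T, pts_h)[:, :3]
```

With the inverse of the posed joint transforms, the same blending warps world-space ray points back into canonical space, which is why the code moves between the two spaces repeatedly.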

Yes, the physics-based rendering process is done in the ColorSGNetwork module.
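For readers unfamiliar with the "SG" in `ColorSGNetwork`: it refers to spherical Gaussians, a common lobe representation for environment lighting in relightable rendering. Below is a minimal sketch of evaluating an SG mixture, not the module's actual code; the function names and the mixture structure are assumptions for illustration.

```python
import numpy as np

def eval_sg(v, lobe_axis, sharpness, amplitude):
    # One spherical Gaussian lobe: G(v) = amplitude * exp(sharpness * (v . axis - 1)),
    # peaking at amplitude when the unit direction v aligns with lobe_axis.
    return amplitude * np.exp(sharpness * (np.dot(lobe_axis, v) - 1.0))

def incident_radiance(v, lobes):
    # Environment light modeled as a sum of SG lobes; each lobe is a tuple
    # (axis, sharpness, rgb_amplitude). Returns radiance arriving from direction v.
    return sum(eval_sg(v, ax, s, a) for ax, s, a in lobes)
```

Because SG products and integrals have closed forms, shading a surface point against such a light reduces to analytic lobe operations instead of Monte Carlo sampling, which is what makes the representation attractive for physics-based rendering inside a network.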