zhoutianyang2002 opened this issue 3 months ago
- Thanks. It's a mistake.
- It is assumed S=1 in the paper. The code is right.
- The second torch.tanh() should be deleted.
- That is just because face_vertices_camera[:, :, :, 1] is not used later.
- I only calculate the loss for the pixels where visible > 0. The mask is used to supervise the mesh geometry.
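A minimal sketch of what such a visibility-masked loss can look like; the tensor names (`render`, `target`, `visible`) and shapes are assumptions for illustration, not the repo's actual variables:

```python
import torch

# Dummy stand-ins for a rendered image, the ground truth, and a visibility map.
render = torch.rand(1, 3, 64, 64)   # rendered image
target = torch.rand(1, 3, 64, 64)   # ground-truth image
visible = (torch.rand(1, 1, 64, 64) > 0.5).float()  # 1 where the mesh is visible

# Only supervise the pixels where visible > 0; average over those pixels.
mask = visible > 0
loss = ((render - target).abs() * mask).sum() / mask.sum().clamp(min=1)
```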
Thank you very much for your reply! May I ask another question? Since we already calculate the deformation of the vertices with pose_deform_mlp, using the pose as input, why do we also need to transform the vertices from canonical space to pose space? In other words, what is the difference between the offset predicted by pose_deform_mlp and the pose transformation in the code below? Thank you very much!
```python
# in MeshHeadModule.py
if 'pose' in data:
    R = so3_exponential_map(data['pose'][:, :3])  # rotation, (1, 3, 3)
    T = data['pose'][:, None, 3:]                 # translation, (1, 1, 3)
    S = data['scale'][:, :, None]                 # scale, (1, 1, 1)
    verts_batch = torch.bmm(verts_batch * S, R.permute(0, 2, 1)) + T
```
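For intuition: so3_exponential_map converts an axis-angle vector into a rotation matrix via Rodrigues' formula, and the line above then applies scale, rotation, and translation to every vertex. A torch-only sketch of the same rigid transform on dummy data (the function below is a stand-in for pytorch3d's so3_exponential_map, and all shapes and values are made up):

```python
import torch

def so3_exp(log_rot):
    """Axis-angle vectors (N, 3) -> rotation matrices (N, 3, 3), Rodrigues' formula.
    Torch-only stand-in for pytorch3d's so3_exponential_map."""
    theta = log_rot.norm(dim=-1, keepdim=True).clamp(min=1e-8)  # angle, (N, 1)
    x, y, z = (log_rot / theta).unbind(-1)                      # unit axis
    zero = torch.zeros_like(x)
    K = torch.stack([zero, -z, y,
                     z, zero, -x,
                     -y, x, zero], dim=-1).reshape(-1, 3, 3)    # skew matrix
    theta = theta[..., None]                                    # (N, 1, 1)
    eye = torch.eye(3).expand_as(K)
    return eye + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

# Dummy "pose": rotate 90 degrees about z, then translate by (0.1, 0, 0).
pose = torch.tensor([[0.0, 0.0, torch.pi / 2, 0.1, 0.0, 0.0]])  # (1, 6)
verts = torch.tensor([[[1.0, 0.0, 0.0]]])                       # (1, 1, 3)
R = so3_exp(pose[:, :3])
T = pose[:, None, 3:]
S = torch.ones(1, 1, 1)
verts_posed = torch.bmm(verts * S, R.permute(0, 2, 1)) + T      # -> (0.1, 1.0, 0.0)
```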
pose_deform_mlp predicts the offsets of the non-face points in canonical space; the rigid transformation (R, S, T) then maps the deformed mesh from canonical space into pose space.
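In other words, the two operate in sequence: the MLP applies a learned non-rigid offset while the mesh is still in canonical space, and the rigid transform then moves the whole mesh into pose space. A hypothetical sketch of that pipeline, where the MLP, shapes, and inputs are stand-ins rather than the repo's actual architecture:

```python
import torch

B, V = 1, 100
verts = torch.randn(B, V, 3)             # canonical vertices
pose_code = torch.randn(B, 6)            # pose parameters fed to the MLP

# Stand-in for pose_deform_mlp: per-vertex offsets, still in canonical space.
mlp = torch.nn.Sequential(
    torch.nn.Linear(3 + 6, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))
inp = torch.cat([verts, pose_code[:, None, :].expand(B, V, 6)], dim=-1)
verts_deformed = verts + mlp(inp)        # step 1: non-rigid deformation

# Step 2: rigid canonical -> pose space, as in the snippet above.
R = torch.eye(3).repeat(B, 1, 1)         # dummy rotation
T = torch.zeros(B, 1, 3)                 # dummy translation
S = torch.ones(B, 1, 1)                  # dummy scale
verts_posed = torch.bmm(verts_deformed * S, R.permute(0, 2, 1)) + T
```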
I understand now. Thank you very much! Best wishes!
Hi! Thank you for your excellent work! While reading your code to learn how to implement a 3DGS experiment, I found some possible bugs:
Besides, may I ask two questions about the code?
Sorry to bother you. Thank you very much!