MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/
MIT License

why convert the coordinate system to opengl? #31

Closed boringwar closed 1 year ago

boringwar commented 1 year ago

And if I need to rotate the camera around the human to generate a 360-degree cam trajectory, how should I transform the current extrinsic parameter, do you have any suggestion or reference code?

MoyGcc commented 1 year ago

At the very beginning, we tried to supervise our reconstruction using normal estimates from, for example, PIFuHD, and those normals are rendered in the OpenGL coordinate system. We didn't change that afterward.

To get a 360-degree camera trajectory, you could try the following code to transform the current camera pose (which you can refer to here), given the desired rotation angle around the y-axis.

import numpy as np
from scipy.spatial.transform import Rotation as scipy_R

def get_new_cam_pose_fvr(pose, rotation_angle_y):
    # rotation about the world y-axis by the requested angle (in degrees)
    rot = scipy_R.from_euler('y', rotation_angle_y, degrees=True).as_matrix()
    # interpret pose as [R | C]: rotation R and camera center C in world coordinates
    R, C = pose[:3, :3], pose[:3, 3]
    # world-to-camera translation
    T = -R @ C
    # build the 4x4 world-to-camera matrix
    temp_P = np.eye(4, dtype=np.float32)
    temp_P[:3, :3] = R
    temp_P[:3, 3] = T
    # apply the y-axis rotation in world space
    transform = np.eye(4)
    transform[:3, :3] = rot
    final_P = temp_P @ transform
    # convert back to the [R | C] pose convention
    new_pose = np.eye(4, dtype=np.float32)
    new_pose[:3, :3] = final_P[:3, :3]
    new_pose[:3, 3] = -np.linalg.inv(final_P[:3, :3]) @ final_P[:3, 3]
    return new_pose
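As a usage sketch, a full 360-degree orbit can be generated by sweeping the angle. The starting pose below is hypothetical (identity rotation, camera 3 units from the origin) and the function is repeated so the snippet runs on its own; it assumes the [R | C] pose convention used above, with R the world-to-camera rotation and C the camera center.

```python
import numpy as np
from scipy.spatial.transform import Rotation as scipy_R

def get_new_cam_pose_fvr(pose, rotation_angle_y):
    # same function as above, repeated here so the sketch is self-contained
    rot = scipy_R.from_euler('y', rotation_angle_y, degrees=True).as_matrix()
    R, C = pose[:3, :3], pose[:3, 3]
    T = -R @ C
    temp_P = np.eye(4, dtype=np.float32)
    temp_P[:3, :3] = R
    temp_P[:3, 3] = T
    transform = np.eye(4)
    transform[:3, :3] = rot
    final_P = temp_P @ transform
    new_pose = np.eye(4, dtype=np.float32)
    new_pose[:3, :3] = final_P[:3, :3]
    new_pose[:3, 3] = -np.linalg.inv(final_P[:3, :3]) @ final_P[:3, 3]
    return new_pose

# hypothetical starting pose: identity rotation, camera center at (0, 0, 3)
pose = np.eye(4, dtype=np.float32)
pose[:3, 3] = [0.0, 0.0, 3.0]

# 60 evenly spaced views orbiting the subject
trajectory = [get_new_cam_pose_fvr(pose, a)
              for a in np.linspace(0, 360, 60, endpoint=False)]
```

Because the rotation is applied about the world y-axis, each new camera center keeps its distance to the origin, so the trajectory traces a circle around the subject.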
AreChen commented 1 year ago

mark

boringwar commented 1 year ago

Thanks for your help @MoyGcc. From the code, the training runs for 800 epochs (am I wrong?), and 512 points are sampled from each image. For higher resolutions (like 2K or 4K), how many points do we need to sample to get reasonable results? Will increasing the number of sampled points help the geometry and texture details?

MoyGcc commented 1 year ago

I haven't tried higher-resolution videos yet, but I think the current number of sampled points is still okay in that case. Doubling the sample points will lead to a higher memory requirement (24GB may not be sufficient anymore) but may help a bit in the final results.
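The scaling behind this reply can be sketched numerically: activation memory in volume rendering grows roughly linearly with the number of sampled rays, since every extra ray adds a fixed number of network evaluations. All constants below are hypothetical and only illustrate the scaling, not vid2avatar's actual memory footprint.

```python
def approx_activation_bytes(rays_per_image, samples_per_ray,
                            floats_per_sample, bytes_per_float=4):
    # linear scaling: doubling the ray count doubles activation memory
    # (samples_per_ray and floats_per_sample are illustrative placeholders)
    return rays_per_image * samples_per_ray * floats_per_sample * bytes_per_float

base = approx_activation_bytes(512, 64, 4096)      # current setting: 512 rays/image
doubled = approx_activation_bytes(1024, 64, 4096)  # doubled ray count
print(doubled / base)  # -> 2.0: memory requirement doubles as well
```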