**Closed** — yf1019 closed this issue 2 years ago
Hi sidan, very nice work! I have a question about the code that renders a 360° view of the human. I'm very interested in it, but I couldn't understand it. Could anyone explain how this code works?
https://github.com/zju3dv/animatable_nerf/blob/5cb948815007a590b49b5611a710b0fb14cf4c79/lib/utils/render_utils.py#L61
```python
def gen_path(RT, center=None):
    lower_row = np.array([[0., 0., 0., 1.]])

    # transfer RT to camera_to_world matrix
    RT = np.array(RT)
    RT[:] = np.linalg.inv(RT[:])
    RT = np.concatenate([RT[:, :, 1:2], RT[:, :, 0:1], -RT[:, :, 2:3], RT[:, :, 3:4]], 2)

    up = normalize(RT[:, :3, 0].sum(0))  # average up vector
    z = normalize(RT[0, :3, 2])
    vec1 = normalize(np.cross(z, up))
    vec2 = normalize(np.cross(up, vec1))
    z_off = 0

    if center is None:
        center = RT[:, :3, 3].mean(0)
        z_off = 1.3

    c2w = np.stack([up, vec1, vec2, center], 1)

    # get radii for spiral path
    tt = ptstocam(RT[:, :3, 3], c2w).T
    rads = np.percentile(np.abs(tt), 80, -1)
    rads = rads * 1.3
    rads = np.array(list(rads) + [1.])

    render_w2c = []
    for theta in np.linspace(0., 2 * np.pi, cfg.render_views + 1)[:-1]:
        # camera position
        cam_pos = np.array([0, np.sin(theta), np.cos(theta), 1] * rads)
        cam_pos_world = np.dot(c2w[:3, :4], cam_pos)
        # z axis
        z = normalize(cam_pos_world - np.dot(c2w[:3, :4], np.array([z_off, 0, 0, 1.])))
        # vector -> 3x4 matrix (camera_to_world)
        mat = viewmatrix(z, up, cam_pos_world)
        mat = np.concatenate([mat[:, 1:2], mat[:, 0:1], -mat[:, 2:3], mat[:, 3:4]], 1)
        mat = np.concatenate([mat, lower_row], 0)
        mat = np.linalg.inv(mat)
        render_w2c.append(mat)

    return render_w2c
```
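For anyone else trying to follow this, my reading of the function is: it inverts the world-to-camera extrinsics to get camera-to-world matrices, averages the cameras' up vectors, estimates a center and orbit radii from the camera positions, then walks a virtual camera around that circle, building a look-at matrix at each angle and inverting it back to world-to-camera. Below is a minimal, self-contained sketch of that core idea (not the repo's exact code — `circle_path`, its parameters, and the fixed `up` vector are my own simplifications for illustration):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def viewmatrix(z, up, pos):
    # Build a 3x4 camera-to-world matrix: columns are the camera's
    # x/y/z axes in world space, plus the camera position.
    vec2 = normalize(z)                     # camera z axis (view direction)
    vec0 = normalize(np.cross(up, vec2))    # camera x axis
    vec1 = normalize(np.cross(vec2, vec0))  # camera y axis
    return np.stack([vec0, vec1, vec2, pos], 1)

def circle_path(center, radius, up, n_views=8):
    """Generate world-to-camera matrices on a circle around `center`."""
    w2cs = []
    for theta in np.linspace(0., 2 * np.pi, n_views + 1)[:-1]:
        # camera position on a circle in the plane perpendicular to `up`
        cam_pos = center + radius * np.array([np.cos(theta), np.sin(theta), 0.])
        # view direction: here the camera z axis points away from the center
        z = normalize(cam_pos - center)
        # 3x4 camera-to-world -> 4x4 by appending [0, 0, 0, 1]
        c2w = np.concatenate([viewmatrix(z, up, cam_pos),
                              np.array([[0., 0., 0., 1.]])], 0)
        # invert to get the world-to-camera matrix used for rendering
        w2cs.append(np.linalg.inv(c2w))
    return w2cs
```

The real `gen_path` additionally swaps/negates axis columns (to convert between the repo's camera convention and the LLFF-style one used by `viewmatrix`) and uses per-axis radii from `np.percentile`, so the orbit is an ellipse fitted to the training cameras rather than a fixed circle.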
https://github.com/zju3dv/neuralbody/issues/88