USTC3DV / NeRFBlendShape-code


about the computational efficiency of updating the density grid #30

Closed flyingshan closed 1 year ago

flyingshan commented 1 year ago

Hello,

I have a question about the computational efficiency of the density grid update. Based on my current understanding of the Expression-Aware Density Grid Update, I wrote the process down as the pseudocode below. The logic is to go through the expression coefficients one dimension at a time, setting that coefficient to its maximum value and the mean-face coefficient to 1, then call the NeRF's query_density to obtain the density corresponding to the hash grid h^i in Eq. (9) of the paper. After iterating over all expression dimensions, taking the element-wise maximum over all of these density grids gives the final density grid. Is this the correct way to update the density grid? And if so, every density grid update would require one call to the NeRF's get_density per expression dimension (expr_coef_dim calls in total in the pseudocode below); wouldn't that be quite inefficient?

import torch

# expr_coef_dim: number of 3DMM expression coefficient dimensions (index 0 of the
# expression vector is reserved for the mean face, so the full vector has expr_coef_dim + 1 entries)
tmp_grid_max_expr = torch.zeros_like(self.density_grid)
for i in range(1, expr_coef_dim + 1):  # start from 1 because index 0 denotes the mean face
    # expression vector for this probe; the mean-face coefficient at index 0 is always 1
    rays_expr = torch.zeros(1, expr_coef_dim + 1)
    rays_expr[:, 0] = 1
    rays_expr[:, i] = expr_max[:, i]
    tmp_grid = torch.zeros_like(self.density_grid)

    for xyzs in all_points_from_density_grid:
        # query density conditioned on this expression vector
        sigmas = query_density(xyzs, rays_expr)
        # write the densities into the temporary grid for this expression dimension
        tmp_grid[xyzs] = sigmas
    # take the element-wise maximum of all per-dimension grids to get the final density grid
    tmp_grid_max_expr = torch.maximum(tmp_grid_max_expr, tmp_grid)
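
As an aside, one way the Python-level loop over expression dimensions could be reduced is to batch all the probe vectors into a single query. The sketch below is only an illustration under the assumption that query_density accepts matching batches of points and expression vectors; none of the names (expr_dim, grid_pts, probes) are from the repo.

import torch

# Illustrative stand-ins, not taken from the repo.
expr_dim = 46                                # assumed number of 3DMM expression dimensions
grid_pts = torch.rand(2048, 3)               # one chunk of density-grid sample points
expr_max = torch.rand(1, expr_dim + 1)       # running maxima of the expression coefficients

def query_density(xyzs, expr):               # placeholder for the NeRF density query
    return torch.rand(xyzs.shape[0])

# Build all probe vectors at once: row i keeps the mean-face slot at 1
# and sets only the i-th expression coefficient to its observed maximum.
probes = torch.zeros(expr_dim, expr_dim + 1)
probes[:, 0] = 1.0
probes[torch.arange(expr_dim), torch.arange(expr_dim) + 1] = expr_max[0, 1:]

# Pair every grid point with every probe in one flattened batch,
# then reduce with an element-wise max over the expression axis.
pts = grid_pts.repeat(expr_dim, 1)                                 # [D*N, 3]
expr = probes.repeat_interleave(grid_pts.shape[0], dim=0)          # [D*N, D+1]
sigmas = query_density(pts, expr).view(expr_dim, -1).amax(dim=0)   # [N], maxed over D probes

This does not reduce the total amount of computation (it is still one density evaluation per point per expression dimension, and the batch is D times larger), so it only saves Python-loop and kernel-launch overhead.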
XuanGhahahaha commented 1 year ago

Hello,

I took a quick look at the pseudocode and it looks correct. A single density grid update is indeed fairly expensive, but you do not need to perform this update at every training step. Please see issue #23.
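
For illustration, a minimal sketch of that amortization, assuming a hypothetical wrapper update_expression_aware_density_grid around the per-dimension loop from the question, and an update interval picked arbitrarily (the value 16 below is only an example, not taken from the repo or from issue #23):

import torch

def update_expression_aware_density_grid(model, expr_max):
    """Hypothetical wrapper around the per-dimension max loop shown above."""

def train_one_step(model, batch):
    """Hypothetical single optimization step."""

model, expr_max, train_loader = None, None, range(1000)   # stand-ins for the real objects
update_interval = 16                                       # assumed value, not from the repo

for step, batch in enumerate(train_loader):
    # Refresh the expression-aware density grid only every few steps;
    # all other steps skip the extra per-dimension density queries entirely.
    if step % update_interval == 0:
        with torch.no_grad():
            update_expression_aware_density_grid(model, expr_max)
    train_one_step(model, batch)

With this kind of schedule the cost of the grid update is spread over many training steps, which is why the per-update cost matters much less in practice.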