facebookresearch / pytorch3d

PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
https://pytorch3d.org/

Why is the depth rendering by pytorch3d different from blender? #1873

Open yejr0229 opened 1 month ago

yejr0229 commented 1 month ago

Hi, here are two depth images, one rendered by PyTorch3D and one by Blender, and the third is the difference between them:

[screenshot (WeCom capture): depth maps from PyTorch3D and Blender, and their per-pixel difference]

I'd like to know how I can get a result closer to Blender's. Here is my code to render the depth:


def get_relative_depth_map(fragments, pad_value=10.0):
    # pad_value was an external variable in the original snippet; any value
    # above 0 keeps the background from being completely black.
    absolute_depth = fragments.zbuf[..., 0]  # (B, H, W); zbuf is -1 where no face was rasterized
    no_depth = -1

    foreground = absolute_depth != no_depth
    depth_min = absolute_depth[foreground].min()
    depth_max = absolute_depth[foreground].max()
    target_min, target_max = 50, 255

    depth_value = absolute_depth[foreground]
    depth_value = depth_max - depth_value  # reverse values so nearer points are brighter

    depth_value /= (depth_max - depth_min)  # normalize to [0, 1]
    depth_value = depth_value * (target_max - target_min) + target_min  # rescale to [50, 255]

    relative_depth = absolute_depth.clone()
    relative_depth[foreground] = depth_value
    relative_depth[~foreground] = pad_value  # background: not completely black

    return relative_depth

depth_maps = get_relative_depth_map(fragments)
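
For context, fragments here comes from a MeshRasterizer. A minimal sketch of that setup, with the camera and image size as placeholder assumptions (with faces_per_pixel=1, zbuf[..., 0] is the closest surface per pixel):

import torch
from pytorch3d.renderer import (
    FoVPerspectiveCameras,
    MeshRasterizer,
    RasterizationSettings,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cameras = FoVPerspectiveCameras(device=device)  # assumption: default FoV camera
raster_settings = RasterizationSettings(
    image_size=512,     # assumption: square 512 x 512 render
    blur_radius=0.0,
    faces_per_pixel=1,
)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
fragments = rasterizer(meshes)  # meshes: a pytorch3d.structures.Meshes instance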
bottler commented 1 month ago

I have no idea. It looks a bit like the discrepancy increases as you move away from a certain point. Perhaps one of these is a Euclidean distance to the camera and the other is a distance from the camera plane? Maybe you can manually calculate what you think the distances should be at some special points?
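
One way to test that hypothesis: if PyTorch3D's zbuf is camera-space z (distance from the camera plane) and Blender is exporting Euclidean ray lengths, converting one to the other should flatten the difference map. A minimal sketch, assuming simple pinhole intrinsics fx, fy (focal lengths in pixels) and cx, cy (principal point in pixels); these names are placeholders, not PyTorch3D API:

import torch

def zbuf_to_ray_distance(zbuf, fx, fy, cx, cy, no_depth=-1):
    # zbuf: (H, W) z-depth, i.e. distance from the camera plane; no_depth marks background.
    H, W = zbuf.shape
    v, u = torch.meshgrid(
        torch.arange(H, dtype=zbuf.dtype),
        torch.arange(W, dtype=zbuf.dtype),
        indexing="ij",
    )
    # Ray through pixel (u, v) in camera coordinates: ((u - cx)/fx, (v - cy)/fy, 1).
    x = (u - cx) / fx
    y = (v - cy) / fy
    ray_distance = zbuf * torch.sqrt(x ** 2 + y ** 2 + 1.0)
    ray_distance[zbuf == no_depth] = no_depth  # keep the background sentinel
    return ray_distance

It would also be worth comparing the raw zbuf (or this converted map) against Blender's Z pass before any rescaling, since the per-image min/max normalization in get_relative_depth_map makes the two renders hard to compare pixel-for-pixel in the first place.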