In raymarching\src\raymarching.cu, the function `kernel_composite_rays_train_forward` uses the sum of `deltas[1]` to get the depth. `deltas[1]` is computed in `kernel_march_rays_train` as `deltas[1] = t - last_t`, so the depth returned by `kernel_composite_rays_train_forward` should lie in the range [0, far - near], i.e. it is measured from the near intersection point.
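To double-check my reading, here is a minimal per-ray sketch of the accumulation as I understand it. The function name and the toy sample values are mine; only `sigmas` and the two components of `deltas` (the interval length and the `t - last_t` step) come from the repo, and the loop is a simplification of the CUDA kernel, not the actual implementation:

```python
import torch

def composite_depth_single_ray(sigmas, dts, ts_rel):
    """Simplified per-ray sketch of the depth accumulation as I understand it."""
    T = 1.0          # transmittance along the ray
    depth = 0.0
    for sigma, dt, t in zip(sigmas, dts, ts_rel):
        alpha = 1.0 - torch.exp(-sigma * dt)   # opacity of this interval (uses deltas[..., 0])
        w = alpha * T                          # compositing weight
        depth = depth + w * t                  # t = running sum of deltas[..., 1], i.e. distance from near
        T = T * (1.0 - alpha)
    return depth

# toy example: samples on a ray with near = 2.0, far = 6.0
sigmas = torch.tensor([0.5, 1.0, 2.0])
dts    = torch.tensor([0.1, 0.1, 0.1])     # deltas[..., 0]
ts_rel = torch.tensor([0.0, 1.5, 3.5])     # cumulative deltas[..., 1], bounded by far - near = 4.0
print(composite_depth_single_ray(sigmas, dts, ts_rel))  # stays in [0, far - near]
```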
But in nerf\renderer.py, the code is

```python
weights_sum, depth, image = raymarching.composite_rays_train(sigmas, rgbs, deltas, rays, T_thresh)
depth = torch.clamp(depth - nears, min=0) / (fars - nears)
```

which seems to end up with the wrong range: after subtracting `nears`, the range becomes `[-near, far - 2*near]`.
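As a small sanity check of that claim (the `nears`/`fars` values here are made up, and I am assuming the composited `depth` is already measured from the near intersection as argued above):

```python
import torch

nears = torch.tensor([2.0])
fars  = torch.tensor([6.0])

# Suppose depth is already measured from the near intersection,
# e.g. depth = far - near = 4.0 for a fully opaque ray stopping at the far bound.
depth = fars - nears

# Normalization as quoted from renderer.py above:
print(torch.clamp(depth - nears, min=0) / (fars - nears))   # tensor([0.5000]), not the 1.0 I would expect

# What I would have expected for a near-relative depth:
print(depth / (fars - nears))                                # tensor([1.0000])
```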
Is this an error or do I get it wrong somewhere? Thanks!