NVIDIA / gvdb-voxels

Sparse volume compute and rendering on NVIDIA GPUs

Incorrect depth buffer check in depth buffer test #48

Closed. icoderaven closed this issue 4 years ago.

icoderaven commented 5 years ago

The current depth buffer test compares the linear depth for a particular pixel with the ray-box intersection distance along the ray.

This is incorrect: it should compare the linear depth against the distance along the ray projected onto the camera z axis (assuming the scene depth buffer is equivalent to a depth image transformed by the projection matrix to normalized 0..1 values).

i.e., instead of

if (t.x > getLinearDepth(SCN_DBUF))

it should be something along the lines of

if (t.x * dot(scn.dir_vec, dir) > getLinearDepth(SCN_DBUF))

where, for convenience, I have exposed the camera z axis in the world frame as scn.dir_vec. Of possible relevance to @drmateo and #44.
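
To make the geometry concrete, here is a minimal sketch of the relationship argued above (hypothetical helper name, not code from the repository), assuming dir and the camera forward axis are both normalized world-space vectors and a helper_math-style dot() is available:

// Sketch: the depth buffer stores view-space depth (distance projected onto
// the camera z axis), while t.x is a distance measured along the ray itself,
// so the two are related by the cosine between the ray and the camera axis.
__device__ inline bool hitIsBehindDepthBuffer(float tHit, float3 dir, float3 camForward, float sceneDepth)
{
    float hitViewDepth = tHit * dot(dir, camForward);  // project ray distance onto camera z
    return hitViewDepth > sceneDepth;                  // sceneDepth = getLinearDepth(SCN_DBUF)
}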

icoderaven commented 5 years ago

Also, the ray direction should be normalized within the rayCast() method.
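
A minimal sketch of that normalization (placement and variable name assumed, not the actual rayCast() body), using the normalize() helper from CUDA's helper_math.h:

// Normalize the ray direction at the start of rayCast() so that the t values
// returned by the ray-box intersection are true world-space distances.
dir = normalize(dir);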

NBickford-NV commented 4 years ago

Hi icoderaven,

I've just pushed commit https://github.com/NVIDIA/gvdb-voxels/commit/9ffc6b70adfad54d0c32589df1c4b1177a781947, which should fix this by adding a new function, getRayDepthBufferMax, to cuda_gvdb_raycast.cuh. Essentially, this function computes getLinearDepth(SCN_DBUF) / dot(scn.dir_vec, dir), except it turns out we can use the transformation matrix to avoid having to store dir_vec. However, as an optimization, I might add something like dir_vec in the future to avoid the matrix-vector multiplication in getRayDepthBufferMax.

Thanks!
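
For readers following the thread, here is a rough sketch of the idea behind that fix (assumed names and signature; the actual getRayDepthBufferMax in cuda_gvdb_raycast.cuh may differ): the depth-buffer bound is converted from view-space depth into a maximum t along this particular ray by dividing by the cosine between the ray direction and the camera forward axis.

// Sketch only: convert linear (view-space) scene depth into a per-ray bound.
// Assumes dir is a normalized world-space ray direction and camForward is the
// camera's world-space forward axis (whether stored directly or recovered from
// a transformation matrix, as the commit does).
__device__ inline float rayDepthBufferMaxSketch(float3 dir, float3 camForward, float linearDepth)
{
    return linearDepth / dot(dir, camForward);
}

// Usage in the march loop (sketch): stop once the ray passes the scene surface.
// if (t.x > rayDepthBufferMaxSketch(dir, camForward, getLinearDepth(SCN_DBUF))) return;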