Closed icoderaven closed 4 years ago
Also, the ray direction should be normalized within the rayCast() method.
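A normalization step of the kind suggested here might look like the following (a minimal sketch; `Vec3` and `normalize` are illustrative names, not the actual `rayCast()` signature):

```cpp
#include <cmath>

// Hypothetical sketch: normalize the ray direction before traversal so the
// ray parameter t measures true world-space distance. Names are
// illustrative, not GVDB's actual rayCast() code.
struct Vec3 { float x, y, z; };

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
```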
Hi icoderaven,
I've just pushed commit https://github.com/NVIDIA/gvdb-voxels/commit/9ffc6b70adfad54d0c32589df1c4b1177a781947, which should fix this by adding a new function, `getRayDepthBufferMax`, to `cuda_gvdb_raycast.cuh`. Essentially, this function computes `getLinearDepth(SCN_DBUF) / dot(scn.dir_vec, dir)`, except it turns out we can use the transformation matrix to avoid having to store `dir_vec`. However, I might add something like `dir_vec` in the future as an optimization, to avoid the matrix-vector multiplication in `getRayDepthBufferMax`.
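For reference, the conversion described above can be sketched in plain C++ as follows (a hedged illustration under the assumption that both directions are normalized; `Vec3`, `dot`, and `rayDepthBufferMax` are illustrative names, not the actual `cuda_gvdb_raycast.cuh` code):

```cpp
#include <cmath>

// Hypothetical sketch: convert a linearized depth-buffer value (depth along
// the camera forward axis) into a maximum distance along a given ray.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// linear_depth: depth-buffer value linearized to world units.
// cam_forward and ray_dir are assumed normalized.
float rayDepthBufferMax(float linear_depth, const Vec3& cam_forward,
                        const Vec3& ray_dir) {
    // Dividing by cos(theta) between the ray and the camera axis converts
    // a perpendicular (z) depth into a distance along the ray itself.
    return linear_depth / dot(cam_forward, ray_dir);
}
```

For an on-axis ray the divisor is 1 and the two distances coincide; the further the ray tilts away from the camera axis, the larger the ray distance for the same depth value.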
Thanks!
The current depth buffer test compares the linear depth for a particular pixel with the ray-box intersection distance along the ray. This is incorrect: it should compare the linear depth against the distance along the ray projected onto the camera z axis (assuming the scene depth buffer is equivalent to a depth image transformed by the projection matrix to normalized 0..1 values).
i.e., instead of

it should be something along the lines of

where for convenience I have exposed the camera z axis in the world frame as `scn.dir_vec`.
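The projection described above can be sketched as follows (a hedged illustration, not GVDB's actual code; `Vec3`, `dot`, and `depthAlongCameraZ` are made-up names):

```cpp
#include <cmath>

// Hypothetical illustration: a point at parameter t along a ray has
// camera-space depth t * cos(theta), where theta is the angle between the
// ray and the camera forward axis. The depth buffer stores the latter, so
// t must be projected before comparing against it.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// ray_dir and cam_forward are assumed normalized.
float depthAlongCameraZ(float t, const Vec3& ray_dir, const Vec3& cam_forward) {
    return t * dot(ray_dir, cam_forward);  // project ray distance onto camera z
}
```

For a ray 45 degrees off axis, a ray distance t corresponds to a depth of only t/sqrt(2), so comparing the raw distance t against the linear depth buffer terminates rays too early for off-axis pixels.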
Of possible relevance to @drmateo and #44