Hi,
I'm trying to reconstruct point clouds from the depth information rendered by GGUI. However, I'm having trouble inverting the z-buffer transform applied by the rendering backend. I have tried several inversion formulas, but the reconstructed eye-space distances come out unrealistic. Does anyone know exactly how the depth values provided by the GGUI renderer are calculated?
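
For context, here is one of the inversions I tried. It's only a sketch under assumptions: that GGUI's Vulkan backend writes conventional non-linear depth into [0, 1] (0 at the near plane, 1 at the far plane), and that the `z_near`/`z_far` values match the clip planes the camera actually uses. Neither is confirmed by the docs, which is exactly what I'm asking about.

```python
import numpy as np

def linearize_depth(d, z_near, z_far):
    """Invert a conventional Vulkan-style [0, 1] perspective depth buffer.

    Assumes d = (z_far / (z_far - z_near)) * (1 - z_near / z_eye),
    i.e. d == 0 at the near plane and d == 1 at the far plane.
    Solving for z_eye gives the expression below.
    """
    return z_near * z_far / (z_far - d * (z_far - z_near))

# Placeholder H x W array standing in for the depth buffer read back
# from GGUI; z_near/z_far here are hypothetical camera clip planes.
depth = np.random.rand(720, 1280).astype(np.float32)
z_eye = linearize_depth(depth, z_near=0.1, z_far=100.0)
```

If the backend instead uses reversed-Z (depth 1 at the near plane), applying the same formula to `1 - d` would be the variant to try. Full 3D points would then follow by scaling each pixel's view ray (from the inverse projection) by `z_eye`, but even the scalar distances I get this way look wrong, so I suspect my assumption about the depth encoding is off.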
Thanks a lot!