Closed sergiosolorzano closed 3 years ago
Hey Sergio,
Thank you for your interest in Depth Lab.
As for the dense depth texture, I am referring to processing RGB and depth directly in the fragment shader. See lines 173 and 180 in https://github.com/googlesamples/arcore-depth-lab/blob/master/Assets/ARRealismDemos/DepthEffects/Shaders/DepthOfFieldCore.glslinc#L173: given a UV coordinate, it uses both depth and color on the GPU, with the depth bilaterally filtered on the hardware. One can also apply antialiasing such as https://github.com/googlesamples/arcore-depth-lab/blob/master/Assets/ARRealismDemos/Common/Shaders/DGAA.cginc, or apply TAA to further reduce temporal artifacts.
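For readers less familiar with texture fetches: the per-UV depth lookup with hardware filtering that the shader relies on can be sketched on the CPU. This is a minimal NumPy sketch, assuming a single-channel depth array, texel centers at +0.5, and clamp-to-edge addressing; the function name is hypothetical, not part of the Depth Lab API:

```python
import numpy as np

def sample_depth_bilinear(depth, u, v):
    """Bilinearly sample a low-res depth map at normalized UV in [0, 1],
    mimicking the GPU's hardware texture filtering."""
    h, w = depth.shape
    # Map UV to continuous pixel coordinates (texel centers at +0.5).
    x = u * w - 0.5
    y = v * h - 0.5
    x0 = int(np.floor(x))
    y0 = int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Clamp the four neighbours to the texture edge (clamp-to-edge).
    x0c, x1c = np.clip([x0, x0 + 1], 0, w - 1)
    y0c, y1c = np.clip([y0, y0 + 1], 0, h - 1)
    # Interpolate horizontally on both rows, then vertically.
    top = depth[y0c, x0c] * (1 - fx) + depth[y0c, x1c] * fx
    bot = depth[y1c, x0c] * (1 - fx) + depth[y1c, x1c] * fx
    return top * (1 - fy) + bot * fy
```

In the shader this whole function is a single filtered texture fetch; the sketch just makes explicit what the hardware computes per fragment.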
Cheers, Ruofei
I am going to close this ticket, but please let me know if you still need help. We also released updates today that make it easier to access the depth texture with AR Foundation, and added more examples in the arcore_unity_sdk branch: https://github.com/googlesamples/arcore-depth-lab/releases
Hello
I read in this publication (page 5) that "a dense depth texture created by the ARCore SDK in the GPU has every pixel in the color camera image mapped to a depth value".
How can I get this dense depth texture and its depth values? It would let me read the depth value for each pixel of the high-resolution camera background RGB texture.
I am guessing that pairing a depth value with each pixel of the camera background RGB texture can be hard because the depth texture has a much lower resolution (160x120) than the RGB texture (2560x1440 on a Pixel 2 XL), so the pixels do not map 1:1.
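To make the mismatch concrete, here is how I imagine a per-pixel lookup would work if both textures are addressed by normalized UV rather than integer pixel indices. This is only a rough sketch: I am assuming the depth and RGB textures cover the same field of view (in practice the SDK provides transforms to align them), and the function name is made up:

```python
import numpy as np

def depth_for_rgb_pixel(depth, px, py, rgb_w, rgb_h):
    """Nearest-neighbour depth lookup for an RGB pixel (px, py) via
    normalized UV, so differing resolutions still pair every pixel
    with a depth value (hypothetical helper, not an SDK call)."""
    h, w = depth.shape  # e.g. (120, 160) for the low-res depth map
    # Address by pixel centre in [0, 1] instead of integer index.
    u = (px + 0.5) / rgb_w
    v = (py + 0.5) / rgb_h
    # Snap to the nearest depth texel; a bilinear fetch would smooth
    # the blocky edges this produces.
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return depth[y, x]
```

So the low resolution means many RGB pixels share one depth texel, not that pixels are left without a depth value.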
Thank you for your help, Sergio