lexaknyazev closed this issue 2 years ago
I think it's ambiguous: both `GL_DEPTH_COMPONENT24` and `GL_DEPTH_COMPONENT32` have the same range, 0.0 to 1.0, because they are normalized.
Vulkan spec:

> data copied to or from the depth aspect of a `VK_FORMAT_X8_D24_UNORM_PACK32` or `VK_FORMAT_D24_UNORM_S8_UINT` format is packed with one 32-bit word per texel with the D24 value in the LSBs of the word, and undefined values in the eight MSBs.

This means that the texel value of `0x00FFFFFF` should map to 1.0.
I tried uploading it to an OpenGL texture attached to the FB's depth attachment with:

```c
const uint32_t maxValue24 = 0x00FFFFFF;
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1, 1, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, &maxValue24);
```
Then, by enabling depth test and alternating between `GL_GEQUAL` and `GL_LEQUAL` funcs, the effective fragment Z value seems to be around -0.9921875, which accurately translates to `0x00FFFFFF / 0xFFFFFFFF`.
> Then, by enabling depth test and alternating between `GL_GEQUAL` and `GL_LEQUAL` funcs, the effective fragment Z value seems to be around -0.9921875, which accurately translates to `0x00FFFFFF / 0xFFFFFFFF`.
I'm not understanding this. 0x00FFFFFF/0xFFFFFFFF = 0.003906...
Fragment Z value range is -1.0 .. 1.0, depth buffer range is 0.0 .. 1.0, so `0.00390625 == (-0.9921875 + 1.0) * 0.5`.
> Fragment Z value range is -1.0 .. 1.0, depth buffer range is 0.0 .. 1.0, so `0.00390625 == (-0.9921875 + 1.0) * 0.5`.
Ahh! Thanks.
Actually, if I'm understanding the OpenGL 4.6 spec correctly, per section 8.5 it clamps the input value to the representable range of the `internalformat`. Since we've specified that `VK_FORMAT_X8_D24_UNORM_PACK32` has the D24 bits in the LSBs, the result of `internalformat = GL_DEPTH_COMPONENT24` and `type = GL_UNSIGNED_INT` should be correct. There should not be any rescaling.