I'm feeling best about #4, with two options available to request: either 16-bit HDR values or 8-bit sRGB values.
The Firefox implementations will be performing post-processing (temporal and spatial) on the textures returned by the OS to reduce profiling and fingerprinting (e.g., avoiding side-channel attacks via the timing of quickly changing brightness values). I would imagine other browsers may need to process this data as well, costing at least one blit into the cube map per update. This isn't expected to happen every frame, however, so hopefully it would not be as impactful on performance.
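For illustration only, the temporal part of that filtering might amount to something like an exponential moving average per texel. This is a CPU-side sketch of the idea, not how any browser actually implements it (a real implementation would presumably do this on the GPU during the blit):

```js
// Sketch: temporal smoothing of estimated lighting data before exposing it,
// which smears out rapid brightness changes that could otherwise act as a
// timing side channel. `prev` and `next` are assumed to be same-length
// Float32Arrays of texel data.
function temporalFilter(prev, next, alpha = 0.1) {
  const out = new Float32Array(next.length);
  for (let i = 0; i < next.length; ++i) {
    out[i] = prev[i] + alpha * (next[i] - prev[i]);
  }
  return out;
}
```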
I do imagine that many sites will not wish to implement an HDR workflow, represented by 16-bit values, but would rather have a simple reflection cube map in sRGB space, even if the extensions for the 16F values are present.
I'm not certain whether the browser should provide information about the underlying swizzling / RGB order. If we are requiring a blit anyway to filter the data, I would expect the browser to also conform the components to a consistent order (e.g., RGB vs. BGR).
If we went with #4, providing either 8-bit sRGB or 16-bit HDR, it would leave HDR as an option only available with WebGL 2 or the WebGL 1 extensions required for `GL_RGBA16F`. We should evaluate whether an 8-bit HDR option has value.
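For reference, the WebGL 1 gating might look something like the following sketch. The extension names are the standard ones; whether the `_linear` variant is strictly required depends on how the cube map is sampled:

```js
// Sketch: feature-detecting whether a context can sample 16-bit half-float
// cube map data. WebGL 2 supports RGBA16F textures in core; WebGL 1 needs
// OES_texture_half_float (plus the _linear variant for filtered sampling).
function canSampleHalfFloatCubeMap(gl) {
  if (typeof WebGL2RenderingContext !== 'undefined' &&
      gl instanceof WebGL2RenderingContext) {
    return true;
  }
  return !!(gl.getExtension('OES_texture_half_float') &&
            gl.getExtension('OES_texture_half_float_linear'));
}
```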
Okay, I can get on board with that. I would want to get some feedback from Apple on it, though, since I suspect the conversion process may be more painful for ARKit than with the way ARCore surfaces its data.
Sketching out possible IDL:
```webidl
// Not thrilled with these names, but reluctant to use GL enums since that would
// imply you could pass in any GL format.
enum XRReflectionCubeMapFormat {
  "srgb8",
  "hdr16f",
};

partial interface XRWebGLBinding {
  WebGLTexture? getReflectionCubeMap(XRLightProbe lightProbe,
                                     optional XRReflectionCubeMapFormat format = "srgb8");

  // Necessary?
  readonly attribute XRReflectionCubeMapFormat preferredReflectionCubeMapFormat;
};
```
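For illustration, developer-side usage of that sketch might look like this. It's hypothetical: none of the names above are final, and how the `XRLightProbe` is obtained (`requestLightProbe()` here, per the lighting-estimation proposal) is an assumption:

```js
// Hypothetical usage of the IDL sketched above.
const binding = new XRWebGLBinding(session, gl);
const lightProbe = await session.requestLightProbe();

// Ask for the format the UA can provide most cheaply, falling back to sRGB.
const format = binding.preferredReflectionCubeMapFormat || 'srgb8';
const cubeMap = binding.getReflectionCubeMap(lightProbe, format);
if (cubeMap) {
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, cubeMap);
  // Sample with a samplerCube uniform in the shader as usual.
}
```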
This IDL is looking good to me. I'm also interested in Apple's take.
With [ARCore on Android](https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/LightEstimate#acquireEnvironmentalHdrCubeMap()), the internal format of the reflection cube maps returned by the API is arrays of `GL_RGBA16F` values that the app is then expected to upload to a texture. On Apple's ARKit the internal format is listed as `bgra8Unorm_srgb`, and the data is already delivered in texture form.
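For WebGL content, the ARCore-style path would imply an upload step along these lines. This is a sketch assuming WebGL 2; the face data layout (`faces` as six half-float buffers and a square `size`) is an assumption:

```js
// Sketch: uploading six half-float face buffers into a cube map (WebGL 2).
// `faces` is assumed to be an array of six Uint16Array half-float buffers in
// +X, -X, +Y, -Y, +Z, -Z order; `size` is the face edge length in pixels.
function uploadHdrCubeMap(gl, faces, size) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture);
  for (let i = 0; i < 6; ++i) {
    gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, gl.RGBA16F,
                  size, size, 0, gl.RGBA, gl.HALF_FLOAT, faces[i]);
  }
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return texture;
}
```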
How should this disparity be handled by the API? Largely, WebGL can sample from either texture type without knowing the internals, but I can imagine it would make a difference in how developers would want to process the values in their shaders. Additionally, some formats (`GL_RGBA16F`) won't be available with WebGL 1 without several extensions enabled. Given that, it seems like we have a few options for how to approach this: