A couple more implementation-driven points here:
The GL_SRGB8_ALPHA8 format will lose extended precision but will retain the correct colorspace (we need sample models to test this across the ecosystem).
Proposal: keep the spec as it is, and write a note / tutorial / best practice about loading 16-bit textures.
I'm in the process of evaluating WebGL 2.0 caps on this matter.
/cc @pjcozzi @bghgary @javagl
The AntiqueCamera model that was added recently uses 16-bit PNGs. Not sure if the base color textures are sRGB or not.
Babylon loads glTF jpg into gl.RGB and png into gl.RGBA regardless of bit depth and treats them the same.
@bghgary
Babylon loads glTF jpg into gl.RGB and png into gl.RGBA regardless of bit depth and treats them the same.
IIUC, here's Babylon's handling of base color: https://github.com/BabylonJS/Babylon.js/blob/22bddc31191843ae704e103a23abb3d83cef54dc/src/Shaders/pbr.fragment.fx#L324-L332
Looks like the sRGB-to-linear transform is being done after texture filtering, which is not fully accurate. When the context is WebGL 2.0 or when the EXT_sRGB extension is present, it's usually possible to offload this to the GPU's texture filtering unit while maintaining the correct order of operations.
Regardless, extra precision that 16-bit textures can provide is lost.
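For illustration, here's a minimal sketch (not Babylon's actual code; the helper name and structure are hypothetical) of picking an sRGB internal format so the GPU performs the sRGB-to-linear decode before filtering, with a shader-side fallback when neither WebGL 2.0 nor EXT_sRGB is available:

```ts
// Sketch: upload a color texture so sRGB-to-linear decoding happens before filtering.
// `gl` is a WebGL context and `image` a decoded HTMLImageElement.
function uploadColorTexture(
  gl: WebGLRenderingContext | WebGL2RenderingContext,
  image: HTMLImageElement
): { texture: WebGLTexture; decodeInShader: boolean } {
  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);

  if (typeof WebGL2RenderingContext !== 'undefined' && gl instanceof WebGL2RenderingContext) {
    // WebGL 2.0: SRGB8_ALPHA8 is a core sized internal format.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.SRGB8_ALPHA8, gl.RGBA, gl.UNSIGNED_BYTE, image);
    return { texture, decodeInShader: false };
  }

  const srgbExt = gl.getExtension('EXT_sRGB');
  if (srgbExt) {
    // WebGL 1.0 + EXT_sRGB: internal format and format are both SRGB_ALPHA_EXT.
    gl.texImage2D(
      gl.TEXTURE_2D, 0, srgbExt.SRGB_ALPHA_EXT,
      srgbExt.SRGB_ALPHA_EXT, gl.UNSIGNED_BYTE, image
    );
    return { texture, decodeInShader: false };
  }

  // Fallback: plain RGBA upload; the fragment shader must decode sRGB after
  // sampling, i.e. the less accurate order of operations discussed above.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  return { texture, decodeInShader: true };
}
```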
When the context is WebGL 2.0 or when the EXT_sRGB extension is present, it's usually possible to offload this to the GPU's texture filtering unit
Yeah, we should probably do this properly in WebGL2 or when EXT_sRGB is available, and fall back when they are not. I'm not sure how noticeable it is, though. It will make perf better, so we will look into supporting this.
extra precision that 16-bit textures can provide is lost.
I will venture to say that for the textures we have in glTF 2.0, 16-bit textures are not going to gain much visual fidelity. If this is true, perhaps we should discourage the use of 16-bit textures in 2.0?
Somewhat related: https://github.com/KhronosGroup/WebGL/issues/2810.
Is the spec conflating color primaries and the transfer function, and is it therefore fundamentally flawed?
With 16 bits per channel there is absolutely no benefit from, and no need to use, the sRGB transfer function. The purpose of the transfer function is to compress the data to fit an 8-bit channel by removing differences humans can't perceive. It is perfectly possible to blend 16-bit linear textures and 8-bit sRGB textures without affecting what I perceive is being referred to as the "color space" here, because blending is (or should be) done in linear space, and GL with sRGB support handles all of that.
Note that GL sRGB support only deals with the transfer function and that is because of the need to convert to linear prior to sampling and blending.
If glTF also requires sRGB primaries for baseColor and "certain" other textures, 16-bit channels can use those primaries though you will likely lose any HDR benefit of the 16 bits. You might want to look into scRGB primaries.
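To make the transfer-function point concrete, here's the standard sRGB decode (IEC 61966-2-1) as an illustrative sketch; the numeric example is mine, not from the thread:

```ts
// sRGB transfer function (decode): non-linear value in [0, 1] -> linear [0, 1].
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// An 8-bit sRGB texel and a 16-bit linear texel can encode (nearly) the same
// stimulus, so blending both in linear space is consistent:
const from8BitSrgb = srgbToLinear(200 / 255); // ≈ 0.5776
const from16BitLinear = 37853 / 65535;        // ≈ 0.5776
```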
With 8-bit baseColor textures, it's simple: sRGB transfer function and sRGB primaries (aka BT.709) - this is what glTF means by sRGB colorspace. Other textures (such as normal, metallic, occlusion) do not contain color data, so texel values are treated as linear and "primaries" have no meaning.
It's hard to straightforwardly support 16-bit color data because web browsers don't expose the bit depth of decoded images. So glTF loaders have to assume that all color textures use sRGB (because almost all of them are 8-bit).
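A quick illustration of that limitation (a sketch assuming standard browser APIs, not any particular loader): decoding an image through a 2D canvas hands back 8-bit data regardless of the PNG's stored bit depth, so the extra precision is lost before the loader ever sees the pixels.

```ts
// Decode an image URL via a 2D canvas; getImageData returns a Uint8ClampedArray,
// i.e. 8 bits per channel, even if the source PNG stored 16 bits per channel.
async function decodeToRgba8(url: string): Promise<ImageData> {
  const bitmap = await createImageBitmap(await (await fetch(url)).blob());
  const canvas = document.createElement('canvas');
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(bitmap, 0, 0);
  return ctx.getImageData(0, 0, bitmap.width, bitmap.height);
}
```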
I'm looking at handling importing and exporting 16-bit PNGs from Adobe Dimension right now, so I'd also like to resolve this issue. Should I be assuming an sRGB colour space on import? Looks like the Antique Camera is linear, no? BTW, I'm totally fine with just disallowing 16-bit PNGs from glTF. I don't really see the value except maybe for normal maps.
I think I'm OK disallowing 16-bit. If someone needs it, they can make a 16-bit PNG extension, similar to the DDS and WEBP extensions. The presence of such an extension would indicate that the image was 16-bit linear (or sRGB if that's even a thing in 16-bit land), so engines wouldn't need to parse the image headers to find out. We wouldn't make such an extension immediately, but that option can be an escape hatch if we later find some user who is unhappy with the removal of 16-bit.
The Antique Camera sample model can just be downgraded to 8-bit sRGB; I wouldn't expect it to look noticeably different (but I haven't tried yet).
I think we can safely allow 16-bit sources for images expected to be linearly-encoded. They should already work properly on the web.
After merging #1606, we'll be able to address this issue.
Also, the proposal for exposing EXT_texture_unorm16 to WebGL has landed.
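For reference, here's a sketch of what consuming that could look like for non-color data (the extension shipped in WebGL under the name EXT_texture_norm16; the format/type combination below follows my reading of that extension and should be verified against the final spec):

```ts
// Sketch: upload RGBA16 unorm data (e.g. a high-precision normal or ORM texture)
// in WebGL 2.0 via the EXT_texture_norm16 extension. `pixels` holds
// width * height * 4 unsigned 16-bit values.
function uploadRgba16(
  gl: WebGL2RenderingContext,
  width: number,
  height: number,
  pixels: Uint16Array
): WebGLTexture | null {
  const norm16 = gl.getExtension('EXT_texture_norm16');
  if (!norm16) return null; // not supported; fall back to 8-bit

  const texture = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(
    gl.TEXTURE_2D, 0, norm16.RGBA16_EXT, // 16-bit normalized internal format
    width, height, 0,
    gl.RGBA, gl.UNSIGNED_SHORT, pixels
  );
  return texture;
}
```

Note that obtaining `pixels` still requires decoding the PNG in JavaScript or WASM, since the browser image APIs only hand back 8-bit data (see above).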
I think that we should disallow 16-bit PNG images for color-related textures because of ambiguous pipeline/runtime behavior. An extension allowing them could be easily defined if needed.
Using 16-bit PNGs with non-color textures (ORM and normal maps) should be fine and some runtime implementations can enjoy increased precision.
@emackey @MiiBond @bghgary @zellski @donmccurdy Are you OK with this?
/cc @cdata @prideout Are you guys OK with removing 16-bit PNG support from glTF 2.0? I think we heard on a call that @cdata ran into this issue while testing the model-viewer tag.
I disagree with dropping 16-bit color support: high-dynamic-range images are coming and are superior. Most data today will be processed (internally) as float32 or float64 anyway. Linear encoding could be considered the default, as long as no hint for logarithmic encoding is found / given.
For height maps, bump maps, and normal maps, 16-bit/32-bit is important.
Also, one might want to include an HDR (background) image in the glTF, using 24-bit/32-bit. A logarithmic scale gives better range and visual appearance, and is closer to how the real eye and physics work.
The publisher could just give an indication of the color format used (e.g., 16-bit linear, or 24-bit logarithmic, plus bit order); most viewing tools should be able to convert (supported by most image libraries) where necessary.
I believe this restriction covers only the core glTF 2.0 textures, and glTF extensions are still free to define their own image formats, including full HDR formats (which go well beyond what 16-bit PNG can do).
Height maps and bump maps are not in glTF 2.0 core, only normal maps. Normal maps don't need 16-bit because they're normalized. An extension could be added for high-precision height maps / displacement maps at some future point. Likewise, HDR IBL is being specified as an extension with a high-precision format.
The question here is, do Base Color, Metallic, Rough, Occlusion, and Emissive need 16-bit PNG support?
Metallic and Occlusion are easy to rule out (real-world materials are typically metallic or non-metallic, so the fractional values are only used at the fringes, and Occlusion typically has some noise in it even at 8-bit, so it wouldn't benefit from 16-bit). I could maybe see a case for roughness wanting the 16-bit values. And as for emission, we have a situation where a 1.0 max value clamp snuck into the 2.0 schema and can't be removed, so we need a new extension to allow HDR emission images with brighter-than-1.0 light level support. Simply allowing 16-bit PNGs on the emission map isn't going to fix that problem.
The only one I didn't cover is Base Color. I don't really have a strong opinion on 8 vs 16-bit base colors.
Overall I think that 16-bit PNGs don't cover cases where full HDR is needed, and we should drop 16-bit from core glTF in favor of extensions that supply full HDR where applicable.
this restriction covers only the core glTF 2.0 textures
That's correct.
Normal maps don't need 16-bit because they're normalized.
I think they still can benefit from having more than 256 values for each axis. The issue is that the spec explicitly ties floating point values to 8-bit integers.
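Rough arithmetic (illustrative only) for that precision argument, using the usual n = 2 * c - 1 remapping of a stored normal component:

```ts
// Per-axis quantization step of a normal component stored in [0, 1] and
// remapped to [-1, 1]: 256 levels at 8 bits vs. 65536 levels at 16 bits.
const step8bit = 2 / 255;    // ≈ 0.0078
const step16bit = 2 / 65535; // ≈ 0.00003
```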
Occlusion typically has some noise in it even at 8-bit
Wouldn't occlusion have better gradients with 16 bits?
Simply allowing 16-bit PNGs on the emission map isn't going to fix that problem.
Exactly. The difference between 8-bit sRGB and 16-bit linear is barely noticeable unless a wider gamut is used.
I don't really have a strong opinion on 8 vs 16-bit base colors.
The same issue as with emission maps.
Wouldn't occlusion have better gradients with 16 bits?
In theory, yes, but only if the software computing the occlusion map takes the time and fires enough rays to actually get to that level of precision, which seems uncommon in practice. Many occlusion maps in the wild today have a fair amount of what looks like dithering on them, which is noise from the path tracer not tracing enough paths by default. Power users do know how to overcome this and spend more time generating an AO map, at which point, yes, 16-bit could become useful to store the result. But AO is a very subtle effect, ignored by direct light sources. It would be difficult, I think, to make a case where 16-bit occlusion looks noticeably different from 8-bit when viewed as part of a full PBR material.
If glTF is to be used as an honest exchange format (not just for web viewing), it must at least be able to include full-bit-depth versions (user-encoded) of the textures, if not for display then for transmission and editing.
What about some checkboxes: [x] include original textures, [x] use separate images for value channels, [x] reference existing images (do not (re-)pack the textures into the glTF file)?
Maybe keep it simple: have a view_tex folder and a user_tex folder. Generated images go to the view_tex folder. Each can be zipped (when requested) and embedded on demand in the glTF (when requested), or the glTF just keeps relative references to the folder files.
Sorry for coming back to you so late on this, but to answer the original question: we ran into an issue where an out-of-date dependency of Filament caused 16-bit PNGs (as used in one of the Khronos sample models) to fail to load in one of their sample apps. This issue was easily fixed by updating the related dependency. AFAIK there are no outstanding issues related to 16-bit PNGs for `<model-viewer>`.
The new spec draft includes expected use cases for 8- and 16-bit data.
The core glTF 2.0 spec allows PNG and JPEG formats for storing textures. PNG pixels can have 8- or 16-bit depth.
The glTF 2.0 spec explicitly says that certain textures (e.g., baseColor) must be in the sRGB color space. This requirement makes usage of 16-bit PNGs way too complicated: to get correct values and preserve extended bit depth with the existing spec design, an engine must manually process such textures before using them in lighting equations (because there are no 16-bit normalized sRGB GPU formats). Moreover, it's not typical for content pipelines to use sRGB encoding with 16-bit depth.
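As a sketch of what that manual processing could look like (a hypothetical helper, not part of any spec): decode the 16-bit sRGB values on the CPU into linear floats before upload, since the GPU's sRGB decode only exists for 8-bit formats.

```ts
// Convert 16-bit sRGB-encoded samples to linear floats on the CPU; there is no
// 16-bit normalized sRGB GPU texture format that would do this during filtering.
// (A real loader would skip the alpha channel, which is not sRGB-encoded.)
function srgb16ToLinearFloat(src: Uint16Array): Float32Array {
  const dst = new Float32Array(src.length);
  for (let i = 0; i < src.length; i++) {
    const c = src[i] / 65535;
    dst[i] = c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  }
  return dst;
}
```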
Our options here: