KhronosGroup / glTF

glTF – Runtime 3D Asset Delivery
7.16k stars 1.14k forks

glTF 2.0: New KHR_environments extension #946

Closed McNopper closed 4 years ago

McNopper commented 7 years ago

For PBR and image-based lighting, an environment map is needed. The following JSON snippet defines environments by example:

{
  "environments": [
    {
      "environmentTexture": 0,
      "type": "sphere"
    },
    {
      "environmentTexture": 0,
      "type": "panorama"
    }
  ]
}

"type" defines, the format of the environment texture.

For non-PBR materials, the texture can be used just as the environment. For PBR, the texture needs to be sampled and pre-filtered. It needs to be discussed whether cube maps should be supported as textures. Furthermore, a standard HDR image format has to be selected. Finally, we need to find a way to provide the pre-sampled/pre-filtered images as well.
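For illustration, here is a minimal Python sketch of how the two proposed `type` values could map a lookup direction to texture coordinates. The function names and axis conventions (+Y up, -Z forward) are assumptions for this sketch, not part of the proposal:

```python
import math

def panorama_uv(d):
    """Map a unit direction (x, y, z) to equirectangular (panorama) UVs.

    Assumes +Y up and -Z as the "front" direction; conventions vary."""
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude
    v = 0.5 - math.asin(y) / math.pi               # latitude
    return u, v

def sphere_uv(d):
    """Map a unit direction to mirror-ball (sphere map) UVs.

    The singularity sits at the direction opposite the viewer (z = -1)."""
    x, y, z = d
    m = 2.0 * math.sqrt(x * x + y * y + (z + 1.0) ** 2)
    return x / m + 0.5, y / m + 0.5
```

A viewer would call one of these per lookup depending on the `type` field; the mirror-ball mapping illustrates why that layout wastes resolution near its rim.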

javagl commented 7 years ago

Interesting. I thought the IBL would just be one form of a light that was about to be defined in the new lights extension, but considering that it does not necessarily contribute to the lighting, a dedicated environments extension probably makes sense.

The lack of support of real cube maps might either require some workarounds, or extensions in the texture/image/sampler area as well (but I'm not so deeply familiar with that).

In any case, one probably has to either

McNopper commented 7 years ago

The new light class makes sense. Probably only for PBR materials, but I prefer the reference. As you mentioned, I have put the environment in a separate structure, as it can be used for other features as well, e.g. as a background for common materials.

emackey commented 7 years ago

To make sure I'm understanding this, the general case would still be that we expect PBR models to pick up environments from the rendering engine, not ship them with the glTF file itself, right?

Take for example the models in sbtron's glTF 2 demo. The same glTF file can be loaded into multiple different environments, and will reflect the selected environment without changes to the glTF itself.


McNopper commented 7 years ago

Yes, the general case would be to deploy without the environment map. And I see this extension as a "pure" extension like the PBR specular glossiness is right now.

To explain this extension a little bit further:

Normally, you deploy/use the glTF 2.0 file without any environment map, so the engine decides which environment lighting is used. This is how glTF 2.0 is specified right now, as in the images above.

But I do see another use case: in the above engine - as in any other - it is somehow encoded what kind of environment map is used: the texture format (sphere, panorama, etc.) and probably also the tone mapping and so on.

So, if I want to send someone an asset and I want this person to see the 3D content exactly the way I intend, I also need to send the environment map plus some additional information. For this reason, I want to be able to deploy the environment map inside the glTF - especially glb - file.

xelatihy commented 7 years ago

If this is added, can we add a transform matrix to orient the envmap, and a color to scale it?

McNopper commented 7 years ago

Good idea. I suggest a 3x3 matrix and/or a rotation entry. It could be similar to the node, except that translation and scale are removed. Regarding the color, would this be a strength value, to make the scene brighter etc.?
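As a sketch of what such a rotation entry could mean at runtime (the helper names and the choice of +Y as the rotation axis are assumptions for illustration), the viewer would rotate the lookup direction before sampling the environment:

```python
import math

def rotate_y(angle_rad):
    """3x3 rotation matrix (row-major) about the +Y (up) axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def apply(m, d):
    """Multiply a 3x3 matrix (list of rows) by a direction vector."""
    return tuple(sum(m[i][j] * d[j] for j in range(3)) for i in range(3))

# Rotating the forward direction (0, 0, -1) a quarter turn about +Y
# would make the envmap's "front" appear to the side.
rotated = apply(rotate_y(math.pi / 2.0), (0.0, 0.0, -1.0))
```

This is also why a full 3x3 (rather than translation/scale) is enough: an infinitely distant environment only has an orientation.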

stevenvergenz commented 7 years ago

Are these environment maps intended to be used as reflection probes? If so, it might be useful to be able to associate an environment map with a node. In many game engines, a scene may have multiple reflection probes, and it uses the weighted relative distances to the different probes to choose one for a particular object. The specific algorithm would have to be implementation-dependent, but the data should be available.

UX3D-nopper commented 7 years ago

The original idea is having one static environment map, which influences the whole scene. I will suggest your proposal to the working group tomorrow.

pjcozzi commented 7 years ago

"type" defines, the format of the environment texture.

Are cubemaps most common? If so, should that be the only one supported to start or is that too limited compared to what shading tools will create? Could you give a brief rundown of each possibility for type?

pjcozzi commented 7 years ago

CC @moneimne, this discussion may be of interest to you.

moneimne commented 7 years ago

I definitely see a strong use case for this when displaying/previewing glTF models. I expect that when an artist creates an asset, they often want it to be displayed in a relevant environment. If the engine imposes its own environment map, it might end up that the model looks out-of-place or awkward because of context.

An extension like this might not be as useful when loading multiple glTF models into the same scene, though. Contradicting environment maps would detract from the realism that PBR aims to add. I suppose the engine could ignore the extension at this point.

McNopper commented 7 years ago

@pjcozzi Type should define how the environment map is "encoded": http://spiralgraphics.biz/genetica/help/index.htm?environment_maps_explained.htm So they are:

I suggest that we should only support the panorama and mirror ball formats:

McNopper commented 7 years ago

@moneimne Regarding the environment maps included in the glTF scene, we should exactly define this in the specification e.g.: "If the asset has to be rendered like the artist wants it to be seen, please use the environment map included in the file. If the asset is composed with several other glTF assets, the included environment map can be ignored".

Also, in the latter case, I would suggest storing the environment map in a separate glTF file without any other scene data. This glTF file would still be a valid glTF file, plus it would have all the information about the environment map, like type and additional rotation.

xelatihy commented 7 years ago

Let me comment on the extension a bit.

  1. I think it would be great to have, since most models are viewed under an envmap
  2. glTF can also store full scenes, and envmaps are the best outdoor lighting there is – so they should be included
  3. transforms are needed on the envmap for reorientation
  4. I would include lat/long projections, since this is one of the most common formats for envmaps on the web, and since reprojecting an envmap is really hard to do right

One could also want to include envmap probes, i.e. envmaps that are local to a part of the scene. The problem with doing so is that it is very hard to define how to render them appropriately without some form of complex probe interpolation, typically done only on smoothed probes and using some form of angular basis for it. I would leave this out, unless there is clarity on a simple implementation that actually works.

pjcozzi commented 7 years ago

"If the asset has to be rendered like the artist wants it to be seen, please use the environment map included in the file. If the asset is composed with several other glTF assets, the included environment map can be ignored"

How would an app know "If the asset has to be rendered like the artist wants?" It seems that one environment map would need to always override the other, e.g., "if the runtime has an environment map, the environment map in the extension may be ignored."

I suggest that we should only support the panorama and mirror ball formats:

  • No horizontal cross, as unused image regions have to be transported.
  • No cube maps, because
    • The environment map has to be sampled/filtered anyway to produce the final cube maps for IBL
    • We would need to define how these 6 sides/textures are encoded in glTF 2.0.
    • As far as I know, different graphics APIs expect the cube map faces flipped differently and upside down.

Sounds like cube maps will be a lot of work; this is probably why we punted on them earlier. 😄 But are they widely used enough that the work is justified to "get this right?"

Any thoughts @lexaknyazev @bghgary?

McNopper commented 7 years ago

The app does not know. It depends more on the context: e.g., in the future, if I double-click a glTF 2.0 file where an environment map is included, the viewer uses that environment map. If no environment is present, a default one or none is used. If I am importing a glTF 2.0 asset into a game engine, it will probably ignore the environment map, or ask me whether the environment map should be imported as well. But I think we should not specify this behaviour.

Having an environment map would also imply supporting HDR images. Having cube maps, we also need to support more samplers. Basically easy to define, but I think it will take some time until all agree.

My suggestion is to put all extensions - except lighting and common materials - for now on hold. We should give the engine and content-tool developers time to adapt to glTF 2.0. But as lighting and common materials are important, let's focus just on these right now.

pjcozzi commented 7 years ago

My suggestion is to put all extensions - except lighting and common materials - for now on hold.

Sounds good.

stevenvergenz commented 7 years ago

Vendor extensions too? My two pending extension PRs are ready to merge and implemented.

On Fri, Jun 16, 2017, 9:22 AM Patrick Cozzi notifications@github.com wrote:

My suggestion is to put all extensions - except lighting and common materials - for now on hold.

Sounds good.


McNopper commented 7 years ago

No, no, just the official "KHR_" ones.

xelatihy commented 7 years ago

For orienting the environment map, could we add an extension to the node? Environments would then be treated like cameras and meshes.

The main advantage of this is that by having only one way to specify transforms in glTF we get an easier integration in libraries and support for all transformation features, like for example animation.

UX3D-nopper commented 7 years ago

Yes, I think your suggestion is the better approach. Also, it would be possible to have several environment maps, using IBL depending on the position of the actor. I will discuss this with the glTF working group on Wednesday.

emilian0 commented 7 years ago

@McNopper Regarding environment map encoding: I am not convinced that we should support mirror balls. The reason is that they display heavy distortions (I believe azimuthal equidistant corresponds to the mirror ball). I understand their physical value (light/environment physical probes). But for glTF I believe we should pick an encoding that roughly gives the same importance (pixels) to each direction and introduces fewer distortions (so that the resulting images can be more efficiently transmitted / compressed). I think equirectangular projections are good (even if they introduce large distortions around the poles and use fewer pixels to encode the equator). I think cube maps are a better option in terms of Tissot's indicatrix (I couldn't find a diagram though).

UX3D-nopper commented 7 years ago

I am fine without support for mirror balls, as we normally use the equirectangular representation and I think others do not have a strong opinion on this. Also, having cube maps does make sense of course, as they can be used 1:1 by the graphics API.

What we have to define is how these HDR images are encoded: .hdr, .ktx, ???

.ktx does have the advantage that it supports cube maps, mip maps and floating point textures. Also, it is a Khronos standard.

emilian0 commented 7 years ago

@McNopper we use equirectangular as well, that said I am leaning towards cube maps because of the lower distortions (on top of hardware support). I understand that there are a lot of decisions to make in case we go with cube maps, a colleague of mine pointed me to google 360 video standardization effort as a source of inspiration.

In terms of HDR encoding we mostly use .hdr internally (exr as well). Thanks for pointing me to .ktx: that is an in-memory format, correct? Probably we can do better than that for transmission?

UX3D-nopper commented 7 years ago

I mean both should be possible - cube maps and equirectangular - as they are both commonly used.

KTX is from Khronos: https://www.khronos.org/opengles/sdk/tools/KTX/file_format_spec/ If e.g. compression is missing - but I think it is included - it could be extended :-)

fire commented 7 years ago

What advantages does KTX have over OpenEXR? It's very difficult to get tooling created for KTX, while OpenEXR, for example, has tooling in Photoshop and many other tools.

OpenEXR:

  • is compressible
  • has support for 16-bit and 32-bit float
  • is a common interchange format for film production

MiiBond commented 6 years ago

I have a few comments on this:

Texture Encoding

Are we only considering supporting .hdr or .exr images in glTF or do we want to talk about supporting schemes for encoding HDR in an already-supported web format like PNG?

I've implemented various ways of encoding HDR data in 8-bit PNG's in the past and I ended up settling on RGBE as it gave the best results with shader code (for decoding) that was still understandable (as opposed to something like LogLuv). For anyone not familiar with this encoding or the other common ones like RGBM, here's some info: http://lousodrome.net/blog/light/tag/rgbm/

However, these schemes all basically require the texture to be decoded into a floating-point texture in memory if you want to use linear interpolation, so they really only serve as a transmission format and not something that you would actively keep using in encoded form at runtime. RGBM doesn't suffer too badly from linear filtering but I've found its quality generally not sufficient.
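For readers unfamiliar with the encoding, here is a minimal RGBE round-trip sketch in Python (hypothetical helper names; real implementations such as Radiance .hdr add rounding and clamping details omitted here):

```python
import math

def rgbe_encode(r, g, b):
    """Pack linear RGB into 8-bit RGBE: a shared exponent taken from the
    largest channel, with the mantissas scaled accordingly."""
    m = max(r, g, b)
    if m < 1e-32:
        return (0, 0, 0, 0)
    e = math.frexp(m)[1]            # m = f * 2**e with 0.5 <= f < 1
    scale = 256.0 / 2.0 ** e
    return (int(r * scale), int(g * scale), int(b * scale), e + 128)

def rgbe_decode(r, g, b, e):
    """Unpack RGBE bytes back to linear floating-point RGB."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = 2.0 ** (e - 128) / 256.0
    return (r * f, g * f, b * f)
```

The shared exponent is exactly why linear filtering of the encoded bytes is wrong: interpolating two texels with different exponents mixes values on different scales, which is the reason a decode into a floating-point texture is needed first.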

Texture Layout

In my experience, you ultimately want to sample your environment map as a cube-map to avoid the discontinuity when using mipmaps. i.e. because the sampling at the boundary of the equirectangular texture isn't continuous (jumps from 0 to 1), artifacts are introduced. See the following screenshots from the three.js "webgl_materials_envmaps" example. The artifact that I'm referring to is the line running down the center of the reflection. It would be more visible with a brighter envmap.

With mipmapping (note the vertical line in the center of the reflection): [screenshot]

Without mipmapping: [screenshot]

Obviously, mipmapping is desired for environment reflections and so I always project my equirectangular maps into cubemaps before use.

I guess I'm not making any specification suggestions here. I don't think we absolutely need cubemap support in glTF, though it would be ideal. I'm perfectly content with rendering equirectangular maps into a cubemap before use at runtime. This also provides a chance to decode the RGBE data into a floating point cube-map.
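The projection step described above can be sketched as follows: for every cubemap texel, compute the direction it represents, then fetch from the equirectangular map at the corresponding UV. The face bases below are one possible convention for illustration, not any spec's layout:

```python
import math

# Hypothetical (forward, right, up) basis per cube face; conventions
# differ between graphics APIs, which is part of the debate above.
FACES = {
    "+x": ((1, 0, 0), (0, 0, -1), (0, 1, 0)),
    "-x": ((-1, 0, 0), (0, 0, 1), (0, 1, 0)),
    "+y": ((0, 1, 0), (1, 0, 0), (0, 0, -1)),
    "-y": ((0, -1, 0), (1, 0, 0), (0, 0, 1)),
    "+z": ((0, 0, 1), (1, 0, 0), (0, 1, 0)),
    "-z": ((0, 0, -1), (-1, 0, 0), (0, 1, 0)),
}

def texel_direction(face, u, v):
    """Unit direction through a cube-face texel at (u, v) in [0, 1]."""
    fwd, right, up = FACES[face]
    a, b = 2.0 * u - 1.0, 2.0 * v - 1.0
    d = tuple(fwd[i] + a * right[i] + b * up[i] for i in range(3))
    n = math.sqrt(sum(c * c for c in d))
    return tuple(c / n for c in d)

def equirect_uv(d):
    """Equirectangular UV for a unit direction (assumed +Y up, -Z front)."""
    x, y, z = d
    return (0.5 + math.atan2(x, -z) / (2.0 * math.pi),
            0.5 - math.asin(y) / math.pi)
```

A loader would run `texel_direction` for every texel of each face, look up `equirect_uv`, and bilinearly sample the source image; the seam disappears because each face is continuous in UV.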

Convolved Maps for PBR IBL

This is a trickier issue and I'd love to hear what other people do. To implement IBL nicely, we want to have properly convolved images for all levels of specular reflectance (based on roughness) as well as purely diffuse lighting.

We can either generate these maps offline and send them with the glTF or we can rely on the individual runtime to generate them using importance sampling. I haven't seen a runtime implementation that is practical for the web and achieves really nice quality but maybe future Javascript performance will solve that issue for us? Having the runtime be responsible for generating these maps is certainly the most flexible and makes defining a spec in glTF much simpler.
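As a sketch of the runtime-generation option (hypothetical names; a real implementation would run on the GPU and importance-sample the specular lobe per roughness level as well), diffuse convolution by cosine-weighted Monte Carlo sampling looks roughly like this:

```python
import math
import random

def diffuse_convolve(n_samples, radiance):
    """Estimate diffuse irradiance about a +Z normal by averaging the
    environment radiance over cosine-weighted hemisphere directions."""
    total = 0.0
    for _ in range(n_samples):
        u1, u2 = random.random(), random.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        d = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        total += radiance(d)
    # With cosine-weighted sampling the cosine and the pdf cancel,
    # so the estimator reduces to the sample mean.
    return total / n_samples

# Sanity check: a uniform environment convolves to the same constant.
random.seed(0)
est = diffuse_convolve(4096, lambda d: 1.0)
```

The variance of this estimator under small sample counts is exactly the quality concern raised above: a web runtime either spends many samples per texel or accepts noise that an offline pre-filter would not have.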

MiiBond commented 6 years ago

Hi guys. We'd like to start pushing this discussion forward as this extension (along with KHR_lights) is something that we need as soon as possible. I'd really like to hear what others think about this.

After talking to @emilian0 about the need to render using a cube map (to avoid the mipmap artifacts I mentioned above), he pointed out that, since the intention of glTF is to be a format that is as close to the runtime as possible, supporting cube maps in the format is important. It also allows authoring environments with as little distortion as possible.

And we'd also like to voice our support for combining this extension with KHR_lights, as mentioned by @javagl

space2 commented 6 years ago

Hi!

So I'm kinda an amateur/noob regarding glTF, but here are my opinions:

Regarding file formats, I suggest KTX. I mean, it's a Khronos file format, glTF is a Khronos file format... it would be very weird if KTX were not supported. Also, KTX is a very simple file format, close to OpenGL; anyone can write support for it in a few lines of code. In the worst case, people could use converters.

Regarding the texture format: I always used the lat/long (i.e. panorama) format in the past, mainly due to its simplicity (one single image, a simple formula to do texture lookups). However, cubemaps are better from a quality point of view. I suggest including both (and only these two) for a start. This would cover 99% of the use cases, I think. Also note that KTX supports cubemaps, so we can have a single texture file describing the whole cubemap.

Regarding the lighting impact: I think this is where people will be divided. On one hand, it would make sense to use the env map for lighting as well. But what if the env map is LDR only? It would not work well anyway. Also, there is a special case: what if the lighting is already baked into the material? This is the case with 3D scanning, where the lighting information is already in the material texture, whether we like it or not (and it's quite complicated to remove). However, in such cases it might be possible to create an env map as a side product of 3D scanning, and showing the env map together with the scanned object will look better. So, long story short: I think there should be an option/property which controls whether the env map should impact lighting or not.

donmccurdy commented 6 years ago

So long story short: I think there should be an option/property which controls whether the env map should impact lighting or not.

There is a proposed unlit material extension that could address this case: https://github.com/KhronosGroup/glTF/pull/1163

Nehon commented 6 years ago

Regarding file formats I suggest KTX.

With all due respect, KTX is not what you would call a widely used format. Choosing only this format would just add something else for engines to support, when they most probably already support hdr or exr for their own environments. IMO not a wise move if we want glTF adoption. KTX is simple as long as you don't implement all the supported compression formats, but if you want to support even the most common ones, the KTX loader implementation can become more complex than the glTF loader itself. The RGBM or RGBE encoding in PNG is not a good idea either, IMO. It introduces a non-standard encoding in a standard file format, which prevents the file from being opened or properly displayed in an image viewer, and basically makes it an invalid PNG file.

My 2 cents on this extension: from all that has been said, it sounds like a lot of work for a very uncommon use case - which is, basically, asset showcasing. Maybe for big terrain scenes, but for those I'll most probably have a sky dome/sphere/box set up with an HDRI texture. So IMO, an unshaded/unlit material extension is far more important to have than this one (if we have to pick a priority).

There are 2 things with environment maps:

One last point: the PBR pipeline has been chosen as the default material pipeline (which is great). If the argument behind this extension is lighting consistency, I feel it goes against the base PBR idea, which is to have "realistic" lighting under any kind of environment/light source. To me, a properly implemented PBR material cannot look "out of place" like I read above. Now, if the argument is instead that it could be used in other kinds of lighting implementations (phong, toon, etc.), then it should be part of that other lighting extension.

stevesan commented 6 years ago

(New to the community - hello :))

My 2 cents: my impression is that base glTF is meant to define the object (or hierarchy of objects), but not the lighting nor any aspect of the final rendering (post effects, etc. ...although we do have 'camera', which seems odd to me). The choice to stick to PBR materials makes sense for this goal: a glTF object can be dropped into any PBR renderer and look fairly accurate and fitting with the rest of the scene, regardless of lighting. Environment-based IBL seems to me purely a lighting concern, and thus does not belong in glTF.

On the other hand, if glTF is meant to define an entire 3D experience, then certainly you need IBLs, but a bunch of other stuff too. I don't think this is the way to go, since the use cases of a "drop-in object" are numerous, and one could imagine another spec being defined for entire scenes/experiences.

msfeldstein commented 6 years ago

We're looking into something like this, and it would be handy because our app is purely a model viewer, so it'd be nice to have one file for everything. But I agree this should be stored next to a glTF model, not inside of it.

emackey commented 6 years ago

Welcome @stevesan

My impression is that base glTF is meant to define the object

There was a lot of early discussion of this in 2.0 (see https://github.com/KhronosGroup/glTF/issues/696#issuecomment-253334702 and #746 and other issues). The general idea was that we do want glTF to be capable of sending whole scenes. But, lights and environments and such weren't ready to go when 2.0 was released, so, they're being worked on as extensions. If and when the extensions become mature and widely-supported, they can be candidates for moving to core glTF in a future version.

MiiBond commented 6 years ago

Just waiting for a Windows build and thought that I'd write up my current thoughts on this extension:

donmccurdy commented 6 years ago

Do we need to specify how the environment lights will be treated by the materials_common extension?

No immediate plans to move forward with KHR_materials_common, focusing on KHR_materials_unlit instead.

garyo commented 6 years ago

I'm just getting started with glTF. I'll be using it for whole-scene transfer between apps, and was surprised to find HDR textures and environment maps missing. Just a data point.

MiiBond commented 6 years ago

If KHR_environment requires KHR_lights, how would that work exactly?

"extensions": {
        "KHR_lights": {
            "lights": [
                {
                    "color": [0.7,  0.7, 0.5 ],
                    "intensity": 1.0,
                    "name": "dayLight",
                    "type": "directional"
                }
            ],
            "extensions": {
                "KHR_environment": {
                    "lights": [
                        {
                            "type": "environment",
                            "layout": "equirectangular",
                            "texture": 2,
                            "name": "iblLight"
                        }
                    ]
                }
            }
        }
    },

Does this break any rules to have the lights array of KHR_environment appended to the lights array of KHR_lights rather than an override? If so, should it be something more like this:

"extensions": {
        "KHR_lights": {
            "lights": [
                {
                    "color": [0.7,  0.7, 0.5 ],
                    "intensity": 1.0,
                    "name": "dayLight",
                    "type": "directional"
                },
                {
                   "type": "ambient",
                   "color": [1, 1, 1],
                   "extensions": {
                       "KHR_environment": {
                           "type": "environment",
                               "layout": "equirectangular",
                               "texture": 2,
                               "name": "iblLight"
                           }
                        }
                    }
                }
            ]
        }
    },
ivalylo commented 6 years ago

In my experience, you ultimately want to sample your environment map as a cube-map to avoid the discontinuity when using mipmaps. i.e. because the sampling at the boundary of the equirectangular texture isn't continuous (jumps from 0 to 1),

This looks like a bug in the mipmap generation. It should know how to "wrap" the texture while resizing, and then there would be no problem.

IMO, KTX support may not be bad, but if added, it sounds more like another extension. It is a generally useful format if you need custom mipmaps, GPU compression support, etc. So it means huge work by itself, before even getting to the env extension...

Why not just expose some HDR image formats as separate extensions, like the DDS extension? The env extension would then not have to deal with this issue, which is really not part of this extension. The engines can decide what to support.

Supporting multiple probes is a more advanced feature, so maybe it's for another extension?

Making KHR_environment dependent on KHR_lights sounds cool, but... what exactly does it depend on? It doesn't care about the other lighting. IMO, the environment is the most basic form of lighting, so some implementations may decide not to support KHR_lights.

I don't think letting the implementations do the convolution will create many discrepancies, since they will all need to be fast and will probably do some simple blur :)... Maybe the question is how bad such a runtime convolution would be, quality-wise? Imagine also if you have multiple probes; this is a stress that even game engines don't need to handle, since there it is always precalculated. However, if the convolution is done offline, this will require a format that also supports mipmaps, to handle different roughness values. It also means that the engine's BRDF model may be different from yours... Maybe if the engine supports mipmaps, and they are provided, it should just use them on your own responsibility. Otherwise, it does its own thing?

MiiBond commented 6 years ago

This looks like a bug with the mipmap generation. It should know how to "wrap" the texture while resizing, and there will be no problem.

This actually has nothing to do with the mipmap generation. It's an issue with how mipmaps are sampled. The hardware chooses a mip level based on the screen-space derivatives of the UVs (how they're changing across the polygon). If the UVs are discontinuous (i.e. jump directly from 0 to 1), like in this case, the derivative becomes huge and a far-too-coarse mip level is selected. Hence the artifact.
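The mechanism can be sketched numerically with a simplified, hypothetical model of the hardware LOD formula (reduced to one screen axis; real GPUs use the maximum footprint over both axes):

```python
import math

def mip_level(du_dx, dv_dx, texture_size):
    """Simplified GPU-style LOD selection: log2 of the texel footprint
    implied by the per-pixel UV derivatives along one screen axis."""
    footprint = texture_size * math.sqrt(du_dx ** 2 + dv_dx ** 2)
    return max(0.0, math.log2(max(footprint, 1e-20)))

# Smooth case: u advances ~1 texel per pixel -> base level is sampled.
smooth = mip_level(1.0 / 1024.0, 0.0, 1024)

# Seam case: u jumps from ~1.0 back to ~0.0 between adjacent pixels, so
# the derivative spans nearly the whole texture and a very coarse mip
# is chosen -- the visible line down the middle of the reflection.
seam = mip_level(1.0 - 1.0 / 1024.0, 0.0, 1024)
```

This is why wrap-aware mip generation cannot fix the artifact: the coarse level is selected at sample time, regardless of how the levels were built.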

UX3D-nopper commented 4 years ago

As the discussion regarding IBL and how to define it has popped up again, I want this extension to be discussed and reviewed. For simplicity, only panorama images should be supported; no need for spherical etc. Also, as a file format, I recommend using .hdr files, as they are supported in today's tools and are a standardized way to describe HDR.

In addition, we need a parameter for the default orientation of "front" and/or the center of the panorama image: +X or +Z and so on. This is required, as DCC tools have different conventions for this.

donmccurdy commented 4 years ago

I still have some reservations about creating an IBL extension, and shipping one with only panorama .hdr images feels like a particularly short-term workaround to me. @UX3D-nopper could you say more about why you would like to revisit the extension?

UX3D-nopper commented 4 years ago

As from 3D Commerce there is the demand to ship the IBL with the glTF. HDR and panorama, as it is widely used and utilized by DCC tools. Furthermore, as only the panorama image is provided, it is up to the engine implementor to use e.g. spherical harmonics vs. pre-filtered images. KTX2 is out of scope for today.

donmccurdy commented 4 years ago

As from 3D Commerce there is the demand to ship the IBL with the glTF.

I don't understand this requirement... Perhaps we can discuss more soon.

HDR and panorama, as it is widely used and utilized by DCC tools.

Unity stores reflection probes as cubemaps. three.js and Babylon support equirectangular IBL, but have to convert it to cubemaps at load time before use, to my understanding. The projection produces artifacts at the poles, so we tend to find cubemaps preferable.

lexaknyazev commented 4 years ago

IBL storage, transmission, and usage comprise several key questions that haven't been thoroughly investigated for glTF yet. We cannot make a KHR extension otherwise. The current state of existing DCC tools shouldn't be a deciding factor here.

Note that most of the following questions do not depend on each other.

Shape

Orientation

glTF defines fixed XYZ directions and we even rejected an extension that was supposed to remap global axes. For the same reasons, standardized IBLs should have a fixed orientation.

Values Interpretation

The values coming from an IBL should have a well-defined physical meaning. This implies defining their range and possible runtime adjustments (bias / multiplier).

Bitstream format

Regardless of prefiltering, there are multiple storage options. The final choice should take into account data transmission, runtime processing, and VRAM costs.

emackey commented 4 years ago

Also, storing non-pre-filtered IBLs, such as HDR / raw RGBE, directly in the glTF goes a little against the spirit of delivering the data in a ready-to-render form, as pre-filtered data would be.

For example, BabylonJS developed their own *.env format (which is easy to create) to store the results of pre-filtering a *.hdr file. It would be fantastic if Khronos could offer an open-standard version of something in a similar role to Babylon's *.env, one that could deliver an IBL that is pre-filtered and ready to use with a PBR model, with the orientation and exposure level firmly specified.

This could be KTX2, if it can include both diffuse and specular pre-filtered environments. Otherwise, I think we could use an empty glTF file with a KHR extension as a container for such an environment. It might even warrant a new file extension, to indicate that it contains only an environment with no model and is intended to be loaded alongside some other glTF model file. Of course, you could still bundle such an environment along with a model in a single glTF file.

elalish commented 4 years ago

I agree with @lexaknyazev but disagree with @emackey. Every renderer I know (babylon, filament, three, and the sample renderer) uses a different IBL prefiltering format, which also implies different shaders to interpret it. They have different pros and cons, different artifacts, different upfront and per frame costs. There is not at all a clear "right way" and they are in no way interoperable, yet they all do a pretty good job with PBR. Also, as I've demonstrated with three.js, it is even possible to get good results now with just-in-time prefiltering, which removes the need for transmitting prefiltered IBLs at all (since I can prefilter them in less time than a texture upload takes).

emackey commented 4 years ago

That's a fair observation that different engines need different pre-filtering.

I still think this type of extension is to be handled with extreme caution. Typical glTF models are intended to integrate into a variety of lighting environments, unless the model contains a complete scene description including the environment (which has not been a common use case so far, to my knowledge).

I've seen issues on GitHub (and I'm not naming names) where developers wanted to prevent users from selecting their own lighting environments, preferring to wait for Khronos to ship the IBL along with the model. This is not the expected default case. A typical glTF file contains a single object or a couple of objects that are to be placed into a lighting environment of the client's choosing.

garyo commented 4 years ago

I don't know about others, and I'm sure I'm out of the mainstream, but I'm using glTF as an exchange format between my front end (three.js) and my back end (blender). Right now I have my own scene file format that includes the environment (typically .hdr or .exr, equirectangular) plus the glTF scene object with pretty much everything else, because I can't represent the environment in glTF. I'd love to have it all in glTF.

UX3D-nopper commented 4 years ago

That's a fair observation that different engines need different pre-filtering.

I still think this type of extension is to be handled with extreme caution. Typical glTF models are intended to integrate into a variety of lighting environments, unless the model contains a complete scene description including the environment (which has not been a common use case so far, to my knowledge).

It is a common use case, e.g. Bright Little Tokyo https://sketchfab.com/3d-models/bright-little-tokyo-40ca86eb17d0418bbd1b5e5308ba346b and Postwar City - Exterior Scene https://sketchfab.com/3d-models/postwar-city-exterior-scene-30b694d1a4074855a1116a15a0f75731 - and there are many more scenes available. TurboSquid also supports scenes, e.g. https://www.turbosquid.com/3d-model/architecture And finally, glTF supports a scene including several root nodes plus a node hierarchy: https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#scenes
If the only use case of glTF were to display one mesh, the scene concept would be obsolete.

I've seen issues on GitHub (and I'm not naming names) where developers wanted to prevent users from selecting their own lighting environments, preferring to wait for Khronos to ship the IBL along with the model. This is not the expected default case. A typical glTF file contains a single object or a couple of objects that are to be placed into a lighting environment of the client's choosing.