The bar set by KHR_materials_common isn't just lying on the floor, it sank into a tarpit accompanied by some fixed-function dinosaur bones.
Lol :-)
All agreed: common shader intermediate formats have failed (there was hope of a common shader description language when we started COLLADA), and the common profile has been stuck in prehistoric land waiting for common shaders to take off. (Remember CgFX?)
Currently (at Starbreeze) we're using the Cg profile to define and visualize PBR in Maya, export, and use the parameters from the .dae (not the shaders) in the engine loader. So we already have a path from content tool to any engine with COLLADA. This includes whatever glTF subset is decided on, single or multiple representation.
Once this settles, let's get this into collada2gltf and have a cool pipeline from DCC to web with PBR.
Maybe @mlimper @tsturm et al might want to comment here as well, in view of https://github.com/KhronosGroup/glTF/pull/643 and the other PBR-related issues.
@javagl thanks for the pointer
This is poor separation of concerns because a single shader describes more than just the material in use on the glTF model; it also describes part of the lighting environment that may not be included in the glTF.
The key to making this work is to allow PBR to specify the material properties of the model without any knowledge of the lighting/reflection environment
Totally agree here.
The bar set by KHR_materials_common isn't just lying on the floor, it sank into a tarpit accompanied by some fixed-function dinosaur bones.
This is true, especially from a point of view that considers the lighting / material capabilities of KHR_materials_common compared to the current state of the art in games etc. However, I still very much like what KHR_materials_common has achieved for our main use case (with instant3Dhub): we do a lot of CAD visualization, where we just need a compact, OpenGL 1.0-style material representation, and this extension really gave us the possibility to express what we actually need and use. We could even use the same representation for different (also non-Web) rendering engines. Having to use a specific shader, including lighting, tone mapping and everything, will not work in many, even very simple, cases. A shader is way too low-level a representation - imagine, for example, that your renderer has specific requirements, such as using multiple render targets to write information to a picking buffer. Having to glue together shader code from your asset and from your own engine will just result in a mess. Therefore, when it comes to building a commonly accepted asset interchange format, I tend to agree with the statement of @RemiArnaud:
Common shader intermediate formats have failed
I believe that the separation of concerns mentioned by @emackey is really a key point when we want to make glTF usable and successful. This is challenging, as we start from a state where things are already, in parts, mixed up. The KHR_materials_common extension was already a great step, and it showed the need for a way to specify light sources additionally / separately, as soon as we use a dedicated, separate material representation. At this point, making light sources part of the extension was a pragmatic decision, but maybe this can also be changed within the near future.
Strawman proposal:
Thinking about the PBR extension, it could make sense to refactor the lighting part out of the KHR_materials_common spec and put it into one or more separate extension(s). Since PBR pipelines seem to use some kind of (irradiance) environment map in most cases, there could be one extension such as KHR_lighting_common for the "simple" cases, like directional lights and point lights, and another extension such as KHR_lighting_environments for cube maps / spherical maps / ... . What do you think?
I'm not sure we should address light descriptions now. Today, when we use an environment for lighting, there are different formats and components involved, and because in WebGL 1.0 you don't have texture_lod on all devices, you will probably have to provide another format, like a panorama or another projection. It's hard enough to agree on a PBR extension to transport materials; I'm not sure we should start on lights/environments now.
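To make the texture_lod problem concrete, here's a minimal GLSL ES 1.00 sketch of the kind of fallback it forces (the uniform and function names are hypothetical):

```glsl
#ifdef GL_EXT_shader_texture_lod
#extension GL_EXT_shader_texture_lod : enable
#endif

precision mediump float;

// Pre-filtered environment cube map; name is illustrative only.
uniform samplerCube u_envMap;

vec3 sampleEnv(vec3 dir, float lod) {
#ifdef GL_EXT_shader_texture_lod
    // Explicit LOD selection, available only where the extension is supported.
    return textureCubeLodEXT(u_envMap, dir, lod).rgb;
#else
    // Fallback: the third argument is only a relative LOD *bias* here, so
    // roughness-based prefiltered lookups become much less precise.
    return textureCube(u_envMap, dir, lod).rgb;
#endif
}
```

On devices without the extension, an asset pipeline would more likely switch to a different representation entirely (e.g. a 2D panorama with manually packed mip levels), which is exactly the format fragmentation described above.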
I think the Model IO framework's description has been pretty successful for this purpose. I put it here for your consideration.
https://developer.apple.com/reference/modelio/mdlmaterialsemantic
It encodes the majority of common PBR material models, such as Disney's and those in game engines, as well as Lambertian and Blinn-Phong properties.
It is also very similar to what the Alembic team is considering for their preview material.
@cedricpinson agreed that lighting/environment concerns should be kept out of the PBR extension itself, to the extent possible. But the question does come up: when a glTF renderer is handed a smooth metallic PBR surface, what should it render? My own answer is that this is a concern of the renderer, of the scene into which the glTF is being loaded, and not something that can be answered inside the glTF file itself.
This has an important implication for glTF renderers: they will need a way to allow the app developer, not the glTF author, to specify (or compute) the environment into which the glTF is being loaded. This could potentially be a cube map or a panoramic map or anything else, but this map is not part of the glTF spec and is not included with a downloaded glTF file. The Three.js glTF loader could allow the developer to specify a custom environment cube when loading a glTF. The Cesium glTF loader would ideally compute an environment map automatically, since Cesium must be given a geographic location for the glTF and can render a reflection map from that location. Inside the glTF file, though, there is simply a PBR material calling for a shiny metal object, with no texture map saying what reflection might be seen in the metal.
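To illustrate where that split could land at the shader level, here's a hedged sketch (all uniform names are hypothetical, not from any spec):

```glsl
// Material parameters: these would come from the glTF asset itself.
uniform vec4  u_baseColorFactor;
uniform float u_metallicFactor;
uniform float u_roughnessFactor;

// Environment: bound by the host engine at load time, NOT stored in the
// glTF file. Three.js could bind a developer-supplied cube here; Cesium
// could bind a reflection map rendered for the model's geographic location.
uniform samplerCube u_environmentMap;
```

The asset describes only the first group; the second group is entirely the engine's concern.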
At a higher level, conceptually, the glTF is only concerned with itself and its content - "I have an object here, this part of it is very shiny" - and not concerned with what's happening in the rest of the scene or rendering engine that loaded the glTF file. The engine has a different set of concerns - "I have a scene with lights, and I have a map of reflections from inside of my scene, and now I'm being told to load this glTF file and place it among the existing contents of my scene." The engine should load the glTF with the shiny object, and make it look nicely reflective when placed into the scene.
I'm also curious to hear from @erich666 and @jeffdr on this thread, if they have comments.
agreed that lighting/environment concerns should be kept out of the PBR extension itself, to the extent possible.
@cedricpinson @emackey no contradiction intended, that was my initial point: lights should be put into a separate spec, if we want them in glTF. Also agree that we should do the PBR material extension first.
They will need a way to allow the app developer, not the glTF author, to specify (or compute) the environment into which the glTF is being loaded. This could potentially be a cube map or a panoramic map or anything, but this map is not part of the glTF spec and is not included with a downloaded glTF file.
Does it imply that glTF (with PBR) cannot contain the environment? In other words, can glTF contain a whole scene (with lights), or is it only for models for now?
@mlimper Yes, it sounds like we're on the same page. I do like the idea of refactoring KHR_materials_common into a KHR_lighting_common, separate from PBR as you say. But environment/reflection maps should stay out of the glTF entirely, IMHO.
Consider the case of a shiny car with headlights inside a glTF file. The PBR materials call for a shiny blue metallic body, but do not specify what imagery is reflected on that body. The KHR_lighting_common extension you suggested could specify that there are working light sources inside the headlights on the front of the car. I could even imagine an extension specifying that there's a working dash-cam inside the car that the user may wish to look through. The car is loaded into a scene that may have its own objects - a street lamp, a mailbox, a traffic cam, etc. - but the glTF file has no idea what lights or objects may exist in the scene outside of the file. Still, the scene's imagery should reflect on the shiny car body. And someday, after KHR_lighting_common is created, the car's headlights may shine onto the rest of the scene.
@lexaknyazev actually I think containing a whole scene is a simpler goal than containing a partial scene that can integrate into a larger scene. Long-term, I think glTF should be capable of both.
@emackey
containing a partial scene that can integrate into a larger scene
Agreed on the high-level goal; it's well aligned with the "metaverse" concept, which could drive glTF usage.
The car is loaded into a scene that may have its own objects - a street lamp, a mailbox, a traffic cam, etc - but the glTF file has no idea what lights or objects may exist in the scene outside of the file.
Long-term, separate glTF files could be used for each of those entities: e.g., a first glTF with environment maps/lights, a second glTF with static geometry, and a user-selectable glTF with a car.
@mlimper
Strawman proposal: Thinking about the PBR extension, it could make sense to refactor the lighting part out of the KHR_materials_common spec and put it into one or more separate extension(s).
In its current state, KHR_materials_common is still a non-ratified draft. Maybe it would be better to do such refactoring instead of finishing it?
I like to think we're on the same wavelength here. Do we all agree that PBR should be well-defined, as far as the equations used to evaluate the material? Also, what about specular/glossiness? That model seems less well-defined to me, so I don't particularly want to support it.
To be safe, I'd strongly suggest that we want to have actual code for one or both shading models, not just "use the GGX function" or even written equations. It's easy to write equations down incorrectly (like how I found a bug in Equation 5 in the original proposal). It's easy to detect problems in code by testing it.
This means a sample WebGL or three.js program that shows exactly what equations are to be used. My very long letter in this issue notes how there are differences in which form of GGX to use, which Schlick, and even gamma == 2.0 (three.js's backwards-compatible choice) vs. gamma == 2.2 vs. sRGB. Let's commit our choices to code. Why? Because then we can legitimately claim: "if you make these same choices, your material will work the same with other applications that make the same default choices." Making sure the producer and consumer see the same thing is important to most applications. If you want to vary for some reason, use some other form of GGX or whatever, so be it, but you'll be making a conscious decision to stray from the spec and so can inform your users of this fact.
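For concreteness, here is a minimal GLSL sketch of one such set of choices - purely illustrative, not normative code from the proposal: GGX for the distribution, Schlick for Fresnel, and alpha = roughness² as the convention.

```glsl
const float PI = 3.14159265359;

// GGX / Trowbridge-Reitz normal distribution, using alpha = roughness^2.
float D_GGX(float NdotH, float alpha) {
    float a2 = alpha * alpha;
    float d  = NdotH * NdotH * (a2 - 1.0) + 1.0;
    return a2 / (PI * d * d);
}

// Schlick's approximation of the Fresnel term.
vec3 F_Schlick(vec3 f0, float VdotH) {
    return f0 + (1.0 - f0) * pow(1.0 - VdotH, 5.0);
}

// Height-correlated Smith visibility for GGX; this form already includes
// the 1 / (4 * NdotV * NdotL) denominator of the microfacet BRDF.
float V_SmithGGX(float NdotV, float NdotL, float alpha) {
    float a2 = alpha * alpha;
    float gv = NdotL * sqrt(NdotV * NdotV * (1.0 - a2) + a2);
    float gl = NdotV * sqrt(NdotL * NdotL * (1.0 - a2) + a2);
    return 0.5 / max(gv + gl, 1e-5);
}
```

Even this short listing pins down which GGX and which Schlick; the gamma choice would be one more explicit line, e.g. pow(color, vec3(1.0 / 2.2)).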
Yes, there's the whole illumination question - point lights vs. image-based lighting, where you will likely use various mipmap and pre-convolution techniques for the lighting. So for a reference implementation, start with just a single directional light, i.e., a single light-direction sample, as the lighting used in the implementation. The reference shader itself can be simple and unoptimized, with the stress on the implementation of the material itself (which is where most of the work is in most fragment shaders, anyway).
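Putting the pieces above together with a single directional light, the whole reference evaluation could be as small as this (again just a sketch; the uniform names are mine):

```glsl
uniform vec3 u_lightDir;   // normalized direction pointing toward the light
uniform vec3 u_lightColor;

vec3 shade(vec3 N, vec3 V, vec3 baseColor, float metallic, float roughness) {
    vec3 L = normalize(u_lightDir);
    vec3 H = normalize(V + L);
    float NdotL = max(dot(N, L), 0.0);
    float NdotV = max(dot(N, V), 0.0);
    float NdotH = max(dot(N, H), 0.0);
    float VdotH = max(dot(V, H), 0.0);

    float alpha = roughness * roughness;                // convention from above
    vec3  f0    = mix(vec3(0.04), baseColor, metallic); // dielectric f0 = 0.04
    vec3  diffuseColor = baseColor * (1.0 - metallic);

    // Cook-Torrance style specular from the D, F and V terms defined above.
    vec3 specular = D_GGX(NdotH, alpha)
                  * V_SmithGGX(NdotV, NdotL, alpha)
                  * F_Schlick(f0, VdotH);
    vec3 diffuse  = diffuseColor / PI;                  // Lambertian diffuse

    return (diffuse + specular) * u_lightColor * NdotL;
}
```

The point is not realism, but that it removes every degree of freedom except the names.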
I should mention that a simple directional-light implementation of shiny metal will look unrealistic. That's because we don't live in unlit caves with single light sources. Metal is reflective, and looks bad if there's not an environment around it giving it something to reflect.
An example: here's metalness/roughness (a la three.js, but without the somewhat deceptive black background) with just positional lights.
Here's the same shiny metal with an environment.
These are from a program I was starting to make standalone (extracting the vertex and fragment shaders from three.js and making them easy to read), which would provide a basic PBR implementation based on the proposal and three.js's implementation. I could continue the effort if I can find the time and it's considered worthwhile.
Making sure the producer and consumer see the same thing is important to most applications. If you want to vary for some reason, use some other form of GGX or whatever, so be it, but you'll be making a conscious decision to stray from the spec and so can inform your users of this fact.
Could not agree more. In the video formats world there's a lot of confusion and UX inconsistency, usually because of unspecified or ignored choices such as color space (BT.601 / BT.709 / BT.2020, plus sRGB, plus different transfer functions), color range (full / limited), display/frame/pixel aspect ratio, etc.
A big "+1" from me for this:
To be safe, I'd strongly suggest that we want to have actual code for one or both shading models ... This means a sample WebGL or three.js program, that shows exactly what equations are to be used.
I'll have to read the "long letter" that you referred to. But this point is exactly where I stumbled when I considered implementing PBR, or even just the common materials extension: there is no code, and far too many degrees of freedom for the implementation. Real usefulness, in terms of "standardization" - or the goal of making sure that all clients generate "the same" rendered image for the same glTF input - can only be achieved when the number of degrees of freedom is basically limited to ... the variable names.
However, it is far from trivial to "specify" such an implementation. E.g., for the common materials, I was referred to https://github.com/AnalyticalGraphicsInc/cesium/blob/master/Source/Scene/modelMaterialsCommon.js , which seems to be the core of the reference implementation - and I'm not sure how this could be brought into a form that can be adapted by other implementors (in other languages, and other environments in general).
@erich666
These are from a program I was starting to make standalone (extracting the vertex and fragment shaders from three.js and making them easy to read), which would provide a basic PBR implementation based on the proposal and three.js's implementation. I could continue the effort if I can find the time and it's considered worthwhile.
Yes please! In particular, I think we should extract enough of Three.MeshStandardMaterial to be able to construct a glTF + GLSL file (core glTF 1.0) that contains a PBR shader without any PBR extension. Getting this file will go a long way towards specifying the details of how the PBR material gets rendered. Later, we can replace the shader with the new PBR extension, and have the GLSL code as the gold standard for implementation details.
Thank you all so much for your efforts - I commented on the topic of the Three.js example app in the respective thread: https://github.com/KhronosGroup/glTF/issues/697
Just popping in to second the notion of not including/requiring specific shaders or lights (basically, leaving lighting up to the client, who only has to match a BRDF given in the specification). The needs of lighting are pretty application-specific, and can be neatly separated from material definitions in any case.
Concise example code is probably the best way to illustrate this, though I'd include some actual written equations as well. A lot of folks might just copy/paste this, which would be fine.
Concise example code is probably the best way to illustrate this, though I'd include some actual written equations as well.
The equations are in there for the roughness/metalness piece; see the appendix. It would indeed be nice to have the same for specular/glossiness.
Cool. We should just standardize spec/gloss to the same BRDF there. "Specular" is just a different way of providing the 'f0' value, and the simplest way to map glossiness to roughness is to just do roughness = 1 - gloss;
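A sketch of that mapping in code, assuming both material models feed the same BRDF (variable names hypothetical):

```glsl
// Convert spec/gloss material inputs into the parameters of the shared BRDF.
void specGlossToBrdf(vec3 specularColor, float glossiness,
                     out vec3 f0, out float roughness) {
    f0        = specularColor;      // "specular" directly provides f0
    roughness = 1.0 - glossiness;   // simplest possible remapping
}
```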
@emackey any action left here or is it OK to close?
Thanks all.
This is an attempt to specify the goals of adding PBR to glTF, since the question came up. I'll start with my own perspective from the Cesium point of view, and others please chime in with corrections or suggestions for realignment of concerns as needed.
The goal of PBR in glTF is to transmit high-quality 3D models to foreign engines while maintaining separation of concerns with existing lights, cameras, objects, and environments in those engines.
The problems with programmable shaders aren't limited to choice of language or platform. A programmable shader contains the lighting calculations, and must have been written with prior knowledge of (or parameterized inputs for) the lighting and reflection environment. This is poor separation of concerns because a single shader describes more than just the material in use on the glTF model; it also describes part of the lighting environment that may not be included in the glTF.
One concrete example is when a user downloads a 3D model off a model-sharing or models-for-purchase site, and the user wants to incorporate their new model into a pre-existing scene with user-supplied lighting and reflection environment. The model they download cannot make any assumptions about the environment or lighting into which it is being placed.
I'll use Cesium for another example here. A typical Cesium user might load a set of GPS tracks along with some Earth imagery. When they animate, they will see a 3D display of points moving in time around the city where the tracks were recorded. If the user zooms into street level, they want to see a 3D model of the vehicle, not just the point. So they load glTF models of their vehicles to replace the simple points. The model that gets loaded cannot make assumptions about existing lights (the Sun may be up, or nearby street lights may be lit). The model also cannot supply its own reflection map (we'll need Cesium to calculate that). However, the model can contribute its own light sources, for example if the vehicle being loaded has headlights to contribute to the scene.
This type of scenario can play out separately from Cesium, of course. A Three.js user may have a scene of their own constructed in Three.js, and wish to add one or more 3rd-party glTF models to their scene.
Cesium and Three.js can both load glTF 1.0 models. These typically come in one of two flavors. For a vanishingly tiny percentage of glTF models, a bundled GLSL shader attempts to make use of a normal map or other "modern" shading, and ends up taking complete control over the lighting and reflection map of the model in the process. For these models, existing scene lights and the environment cube are completely ignored. But the vast majority of glTF models include just a diffuse map, nothing more. In these cases, a generic glTF model has no normal map and no reflections, and looks like it came directly from the world of OpenGL 1.0 fixed-function models of the 1990s. The bar set by KHR_materials_common isn't just lying on the floor, it sank into a tarpit accompanied by some fixed-function dinosaur bones.
Enter PBR. (Cue angels_sing.sfx, enable godrays... just kidding) Realistically, it doesn't have to be pixel-perfect to still be considered a major, groundbreaking development in standards. It has the potential to allow models and even partial scenes to be transmitted to 3rd-party rendering engines and integrated with existing models, lighting, and reflections already present in those engines. No other existing model format can accomplish this in an engine-agnostic, scene-agnostic manner. The key to making this work is to allow PBR to specify the material properties of the model without any knowledge of the lighting/reflection environment, maintaining good separation of concerns between the contents of the glTF and the contents of the scene into which the glTF is being loaded. That's why the world needs PBR in glTF.