AcademySoftwareFoundation / OpenPBR

Specification and reference implementation for the OpenPBR Surface shading model
Apache License 2.0

Add volume shading support #169

Closed. msuzuki-nvidia closed this pull request 4 months ago.

msuzuki-nvidia commented 4 months ago

Add volume shading support.

These parameters are used in the VDF.

This PR closes #154.

portsmouth commented 4 months ago

By the way, I had no time to check what this does before it was merged. (To understand it, I would need to reverse engineer the XML code into math/pseudo-code.)

jstone-lucasfilm commented 4 months ago

@portsmouth One option to consider is running the graph through shader generation, e.g. in the MaterialX Viewer, and you can then read through the corresponding code in GLSL, OSL, or MDL. The hotkeys for this functionality are G (GLSL), O (OSL), and M (MDL).

portsmouth commented 4 months ago

I think it amounts to the following pseudo-code, so it looks reasonable.

I find the abstraction of "layering" a dielectric BSDF on top of a volumetric medium (VDF) a bit artificial, as discussed on Slack, since what we describe in OpenPBR is physically not the same as placing a thin layer of dielectric on top of a volume. But there is no other way to express this in MaterialX, so I guess we have to assume the renderer will be responsible for making sense of it.

T = transmission_weight
D = transmission_depth
C = transmission_color
S = transmission_scatter
g = transmission_scatter_anisotropy

dielectric_base = layer(dielectric_reflection,
                        dielectric_substrate);

dielectric_reflection = dielectric_bsdf(weight = 1.0, scatter_mode=R, ...);

dielectric_substrate = mix(dielectric_volume_transmission, 
                           opaque_base, 
                           T);

dielectric_volume_transmission = layer(dielectric_transmission, 
                                       dielectric_volume);

dielectric_transmission = dielectric_bsdf(weight = 1.0, scatter_mode=T,
                                          tint = (D > 0.0) ? vec3(1.0) : C,
                                          ...);

opaque_base = mix(diffuse_bsdf, subsurface_bsdf, subsurface_weight)

sigma_t = -ln(C) / D;
sigma_s = S / D;
sigma_a = sigma_t - sigma_s;

if min(sigma_a) < 0.0:
      sigma_a = sigma_a - vec3(min(sigma_a));

dielectric_volume = anisotropic_vdf(absorption = D > 0.0 ? sigma_a : vec3(0),
                                    scattering = D > 0.0 ? sigma_s : vec3(0),
                                    anisotropy = g);

Having to do this just to get the minimum component of a vector is quite ugly, but perhaps that's the only way to do it in MaterialX:

    <extract name="absorption_coeff_x" type="float">
      <input name="in" type="vector3" nodename="absorption_coeff" />
      <input name="index" type="integer" value="0" />
    </extract>
    <extract name="absorption_coeff_y" type="float">
      <input name="in" type="vector3" nodename="absorption_coeff" />
      <input name="index" type="integer" value="1" />
    </extract>
    <extract name="absorption_coeff_z" type="float">
      <input name="in" type="vector3" nodename="absorption_coeff" />
      <input name="index" type="integer" value="2" />
    </extract>
    <min name="absorption_coeff_min_x_y" type="float">
      <input name="in1" type="float" nodename="absorption_coeff_x" />
      <input name="in2" type="float" nodename="absorption_coeff_y" />
    </min>
    <min name="absorption_coeff_min" type="float">
      <input name="in1" type="float" nodename="absorption_coeff_min_x_y" />
      <input name="in2" type="float" nodename="absorption_coeff_z" />
    </min>
    <convert name="absorption_coeff_min_vector" type="vector3">
      <input name="in" type="float" nodename="absorption_coeff_min" />
    </convert>
jstone-lucasfilm commented 4 months ago

That's a good point about computing the minimum/maximum components of vectors, and it seems reasonable to add dedicated nodes for these common operations in MaterialX 1.39.

Although there are slightly more compact ways to express this even in MaterialX 1.38, these are such fundamental operations that they really deserve their own dedicated nodes, e.g. maxcomponent and mincomponent.
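
For illustration, if a mincomponent node were added with a single vector input, the extract/min chain above could collapse to something like this (hypothetical syntax, not part of MaterialX 1.38):

    <!-- Hypothetical mincomponent node; not in the current spec. -->
    <mincomponent name="absorption_coeff_min" type="float">
      <input name="in" type="vector3" nodename="absorption_coeff" />
    </mincomponent>
    <convert name="absorption_coeff_min_vector" type="vector3">
      <input name="in" type="float" nodename="absorption_coeff_min" />
    </convert>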

portsmouth commented 4 months ago

Slightly off-topic for the PR, but for the subsurface BSSRDF, I see that the MaterialX graph is written as:

    <subsurface_bsdf name="subsurface_bsdf" type="BSDF">
      <input name="weight" type="float" value="1.0" />
      <input name="color" type="color3" nodename="subsurface_color_nonnegative" />
      <input name="radius" type="vector3" nodename="subsurface_radius_scaled" />
      <input name="anisotropy" type="float" interfacename="subsurface_anisotropy" />
      <input name="normal" type="vector3" interfacename="geometry_normal" />
    </subsurface_bsdf>

Whereas OSL has the following signature:

// Constructs a BSSRDF for subsurface scattering within a homogeneous medium.
//
//  \param  N                   Normal vector of the surface point being shaded.
//  \param  albedo              Single-scattering albedo of the medium.
//  \param  transmission_depth  Distance travelled inside the medium by white light before its color becomes transmission_color by Beer's law.
//                              Given in scene length units, range [0,infinity). Together with transmission_color this determines the extinction
//                              coefficient of the medium.
//  \param  transmission_color  Desired color resulting from white light transmitted a distance of 'transmission_depth' through the medium.
//                              Together with transmission_depth this determines the extinction coefficient of the medium.
//  \param  anisotropy          Scattering anisotropy [-1,1]. Negative values give backwards scattering, positive values give forward scattering, 
//                              and 0.0 gives uniform scattering.
//  \param  label               Optional string parameter to name this component. For use in AOVs / LPEs.
//
closure color subsurface_bssrdf(normal N, 
                                color albedo, 
                                float transmission_depth, 
                                color transmission_color, 
                                float anisotropy);

which doesn't match.

Ideally, for subsurface we would just have it take a VDF. In theory we could implement subsurface by making it layer(dielectric, VDF), but then the renderer would have no way to identify this as the subsurface, which it needs to know in practice (e.g. in Arnold the SSS uses different approximations than the general volume rendering).

If subsurface takes the arguments as in the MaterialX graph, then that's fine, but MaterialX will have to define how it maps them to the physical VDF, in a way that matches what we expect in OpenPBR (or expose some options).

msuzuki-nvidia commented 4 months ago

@portsmouth Agreed, it's a bit weird adding the VDF as a layer. I thought it should be added as a volume shader of the material, but since OpenPBR is defined as a surface shader, I couldn't find another way to assign the VDF.

And yes, we should add maxcomponent and mincomponent!

portsmouth commented 4 months ago

I thought it should be added as a volume shader of the material, but since OpenPBR is defined as a surface shader

Well, OpenPBR defines the material as a composition of "slabs" which have a surface BSDF and an interior medium, so it does make sense to have volumetric media involved in the shader.

The current way that this is exposed in MaterialX is potentially confusing though, since there is no "slab", just a general layer operator, and it is not very precisely defined what it means to layer a BSDF on top of a VDF. In theory the coat slab has an absorbing-only volume of finite optical depth, but it would not currently work in MaterialX to try to represent this as a layer (since e.g. the VDF has no associated depth).

In your implementation, it is implicit that the VDF of the dielectric_volume_transmission layer is actually the bulk medium at the bottom of the material. There is no explicit way in MaterialX to specify that this VDF is the bulk, though. It could be assumed that any VDF which is present is a bulk, or perhaps determined by checking that there is no layer under VDF_bulk (if it is even legal in MaterialX to have e.g. layer(layer(bsdf_coat, VDF_coat), layer(bsdf_bulk, VDF_bulk))), but at least it seems unclear.

So the representation of volumes in MaterialX currently seems rather provisional and incomplete. It is workable in an implementation, but only by making some ad-hoc assumptions that don't follow from the MaterialX spec itself.

jstone-lucasfilm commented 4 months ago

@portsmouth This sounds like a great area of discussion for proposing improvements to the MaterialX Physically Based Shading Nodes in 1.39 and beyond. The ideal forum for this would be the MaterialX channel of the ASWF Slack, where we can include key developers such as @niklasharrysson.

msuzuki-nvidia commented 4 months ago

@portsmouth Ah yes, I understand that the OpenPBR "spec" defines the model properly. I meant that the OpenPBR "reference .mtlx" defines it as a surface shader. Sorry I wasn't clear about this. And this is clearly because of the limitations of MaterialX.

portsmouth commented 3 months ago

In the current MaterialX graph, we represent subsurface volume and transmission volume in different ways.

For transmission volume, it's like:

layer( layer(VDF, dielectric_btdf), dielectric_brdf )

For subsurface, it's like:

layer( subsurface_bsdf, dielectric_brdf )

These seem somewhat inconsistent.
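
To make the comparison concrete, here is a rough sketch of the two forms as layer nodes with explicit top/base inputs (node names are illustrative, not the exact ones in the reference graph):

    <!-- Transmission: the VDF sits under a "T"-mode dielectric, with the
         "R"-mode dielectric layered over the result. -->
    <layer name="dielectric_volume_transmission" type="BSDF">
      <input name="top" type="BSDF" nodename="dielectric_btdf" />
      <input name="base" type="VDF" nodename="interior_vdf" />
    </layer>
    <layer name="transmission_base" type="BSDF">
      <input name="top" type="BSDF" nodename="dielectric_brdf" />
      <input name="base" type="BSDF" nodename="dielectric_volume_transmission" />
    </layer>

    <!-- Subsurface: the volumetric properties live inside subsurface_bsdf itself,
         which is layered directly under the "R"-mode dielectric. -->
    <layer name="subsurface_base" type="BSDF">
      <input name="top" type="BSDF" nodename="dielectric_brdf" />
      <input name="base" type="BSDF" nodename="subsurface_bsdf" />
    </layer>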

I don't much like breaking the dielectric BSDF into BRDF and BTDF, which are "layered" (as that's meaningless physically, they are just different hemispheres of the same BSDF -- e.g. the IOR cannot be different according to BRDF and BTDF). I guess that was done to allow for the unphysical tinting etc., but IMO it should have been achieved by putting the tints on a BSDF instead. (Also one can do meaningless things like layer(dielectric_brdf, dielectric_btdf), layer(dielectric_brdf, dielectric_brdf), etc. If a formalism lets you do meaningless things easily, it's probably not well designed.)

I also don't like that one embeds a volume in a dielectric via layer(VDF, dielectric_btdf), as this is embedding, not layering. Also, does that mean the volume sits only in the interior? So would layer(VDF, dielectric_brdf) mean a volume embedded in the exterior, or is that latter form just meaningless?

But anyway at least one can interpret this transmission volume setup relatively clearly, as a dielectric embedding the specified volume (VDF).

For the subsurface, it's unclear what it's supposed to mean, as the subsurface_bsdf does not refer to any dielectric properties itself. Of course one will just assume it means "that dielectric embeds that subsurface volume", but it's not as if MaterialX is actually telling you that; you just have to decide it's the only interpretation that makes sense.

What if one does

layer( layer(subsurface_bsdf, dielectric_btdf), dielectric_brdf )

Is that something different, or not?

I think this may be a legacy from Standard Surface, where we represent it that way because the subsurface in Arnold fakes the dielectric entry/exit with a diffuse lobe; so in a sense it is representing "volume embedded in dielectric" as well, just ignoring the dielectric properties. But this is unclear and unexplained in MaterialX.

I think this needs at least clarifying, and possibly redesigning how volumes are supposed to be represented in the graph.

msuzuki-nvidia commented 3 months ago

Yes... I hoped I could assign a VDF to the volumeshader in the shader definition, but MaterialX only allows doing that in the material instance for now. Nowadays it is common to define the volume properties in the definition of the shading model, so we may want to add this to the MaterialX spec sooner or later.

jstone-lucasfilm commented 3 months ago

@portsmouth This sounds like a great area for proposing new specification language and/or functionality in MaterialX, and feel free to bring this up on the MaterialX channel of the ASWF Slack, where @niklasharrysson and other domain experts can weigh in.

portsmouth commented 3 months ago

Maybe @niklasharrysson can comment here?

niklasharrysson commented 3 months ago

@portsmouth We've had this up for discussion before, and I understand your concerns.

The use of the layer operator has gradually evolved over the years, and it has become this general building block, with both pros and cons. The intent has always been to be able to arrange closure-like building blocks into a vertical ordering, for example to let a renderer know the ordering of interfaces composed in a layer stack.

As you have noted, its general design makes it possible to construct nonsensical configurations, so there are additional rules that one has to follow in order to use it correctly. I fully agree that this is not an optimal design, but it was the best we could come up with at the time. That was some 8-10 years ago, and one key requirement was that it had to be supportable by vanilla OSL & MDL as well as our rasterizer targets.

The following are examples of some of the rules I mentioned:

This is vaguely documented in the spec, and can always be explained better.

This being said, I would love to have something better. I think we are all very open to improvements of the specification in this area. The key requirement still remains though, that it has to work for OSL, MDL and all the other targets. So either it has to respect the current features and limitations in those systems, or all those systems have to evolve along with the new spec.

Some improvements have already been made. For example, the thin_film handling is reworked in v1.39, and is now handled by parameters on the BSDFs that support iridescence. So at least that part is not ambiguous anymore.

portsmouth commented 3 months ago

Thanks Niklas.

The spec says of dielectric_bsdf:

If only "R" mode is enabled, that could be interpreted two ways (at least):

Using the rules as stated, it seems the case of glass embedding a volume could be represented by either:

layer(VDF, dielectric_bsdf)
layer( layer(VDF, dielectric_btdf), dielectric_brdf )

Do those mean different things, or should one interpret them as the same? The latter version allows physical inconsistency, like different IORs in the BRDF and BTDF. (Our current graph does the latter of these.)

Similarly, subsurface could be represented by either of:

layer(subsurface_bsdf, dielectric_brdf)
layer(subsurface_bsdf, dielectric_bsdf)
layer( layer(subsurface_bsdf, dielectric_btdf), dielectric_brdf )

And it's not clear which of those to choose, or whether they mean different things or not.

Our current OpenPBR graph essentially uses the first of these forms:

dielectric_base = layer(dielectric_brdf,
                        dielectric_substrate);

dielectric_substrate = mix(dielectric_volume_transmission, 
                           opaque_base, 
                           T);

opaque_base = mix(diffuse_bsdf, subsurface_bsdf, subsurface_weight)

i.e. we have a dielectric BRDF layered on top of a mix, one branch of which is itself a mix containing the subsurface.

But I don't see conceptually how it makes sense for only the R mode to be enabled in this:

dielectric_base = layer(dielectric_brdf,
                        dielectric_substrate);

as light does transmit through the dielectric interface, into the subsurface medium eventually. (Even if the base is opaque, e.g. diffuse, the light transmits through the dielectric and bounces around in the thin layer, causing physical effects).

I suppose the subsurface_bsdf is representing the BTDF in this case (even though this BSDF in MaterialX doesn't itself know anything about the dielectric embedding the subsurface volume, which technically determines the BTDF as well), but it's at least confusing that the graph uses a "reflection only" dielectric "layered" on this subsurface to represent it.

portsmouth commented 3 months ago

This being said, I would love to have something better. I think we are all very open to improvements of the specification in this area. The key requirement still remains though, that it has to work for OSL, MDL and all the other targets. So either it has to respect the current features and limitations in those systems, or all those systems have to evolve along with the new spec.

Does that in practice mean that progress on this is very hard? It seems so.

portsmouth commented 3 months ago

To be clear, I think the concept that is missing here is the idea that a layer is always a slab of dielectric, with or without an embedded volume, representable as a dielectric BSDF associated with a VDF (or otherwise a totally opaque slab with some top BSDF, like metal or diffuse). All we are ever doing when layering is accumulating such slabs.

If you represent it that way, there is no ambiguity about different forms of representation etc. You are describing a material, not operating on BSDF lobes (which is not really physical: the final BSDF of a physical material of stacked layers is not really a linear combination of the sub-BSDFs of the interfaces).

The calculus reduces to:

S_base = Slab(base_bsdf, base_VDF)
S_coat = Slab(coat_bsdf, coat_VDF)
M  = layer(S_base, S_coat, coat_weight)

Exposing the concept of BRDF/BTDF to artists seems way too low-level, IMO, when the thing we are trying to represent is ultimately just layers of material, stacked.

But I understand that refactoring it into that form is non-trivial, given the limitations of OSL etc.

portsmouth commented 3 months ago

If we can't get there with the slab concept, it could be a good idea to try to refine the wording and presentation of the current MaterialX spec a bit, to make it as clear and unambiguous as possible. Some example graphs could be helpful to clarify how it is supposed to work, etc.

niklasharrysson commented 3 months ago

Hey Jamie, offline I wrote down some answers to the specific questions you raised in your first post above. So I'll just add this in here first, and then we can look more into the other things you posted after that. Too bad this forum doesn't support threaded answers...

In the current MaterialX graph, we represent subsurface volume and transmission volume in different ways. ... These seem somewhat inconsistent.

Indeed, this goes back to the classic special treatment that SSS has had in many systems. In MaterialX it's a separate BSDF that in itself holds volumetric properties for the internal scattering. It's a legacy of how things worked/work in many renderers (including Arnold, I think), as well as in OSL, and for rasterizers, where for performance reasons it's a separate dedicated closure. It would be nice to generalize this into a single description, if that is possible to do for all targets.

I don't much like breaking the dielectric BSDF into BRDF and BTDF, which are "layered" (as that's meaningless physically, they are just different hemispheres of the same BSDF

Yep, it's because of the separate tinting in Standard Surface (does OpenPBR use that as well?). But note that this is not something you have to use in MaterialX. With the dielectric_bsdf you can set scatter_mode to "RT" to make it behave as a single physical dielectric with both reflection and transmission. Both OSL and MDL have this feature, where a dielectric can be set to reflection only, transmission only, or both. Separating them is only for artistic control.
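
For reference, a minimal sketch of that single-BSDF form (node name and values illustrative):

    <dielectric_bsdf name="single_dielectric" type="BSDF">
      <input name="ior" type="float" value="1.5" />
      <input name="scatter_mode" type="string" value="RT" />
    </dielectric_bsdf>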

Also one can do meaningless things like layer(dielectric_brdf, dielectric_btdf), layer(dielectric_brdf, dielectric_brdf)

If by layer(dielectric_brdf, dielectric_btdf) you mean putting transmission over reflection, then yes that's a meaningless thing, as it breaks the rules of what is supported :)

layer(dielectric_brdf, dielectric_brdf) is actually very useful. It's to have multiple specular lobes, either to place a coating over another dielectric, or just to artistically have more control of the specular highlights/reflections. This was the first use case we had for the layer operator back in the day.

I also don't like that one embeds a volume in a dielectric via layer(VDF, dielectric_btdf)

Yeah, it's just the syntax we chose to declare volumetric properties for a transmissive bulk volume. It's overloading the use of the layer operator, and I agree it's not an ideal design. Again, the idea was to declare the layer stack in a logical ordering; for example layer(brdf, layer(btdf, vdf)) would mean the following, top down:

  1. brdf for the reflection at the volume interface
  2. btdf for the transmission into the volume
  3. vdf for the absorption/scattering inside the volume

(note that the layer operator takes the top component as first argument)

But it would be great to do a new take on the specification of layering, and especially for volumetrics, in particular considering the new developments you've done with OpenPBR, slab descriptions, etc. I assume this would be for a version target beyond 1.39 though, as time is short before 1.39 and this is a big subject.

niklasharrysson commented 3 months ago

If only "R" mode is enabled, that could be interpreted two ways (at least):

  • light only reflects from the dielectric interface, with no transmission at all. So the "layer" only shows the dielectric Fresnel, no base.
  • I assume that is not what you mean. So instead, it's actually a dielectric BSDF (allowing light to transmit down to the base), represented by albedo-scaling the BRDF on top of the base BRDF. That is not explicit in the spec though (you just say it is one "standard way" of doing it). So the "T" mode is effectively there as well, anyway in this "R" only layering case.

Ah, I see how this is a source of confusion. Yes, there are actually two different types of transmission going on. Let's call them micro and macro transmission:

  1. Micro transmission is what you get from putting a layerable BSDF over something else: by albedo scaling, the fraction of light that is not reflected gets transmitted to the base layer below. So instead of being directly absorbed, the base layer gets a chance to react to it. And if this base is also layerable, the light can continue further by albedo scaling again. This happens for all layerable BSDF nodes, whether or not they have a T mode. For example, it also happens for the sheen_bsdf. However, this type of transmission only happens at the "micro" level, in between the layers in the stack.

  2. Macro transmission is the refraction of light into, or out from, the volume at the "macro" geometric level. For this to happen there must be a BTDF to control it. This is the type of transmission referred to by the "T" or "RT" modes. So if the base layer is a BTDF, the fraction of light that reaches this base can be transmitted into the bulk volume, or out from it.

This could definitely need a better explanation in the spec. Thanks for bringing this up.
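
A minimal sketch of the micro-only case, reusing the sheen example (node names illustrative): the sheen_bsdf has no T mode, yet light it does not reflect still reaches the base via albedo scaling. Macro transmission would additionally require a "T"-mode dielectric_bsdf layered over a VDF, as in the transmission graph earlier in this thread.

    <!-- Micro transmission only: energy not reflected by the sheen is passed to
         the base below by albedo scaling; no refraction into a volume occurs. -->
    <layer name="sheen_over_base" type="BSDF">
      <input name="top" type="BSDF" nodename="sheen_bsdf" />
      <input name="base" type="BSDF" nodename="opaque_base" />
    </layer>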

niklasharrysson commented 3 months ago

This being said, I would love to have something better. I think we are all very open to improvements of the specification in this area. The key requirement still remains though, that it has to work for OSL, MDL and all the other targets. So either it has to respect the current features and limitations in those systems, or all those systems have to evolve along with the new spec.

Does that in practice mean that progress on this is very hard? It seems so.

The tricky part is to make sure that what we specify can be supported by all our targets. But I think we can definitely improve on this. Some things we've already done for 1.39, and improving the docs is a must.

For the double description of transmission vs SSS, I know MDL is using a unified description for this. So it's something we should look into. And the slab concept in general is very interesting.

The best thing about ASWF is that we have all teams represented there to share ideas. So if there is a good reason for updating something that is missing in a system we can easily discuss that with all teams.

portsmouth commented 3 months ago

I think a fundamental problem is that the things you're describing are all at the level of BSDF lobes. So this is all some mathematical manipulation of lobes (scaling and adding them). Of course this has happened due to the history of the development, as in Standard Surface we defined the model explicitly as a linear combination of various BRDF and BTDF lobes. MaterialX has taken this and reverse engineered it into this "informal layering" form, where it's really just re-expressing the linear combination, dressed up as layering and mixing operations. At that level, it's fine to use layer(top, bottom) just as a short-hand for the simple scheme top + (1-albedo[top])*base.

But physically, layering of BSDFs doesn't make sense, as BSDFs are not things you can layer; they are just functions describing the scattering properties of interfaces. In OpenPBR we have tried to be more principled by not expressing the model as a combination of lobes, but starting from the physical layering and mixing operations describing the structure. In this more physical picture, the material consists of thick slabs of material with defined interface BSDFs, which are stacked up on top of some base substrate. The interfaces between the layers have BSDFs (both hemispheres exist in general), but we are layering the slabs, not the BSDFs themselves (which doesn't mean anything anyway). Out of this whole system arises a final BSDF (or BSSRDF), with a (macroscopic) BRDF and BTDF, which one should be able to derive in principle, given the material model.

In MaterialX, meanwhile, you are essentially writing down a lower-level approximation of this as scaled sums of various BSDF lobes (perhaps with various hemispheres deleted, etc., in confusing ways). That's harder to think about and work with than the original material model (as defined in e.g. OpenPBR).

The final ground truth BSDF of OpenPBR will not in general be a linear combination of the interface lobes, so is not expressible in the current MaterialX form.

When you write:

layer(dielectric_brdf, dielectric_brdf)

and say this is very useful and represents a material with two specular lobes, to me that just means a short-hand for adding two BRDF lobes according to the approximate albedo-scaling formula. There is no deeper physical significance. In reality, perhaps this is supposed to mean a dielectric coat on top of a dielectric (like a car clear-coat)? That has physical meaning, and we can reason about it on that basis -- but the formula above does not convey it clearly.

My point of view would be that if you're going to work at this level of math with lobes, just work with lobes directly and dispense with the "layer" terminology. And also expose the albedo functions, so you can implement the full albedo-scaling expression, not just work in the abstract. But I think it's a bit fake to think of the current layer operation in MaterialX as representing the physics of adding layers of material, since the physical layers are never really defined.

The micro/macro distinction you make, and the distinction between R and T modes, seem a bit artificial. According to that, it doesn't seem to make any difference whether the top (layerable) BSDF has a T mode, so what does the mode selection do in this case -- just nothing, because this is a special "micro"-configuration case, so the T mode is implicit? In reality the mode doesn't matter because the dielectric interface is just the top of a coat, and both reflects and transmits into the coat, causing a bunch of physical effects which are not captured by the albedo-scaling formula.

Also, presumably a BSDF is "layerable" in the MaterialX terminology only because it can transmit some light through into the medium beneath the interface. It could still make sense conceptually, though, to add e.g. a layer of opaque metal to a substrate; the layer would just be a no-op when computing the BSDF -- but you might, for example, want to have a partially present metal layer. So this "layerable" concept seems suspect as well.

jstone-lucasfilm commented 3 months ago

@portsmouth Keep in mind that layer(top, bottom) is a fundamental feature of the OSL 1.13 and MDL 1.8 shading languages, and is not unique to MaterialX or Autodesk Standard Surface. I'm open to discussing new layering ideas in a small working group, but we should include representatives of the OSL and MDL teams at a very minimum, since we'd be discussing proposed replacements for their current syntaxes for closure layering.

portsmouth commented 3 months ago

@fpsunflower Interested in what your take is on this discussion about the layering representation. Is there room for improvement, or you think it's fine as is?

niklasharrysson commented 3 months ago

@portsmouth I understand what you're saying Jamie, but we're looking at this from different angles. I'm afraid we're trying to solve different problems even.

I think the problem you are looking at is: how can we create a material description, or shading model (OpenPBR), that is 100% grounded in physics? It should obey the laws of physics at all times, and the concepts and components that compose this model must have a deeper significance that one can reason about and understand from pure physics.

And I love that! It would be a great way of thinking if we were creating a new material description from scratch.

But unfortunately that's not the problem we were faced with back in the early days of MaterialX. We didn't have the luxury of creating something new where we could just implement it all ourselves in C++. Our number one design goal was portability. It had to be supportable by as many of the existing systems as possible. And back then there was no ASWF where teams could meet, discuss and collaborate. On the contrary, everyone was doing their own thing and there were even competing standards. So we had to find common ground in that environment.

That's why the layering model mainly boils down to linear combinations of BSDFs in the generated code, because that's what we had to work with. But I don't agree we should have exposed this in the description, for example by exposing the albedo functions. We found it better to keep it abstract so that the underlying implementation could evolve over time if new layering methods were developed.

This being said, we're in a different environment nowadays. So as already discussed, let's see how we can make this better :)

As Jonathan mentioned we need to bring in other teams to this discussion, since portability is still the number one goal. But it might be good to have some type of proposal carved out first, to have an idea of what direction we want to go. The slab concept is very appealing so it might be a good start to look at if that concept can be transferred over.

portsmouth commented 3 months ago

@niklasharrysson I wouldn't say I'm suggesting we need the model to be 100% grounded in physics, as that's unrealistic in CG. At the end of the day, it's a shader that returns an RGB value...

Actually I'm pretty aligned with this:

We found it better to keep it abstract so that the underlying implementation could evolve over time if new layering methods were developed.

Totally agreed that MaterialX should provide an abstract description of the material structure. I'm just saying that the current representation doesn't quite achieve that fully (especially with regard to the representation of volumes), though obviously it's a workable system.

As Jonathan mentioned we need to bring in other teams to this discussion, since portability is still the number one goal. But it might be good to have some type of proposal carved out first, to have an idea of what direction we want to go. The slab concept is very appealing so it might be a good start to look at if that concept can be transferred over.

Sounds good. Yes, I hope it's possible to make progress on getting the slab concept into the system, as I think it would clarify all these issues.

jstone-lucasfilm commented 3 months ago

@niklasharrysson @portsmouth

Within the time frame of the MaterialX 1.39 release, I believe one of the most valuable steps will be to clarify the language in the specification for the MaterialX Physically Based Shading Nodes, addressing any cases where the current language is ambiguous or unclear.

This will become increasingly important in the context of the Alliance for OpenUSD (AOUSD), a project that aims to integrate the MaterialX specifications in a future ISO standard for OpenUSD.

fpsunflower commented 3 months ago

Great discussion. It feels to me like we are up against the classic law of leaky abstractions. We are hoping that OSL or MaterialX can provide this abstract notion of "layering", while the details of what can be layered with what "leak" into what constitutes valid input and constrain what the renderer can do in practice.

As Jamie said, the current layer(A,B) in OSL/MaterialX is mostly shorthand for A+(1-albedo(A))*B. Except we are also saying we want special cases for layer(BRDF,VDF) or layer(THINFILM,BRDF), etc., which are a bit ugly to implement in practice. Speaking only for OSL's testrender, we definitely haven't tackled these cases yet.

I think what Jamie is getting at is that we would like to have a more carefully thought-out way of describing each operation, possibly involving a better type system for the individual bits (BRDF vs VDF vs slabs, etc.). I'm all for that, and I think it will ultimately help make the implementations less ambiguous and thus hopefully easier to implement. It's tricky to think through all the cases and all the implementation details though.

Then, as Niklas is saying, we each seem to be thinking about a slightly different set of constraints here. It might be worth restating the end goal of what we hope MaterialX / OSL can do and what legacy constraints we have.

The hope from the OSL side was to provide a standard set of closures that all renderers would be comfortable implementing and supporting, as a way for artists to describe different material models. This includes being able to implement a model like Autodesk Standard Surface, Adobe Standard Surface and now OpenPBR. I believe MaterialX is just the "nodegraph" form of such an OSL shader. From the OSL side, these closures are still very new and no one (to my knowledge) has actually implemented all of them in a production system. So we are very much open to refining them further.

I think the set of closures we have right now can do what we want, albeit not super elegantly. As discussed above, there are definitely some combinations of closures that won't make sense and would lead to undefined behavior (which renderers would have to handle somehow).

Either we somehow formally describe what combinations are allowed, or we go back to the drawing board to see if we can engineer a set of building blocks that composes more naturally with fewer edge cases. I think this is what Jamie is proposing with the Slab idea, which feels like a promising direction.

I haven't studied MDL enough to know how it has tackled these issues (I know it at least has a slightly richer type system, in terms of making a distinction between BSDF, VDF, EDF, etc.). I am curious how it has wrangled with these topics.

jstone-lucasfilm commented 3 months ago

Building upon the thoughts from @fpsunflower above, I'd strongly recommend a first step where we resolve any known edge cases in the current rules for MaterialX/OSL/MDL layering, clarifying these cases to the best of our ability in the MaterialX specification.

In addition to providing clear rules for developers in the short term, this should provide a robust platform from which we can propose new layering abstractions in the future (e.g. slabs), with the confidence that we can accurately upgrade existing assets based on Standard Surface, glTF PBR, OpenPBR, and other shading models to these new building blocks.

niklasharrysson commented 3 months ago

That's a great analysis Chris (and that blog post was excellent).

Jamie and I had a chat offline today to go over this some more. @jstone-lucasfilm We all agree that for the short term (1.39) we'll just update the language in the spec, to explain the concepts and constraints better. Jamie will put in a new GitHub issue for this work.

We briefly looked at the slab concept to see what it might entail (for the longer term). From a specification point of view, some smaller changes might suffice. For example, looking at the difference between slabs and what we have today, it's mostly syntactical changes.

The following is pseudo code for layering two slabs, for example a dielectric coat layered over a dielectric bulk:

Slab coat = Slab(bsdf, vdf)
Slab bulk = Slab(bsdf, vdf)
Slab mtrl = layer(coat, bulk)

In current MaterialX syntax this is equivalent to:

BSDF coat = layer(bsdf, vdf)
BSDF bulk = layer(bsdf, vdf)
BSDF mtrl = layer(coat, bulk)

So the main difference is the intermediate data type Slab, which would make the graph more concise and remove the overloaded meaning and usage we currently have for BSDFs and the layer operator.
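
Purely as a speculative illustration of that syntactic difference (none of these node or type names exist in MaterialX today), a slab-based graph might read along these lines:

    <!-- Hypothetical future syntax: a dedicated slab type pairing an interface BSDF
         with an interior VDF, so layering always operates on whole slabs. -->
    <slab name="coat_slab" type="slab">
      <input name="bsdf" type="BSDF" nodename="coat_bsdf" />
      <input name="interior" type="VDF" nodename="coat_vdf" />
    </slab>
    <slab name="bulk_slab" type="slab">
      <input name="bsdf" type="BSDF" nodename="bulk_bsdf" />
      <input name="interior" type="VDF" nodename="bulk_vdf" />
    </slab>
    <layer name="mtrl" type="slab">
      <input name="top" type="slab" nodename="coat_slab" />
      <input name="base" type="slab" nodename="bulk_slab" />
    </layer>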

There are obviously details to work out in how this translates to different backends. But if we enforce the same constraints we have today, as a starting point, then this might be an easy task. And we could work on removing the constraints later, as the backends evolve.