McNopper opened 7 years ago
+1 for displacementParallax 😃
I think even the parameters can be reused:
```json
{
    "materials": [
        {
            "displacementParallax": {
                "displacementParallaxFactor": 1.0,
                "displacementParallaxTexture": 0,
                "displacementParallaxOffset": 0.0
            }
        }
    ]
}
```
The factor is also known as the height scale. An offset would also make sense.
@McNopper how eager are you to finish this extension? Should we focus on the lights, environment map, and perhaps static lightmap extensions before this one to best support the Blender exporter?
Wouldn't it make sense to allow 3D displacement as well, in which case the RGB channels are used, and not just R?
@pjcozzi It is also implemented experimentally in the Blender exporter. However, I suggest we focus on and finalize the light and common material extensions before the end of July. Regarding the displacement, environment, lightmap, and possibly other extensions, we should give ourselves some time and e.g. finalize them before GDC 2018. For the "postponed" extensions, we can still implement them in an experimental way in the Blender exporter.
For the common material and light extensions, I recommend we fully finalize them, including the schema and so on.
@Anteru The original idea behind having only the R channel was to displace the geometry along the normals. One reason for also postponing this extension is that we could encode the displacement in a way that lets the engine decide whether it wants to "really" displace the geometry using tessellation or "just" approximate it using e.g. parallax mapping. Regarding the RGB channels: do you want to use them as the displacement factor for each axis?
Also, since many more textures would be involved, we should give ourselves some time to figure out how to pack them. Furthermore, we should evaluate how the major engines normally "expect" the displacement data.
Yes, simply use the three channels as the displaced position, as is common for tools like zBrush or Mudbox; see here: http://docs.pixologic.com/user-guide/3d-modeling/exporting-your-model/vector-displacement-maps/ It looks like Unreal Engine 4 also supports this via a "world space vector displacement map".
I see the problem that there's no fallback possible to parallax mapping. Maybe the displacement extension could specify fallbacks, i.e. vector displacement, then heightmap, parallax, and eventually a bump map?
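To make the vector-displacement idea above concrete, here is a minimal sketch (Python; the function name and the assumption that the RGB channels are stored remapped to [0, 1] are illustrative only, since actual vector displacement maps are often stored as signed floating-point data):

```python
def vector_displace(position, texel_rgb, scale=1.0):
    """Vector displacement: the RGB texel encodes an offset vector,
    displacing the vertex along all three axes, not just the normal
    (cf. zBrush/Mudbox vector displacement maps).

    Assumes each channel is stored in [0, 1] and must be remapped
    to [-1, 1]; float textures may skip this remap entirely.
    """
    offset = tuple(2.0 * c - 1.0 for c in texel_rgb)
    return tuple(p + o * scale for p, o in zip(position, offset))
```

A mid-gray texel (0.5, 0.5, 0.5) then means "no displacement", which is the usual convention for 8-bit vector displacement maps.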
Okay, got it. I think we should probably go for a separate geometry-displacement version and a parallax/bump-map displacement version. @pjcozzi Should we discuss this next meeting? Maybe Matthäus (Anteru) can also join the next Khronos telephone conference.
> @pjcozzi Should we discuss this next meeting?
Sure.
> Maybe Matthäus (Anteru) can also join the next Khronos telephone conference.
That would be great. AMD is in Khronos. Email me if you would like to join us next Wednesday, pjcozzi@gmail.com
Hello everyone. Thanks for your efforts. I can't wait for glTF displacement maps :o)
Hi folks; I read through this thread but I'm still a bit confused -- what's the status of bump maps in glTF 2.0? Supported at all, via extension or in core? I see normal maps but not bump. Is the expectation that exporters will run a gradient pass on bump maps to produce normal maps?
@garyo Currently yes, tangent-space normal maps are expected in core. One of the goals of glTF is to be a runtime delivery format, and converting bumps to normals at runtime is not a burden we want to place on viewers and renderers. So, it's expected from the content pipeline.
That said though, a heightmap/bumpmap could be useful to realtime shaders looking to implement parallax occlusion. So I think it's worthwhile to add it as an extension.
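The "gradient pass" that exporters are expected to run can be sketched as follows (a minimal illustration in Python with hypothetical names; real toolchains operate on full images and handle wrap modes, tiling, and precision more carefully):

```python
def height_to_normal(height, x, y, strength=1.0):
    """Convert one texel of a bump/height map into a tangent-space
    normal using central differences.

    height: 2D list of floats in [0, 1], indexed as height[row][col].
    strength: controls how strongly slopes tilt the normal.
    """
    h, w = len(height), len(height[0])
    # Central differences, clamping at the image borders.
    dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
    dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
    # The normal is perpendicular to the height gradient;
    # +Z points out of the surface.
    nx, ny, nz = -dx * strength, -dy * strength, 1.0
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

A flat region yields (0, 0, 1), i.e. the unperturbed surface normal; the result is typically remapped from [-1, 1] to [0, 1] before being stored in an 8-bit normal map.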
Any news on this feature? Would love to export the heightmaps to show them on facebook with GLTF! Thank you all for the awesome work!
> Any news on this feature?
Issue #1442 is where we're gathering up feature requests for an upcoming upgrade (or refactoring?) of the PBR material in general. The issue looks like it's in need of some updating though, hopefully that will happen someday soon.
Should this extension follow the same naming convention as the normalTextureInfo schema? https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/schema/material.normalTextureInfo.schema.json
E.g. factor -> scale
The question is whether the schema should say that the allowed scale is [0, 1] or [-1, 1], the latter allowing for a "flipped" displacement texture.
Thus either

```json
"scale": {
    "type": "number",
    "description": "A scalar multiplier controlling the amount of displacement.",
    "default": 0.0,
    "minimum": 0.0,
    "maximum": 1.0,
    "gltf_detailedDescription": "..."
},
```

or

```json
"scale": {
    "type": "number",
    "description": "A scalar multiplier controlling the amount of displacement.",
    "default": 0.0,
    "minimum": -1.0,
    "maximum": 1.0,
    "gltf_detailedDescription": "..."
},
```
Yet another thing that could be there, but probably not 😄: maybe the schema could also optionally specify a minimum number of layers as a suggestion from the glTF author (artist), which the end consumer may or may not take into account in their shaders. It is basically a hint from the author that, for steep/occlusion parallax, the material does not look like crap as long as the shader implements at least this many iterations. Minimum iterations are tied to the material, because some materials really need many iterations to look good, while others are so "steep" by nature that they don't. "Look good" = as the artist intended. This is indeed very vague (hence the "probably not"), as it really depends on the implementation; there is no steep/occlusion parallax standard per se, so the repeatability needed for this to work is compromised.
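To illustrate why the layer count matters, here is a minimal steep-parallax sketch (Python, hypothetical names; real implementations run per-fragment in a shader and usually add a refinement step such as parallax occlusion's final interpolation):

```python
def steep_parallax_uv(uv, view_dir, sample_height, scale=0.05, num_layers=8):
    """Steep parallax mapping: march through num_layers depth slices
    along the view ray until the ray dips below the height map.
    More layers means a more accurate intersection, which is exactly
    what a 'minimum layers' hint would be suggesting.

    view_dir: normalized tangent-space view vector with z > 0.
    sample_height: function (u, v) -> depth value in [0, 1].
    """
    layer_depth = 1.0 / num_layers
    # Per-layer UV step: total shift of (view.xy / view.z) * scale,
    # spread across all layers.
    du = -view_dir[0] / view_dir[2] * scale / num_layers
    dv = -view_dir[1] / view_dir[2] * scale / num_layers
    u, v = uv
    depth = 0.0
    while depth < 1.0 and sample_height(u, v) > depth:
        u += du
        v += dv
        depth += layer_depth
    return (u, v)
```

With too few layers the intersection is found a whole slice too early or too late, which shows up as visible stair-stepping, hence the idea of an author-supplied lower bound.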
So is there a KHR_materials_displacement extension?
This proposal has been on the backburner for a while. I think if someone could contribute some nice sample assets that make great use of displacement maps, that could help generate renewed interest.
Personally I'm not sure displacement maps are a good fit for glTF where the vertex data is intended to be fully triangulated and GPU-ready at delivery. One could easily apply a static displacement map to geometry that was delivered at full resolution, but why not just bake those displacements into the geometry in that case? I don't see the advantage of asking the runtime to do that, unless it's going to animate it somehow. The better case for displacement maps seems to be if the geometry isn't already shipped at final resolution, for example subdiv meshes. But glTF doesn't do those yet, so that should be worked on before displacement maps, I would guess.
There's a different story available with parallax occlusion. That's a much more complex shader, but offers what looks like false geometry without adding any vertices or subdivisions. I would think that should be a much more tempting target for glTF, but so far interest seems very limited.
> There's a different story available with parallax occlusion.
Well, it's all about definitions I guess, where nobody has agreed on them strictly 😄
One could argue that parallax occlusion or parallax mapping is one of the displacement-mapping subtechniques. One has to have _some kind of_ KHR_materials_displacement extension which enables both.
Wikipedia does not agree with that subtechnique part and separates them (that is, it treats parallax as a separate beast from displacement):
https://en.wikipedia.org/wiki/Displacement_mapping
https://en.wikipedia.org/wiki/Parallax_mapping
However, practically speaking, a glTF model should contain a texture (or a channel of a texture) that holds the height-map data, and that's it for both cases. Wikipedia agrees on that:

> Displacement mapping: (..) using a height map to cause an effect (..)
> Parallax mapping: (..) the value of the height map at that point (..)

and so does my practical knowledge.
Then it is up to shader implementation what you do with this data texture that ships with glTF model.
EDIT: Thus it could actually be named "KHR_materials_heightmap", holding the texture data and the scalars discussed above 😉
I just stumbled across this, and I'm interested because as an artist I often find bump maps preferable to normal maps, and it seems unnecessarily lossy and convoluted to convert to normal maps when both ends using glTF (like Blender as a modeler and the game engine used to render) support bump maps.
Therefore, I second @kroko's idea of extending this to generic height maps, and I suggest a type parameter plus type-specific options. Whether parallax, displacement, and bump map are all required types for a conforming implementation I don't really mind, but it would be nice if it could accommodate more types in the future at least.
A bit like this:
```json
{
    "materials": [
        {
            "heightmap": {
                "heightmapTexture": 0,
                "heightmapType": "displacement",
                "displacementGeometryFactor": 1.0, /* only present for type "displacement" */
                "displacementGeometryOffset": 0.0 /* only present for type "displacement" */
            }
        }
    ]
}
```
Or, in case some wanted to combine multiple types of height maps (although I personally wouldn't be interested in that, and it does seem a bit overcomplicated), maybe this should be the spec's structure instead:
```json
{
    "materials": [
        {
            "heightmaps": [
                {
                    "heightmapTexture": 0,
                    "heightmapType": "displacement",
                    "displacementGeometryFactor": 1.0, /* only present for type "displacement" */
                    "displacementGeometryOffset": 0.0 /* only present for type "displacement" */
                }
            ]
        }
    ]
}
```
> I'm interested because I find bump maps preferable over normal maps quite often as an artist, and it seems just unnecessarily less accurate and convoluted to convert to normal maps when both ends using GLTF (like, blender as a modeler, and the game engine used to render) support bump maps.
Not to dissuade anyone from using the format, but one of the primary goals of glTF is to be a "ready to render" format, a last-mile format, that can come across a possibly low-bandwidth connection to a low-powered mobile device and be sent to the GPU with hardly any further processing. To that end, the burden of converting all bump maps to tangent-space normal maps is intentionally placed on the glTF creation/export toolchain, not on the receiving client implementation.
Along the same vein, forms of displacement that inherently require the client to construct additional vertices are going to be a more challenging fit for the glTF ecosystem than other forms that can be implemented with just a height map and a special shader without new vertices.
That's not to say it will never happen. Low-end clients today are a lot more powerful than low-end clients of years past, making things like subdivision surfaces a desirable feature for a glTF extension. Once a subdivision extension gains traction, I would expect a traditional vertex-displacement map to be right on its heels, because now one wants to fine-tune the new vertex locations. Such models could even claim to be more detailed over lower-bandwidth connections, at the expense of some processing power needed on the client.
Aren't bump-map shaders possibly even less computationally expensive, and certainly less memory-expensive, than normal maps? If you argue for low-spec devices, I think it would make even more sense to expand this. Then again, just adding bump maps while leaving out vertex displacement also seems silly, given that the data requirements are so similar.
And I get the format-complexity argument. Maybe it shouldn't be added for that reason; it's good to keep the format lean. But if you argue with low-spec devices, adding bump maps makes even more sense.
@etc0de I don't think that's true anymore. Bump maps are a bit smaller on disk, and easier to create by hand, but the advantages pretty much end there. From the Unity docs:
> Modern realtime 3D graphics hardware rely on Normal Maps, because they contain the vectors required to modify how light should appear to bounce of the surface. Unity can also accept Height Maps for bump mapping, but they must be converted to Normal Maps on import in order to use them.
The cost of computing these vectors from a bump map at runtime is higher than using normal maps, which already encode exactly what the renderer needs and have higher precision. The filesizes are not so different with GPU compressed formats, either.
^All this applies to bump maps, but I have no particular opinion on parallax or displacement maps.
Ah interesting, fair enough. In that case it only makes sense from the pipeline-convenience angle. (I just find bump maps easier to work with for some uses, and would rather not convert them when handing them to my OpenGL shader. glTF is also useful for exporting and reimporting in other modelers, where the bump maps would then also get lost.)
I am not able to export a glTF from Blender and get the vector displacement maps to work when I render the exported glTF file using model-viewer. Please help.
@shkr glTF does not support displacement maps at this time. This thread is exploring possible changes to support that, but for now you will need to either use a normal map or bake the displacement to the model's geometry before exporting, or load the displacement map separately and add it to the model in your viewer. If you're not sure how to do that, https://blender.stackexchange.com/ may be able to help.
Hello! I was surprised when I read the spec and found that the material section has no support for displacement maps. I'll probably go with loading it "on the side", but that means I'll have to wrap your glTF format in a... wrapper format: one that points at a glTF file plus some metadata on which materials to attach a displacement map to. Not a big deal, I suppose, but most PBR texture resources provide a displacement map.
@Elogain Hi, rather than adding a "wrapper" format around glTF, I would encourage you to design your own extension using a vendor prefix, following the same basic template as some of the existing `*_materials_*` extensions, such as clearcoat or sheen.
Following this pattern, your file will still be compatible with the rest of the glTF ecosystem. And if the extension gains popularity, the folks here can look into possibly upgrading it to a full `KHR_materials_*` extension.
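For illustration, a minimal vendor extension following that pattern might look like the sketch below. The `ACME_materials_displacement` name and its properties are purely hypothetical, not an existing extension; only the surrounding `extensions` / `extensionsUsed` structure is standard glTF:

```json
{
    "materials": [
        {
            "extensions": {
                "ACME_materials_displacement": {
                    "displacementTexture": { "index": 0 },
                    "displacementFactor": 1.0,
                    "displacementOffset": 0.0
                }
            }
        }
    ],
    "extensionsUsed": ["ACME_materials_displacement"]
}
```

Viewers that don't recognize the extension will simply ignore it, which is what keeps such files compatible with the rest of the ecosystem.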
I recognize that single-channel displacement textures are the standard choice here.
But if, in the future, we provide some mechanism for procedural textures (see https://github.com/KhronosGroup/glTF/issues/1889), the existence of a VEC3 displacement texture extension would open up a lot of really interesting animation opportunities.
For examples, see Houdini VAT animations: https://github.com/keijiro/HdrpVatExample
This is at best a long-term direction; it's a complex area, but I do think the potential integration of procedural textures via node-based material graphs (see: MaterialX Standard Nodes) is a promising direction to explore.
@donmccurdy There are more uses you can get out of a height/bump channel, such as calculating more accurate occlusion and shadowing. And if you reconstruct the normal from height, you might get a lot of mileage out of a single height-channel texture, while also avoiding having to sample the normal and occlusion textures.
> The cost of computing these vectors from a bump map at runtime is higher than using normal maps
That's very dependent on what a particular renderer is bound by. If you're bandwidth-bound, recomputing the normal from height can be faster, and even more so if you avoid sampling occlusion.
I'd love to see this extension move forward a bit 😄
Given some of the other material extensions that exist and the potential uses, it could be nice. There are workflows such as the Substance ones that also often like to export height/bump/displacement/choose-your-own-name (https://forum.substance3d.com/index.php?topic=26218.0).
Hello. I'm shamelessly but politely asking what the state of this proposed extension is. I'm starting to use glTF in my pipeline, and a lot of the 3D assets I produce use parallax displacement; it's the only map that I need to manually reassign when importing glTF files. I feel there are a lot of artists out there facing the same situation as me. I'm not trying to rush things, just trying to understand whether glTF will support some sort of displacement maps in the future.
Valid to ask. Let me trigger the 3D Formats group to revisit the extensions.
Am I correct in understanding that parallax maps are a different way of expressing the same information and behaviors that a normal map or bump map provides? Or does this refer to vertex displacement? It seems like people may be expecting more than one thing from this extension, and one extension probably should not be all of those things.
EDIT: Assuming this Parallax Mapping article refers to the same thing, I think I understand now. In any case we do need to be specific about what extension this is; I don't think we should ask runtimes to support many ways of doing the same thing. E.g. bump maps (if used only for shading) should be converted to normal maps.
Hmmm, I think a displacement/height map is required, e.g. https://learnopengl.com/Advanced-Lighting/Parallax-Mapping
"Displacement mapping" and "Parallax Mapping" are two different implementation techniques that both take the same input: A height map (sometimes called a bump map), meaning a greyscale or single-channel image indicating how distant a given texel is from the surface along its normal. Typically these are not combined with normal-mapping.
While the goal is the same (visually displace surface details), the end result is different:
Displacement Mapping modifies vertex geometry, and typically requires extra tessellation to accommodate smaller details. In some cases, implementations are expected to subdivide the base geometry to reach the desired level of detail. This could be made to work hand-in-hand with a subdivision surface extension. The result, once such tessellation is applied, is the same as if the model shipped with those details in the first place.
Parallax Mapping requires no extra subdivision or additional vertices, and can show what appears to be real depth per texel even within a single polygon. However, when viewed edge-on, the silhouette of the model has no additional details beyond the base vertices. The detail is only visible on the viewer-facing side, not the silhouette edges. Also, the fragment shader can be fairly expensive to run.
I now think that in the long run, as hardware improves, displacement mapping may "win" over parallax mapping, for most cases. So we might only need a displacement extension, but we should consider whether that extension is asking viewers to subdivide or tessellate the base mesh, and whether that can be combined with subdivision surfaces.
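For readers less familiar with the technique, the core of basic (single-sample) parallax mapping can be sketched in a few lines (Python, illustrative names only; real implementations run per-fragment in a shader, and steep/occlusion variants add an iterative search):

```python
def parallax_uv(uv, view_dir, height, scale=0.05):
    """Basic parallax mapping: shift the texture coordinate toward
    the viewer in proportion to the sampled height, so the surface
    detail appears to have depth without adding any vertices.

    view_dir: normalized tangent-space view vector with z > 0.
    height: sampled height value in [0, 1].
    """
    u, v = uv
    # Dividing by view_dir.z makes the shift grow at grazing angles,
    # which is where the parallax effect is most visible (and where
    # this single-sample approximation also breaks down first).
    p = scale * height / view_dir[2]
    return (u - view_dir[0] * p, v - view_dir[1] * p)
```

This is why no extra vertices are needed: only the texture lookup position changes, so the silhouette remains that of the base geometry, exactly as described above.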
What is the status on this? Seems like discussion hasn't moved in a while here
+1, would like to see how's everyone's opinion shifting on this topic
This topic has been going on for 3 years...
I'm moving forward with this and will start using it soon: https://www.ultraengine.com/community/blogs/entry/2818-ultra-engine-gltf-extensions/
"ULTRA_material_displacement": {
"displacementTexture": {
"index": 3,
"offset": -0.035,
"strength": 0.05
}
}
Regarding displacement, I suggest the following extension. This one is for geometry displacement; if another displacement technique needs to be defined, e.g. parallax, a displacementParallax extension could be defined.
From the texture, the R channel is used. The factor multiplies the channel; if no texture is present, the factor is taken as-is. Finally, the offset is added to the resulting value.
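A minimal sketch of that sampling rule (Python; the function name and argument layout are hypothetical, and the texture's R channel is assumed to be normalized to [0, 1]):

```python
def displaced_position(position, normal, texel_r, factor=1.0, offset=0.0):
    """Displace a vertex along its normal, following the rule above.

    texel_r: the R channel of the displacement texture in [0, 1],
             or None when no texture is present.
    """
    # The factor multiplies the sampled channel; when no texture is
    # present, the factor itself is taken as the displacement value.
    value = factor if texel_r is None else texel_r * factor
    # Finally, the offset is added.
    value += offset
    return tuple(p + n * value for p, n in zip(position, normal))
```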