@ptrgags @sanjeetsuhag Here's a summary of where I left off in the Model.js refactor, file by file. Hopefully this will clarify the direction I was going. At least you'll know which files to look at and which to ignore. Let me know if you have any questions, even about the smallest details (because there are a lot of important details).
https://github.com/CesiumGS/cesium/tree/model-loading (follows option 1 above)
`CustomShader.fromShaderString` takes a shader string created by the user and generates the full custom shader code. This is not the final shader used by `Model`, just a piece of it.

The shader uses four input structs (`Input`, `Attribute`, `Uniform`, and `Property`) and one output struct (`Output`):

- `Input` - contains well-known inputs to the shader like `input.position`, `input.normal`, etc. The full list of inputs is in `InputSemantic.js`. These are derived from vertex attributes, similar to `materialInput.glsl`. @IanLilleyT will be adding more semantics here.
- `Attribute` - these are the raw vertex attributes from `ModelComponents`, like `attribute.POSITION` and `attribute.NORMAL`. There are subtle differences between this and `input`. For example, if the glTF has a `TANGENT` attribute, `attribute.TANGENT` would be a `vec4` (`.w` stores the handedness as defined by the glTF spec) whereas `input.tangent` would be a `vec3`, as there is a separate `input.bitangent` derived from the handedness. This struct is mainly useful for accessing attributes that are not in `input`.
- `Uniform` - has all the user-defined uniforms.
- `Property` - has metadata properties. The code that populates this struct goes outside the custom shader, which I did not start. https://github.com/CesiumGS/cesium/issues/9572 is involved in that.
- `Output` - has three properties that can be set within the shader: `color`, `show`, and `pointSize` (vertex shader only).

`CustomShader.fromShaderString` parses the shader and returns information about it (basically which attributes, uniforms, and properties are used) so that the model can optimize what data it sends to the GPU. It also tells the model whether the custom shader is applied in the vertex or fragment shader. There's a pretty big decision tree for that, and it gets even more complicated in `CustomShader.fromStyle`.
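To make this concrete, here is a hypothetical sketch of the pieces described above (the struct names follow the description, but the exact user-facing signature on the `model-loading` branch may differ):

```js
// Hypothetical sketch only; the uniform/property names are illustrative
const shaderString = `
    void main(Input input, Attribute attribute, Uniform uniform,
              Property property, Output output)
    {
        // Tint by a user uniform; hide features with low metadata intensity
        output.color = vec4(input.normal * 0.5 + 0.5, 1.0) * uniform.tint;
        output.show = property.intensity > 0.5;
    }
`;
const customShader = CustomShader.fromShaderString(shaderString);
// The result would report that this shader uses the "normal" input, the
// "tint" uniform, and the "intensity" property, so the model can upload
// only the data it needs, plus whether it runs in the vertex or fragment stage.
```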
`CustomShader.fromStyle` takes a `Cesium3DTileStyle` and converts it into a custom shader. Actually, it doesn't always create a custom shader; sometimes it determines that CPU styling is better (like when string properties are used). See the top comment in the code for more details.
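For example (a hedged sketch; the styling expressions are standard 3D Tiles styling language, and the GPU/CPU split follows the description above):

```js
// A style using only numeric properties could compile to a custom shader
const gpuStyle = new Cesium.Cesium3DTileStyle({
    color: "rgb(${intensity} * 255, 0, 0)"
});
const fromNumericStyle = CustomShader.fromStyle(gpuStyle);

// A style comparing string properties would fall back to CPU styling
const cpuStyle = new Cesium.Cesium3DTileStyle({
    color: "${type} === 'residential' ? color('cyan') : color('white')"
});
```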
There is a long TODO list at the top of the file, but overall this file is nearly complete from my perspective, though I think the API could be organized differently, and the shader structs could be renamed or consolidated in different ways. At some point I'll need to go through the TODO list and make more sense of it.
Related to `CustomShader.js`. Also nearly complete from my perspective. Needs a better name.
This was the first iteration of the shader cache before I went with a different approach. For the most part it can be ignored.
This was the second iteration that was never finished. This file is meant to incorporate a lot of different systems to build the final model shader. I made the most progress on vertex attributes. Probably best to just reference this file rather than build on top of it.
Gets information about the PBR material. Sees what textures, uniforms, and attributes are needed for the shader. This file is pretty close to complete from my perspective. It's a building block for `ModelShaderTemp.js`.
This is `Model.js` 2.0. A lot of the code was moved into other files, but the code and comments for quantized attributes and meshopt are still very relevant.
This needs to be replaced with a shader builder; I started to do that in `ModelShaderTemp.js`. Generally the logic is good, but the new shader builder should support any number of texture coordinate sets, not just `TEXCOORD_0` and `TEXCOORD_1`. The morph targets approach should also be rethought.
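For illustration, a shader builder could generate the declarations for an arbitrary number of sets rather than hardcoding two (hypothetical sketch, not code from the branch):

```js
// Hypothetical sketch: emit attribute/varying declarations for
// TEXCOORD_0..TEXCOORD_(count-1) instead of hardcoding two sets
function generateTexCoordDeclarations(count) {
    let glsl = "";
    for (let i = 0; i < count; i++) {
        glsl += `attribute vec2 a_texCoord${i};\n`;
        glsl += `varying vec2 v_texCoord${i};\n`;
    }
    return glsl;
}
```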
Also needs to be replaced, but the logic is pretty good.
Gathers PBR textures and uniforms and calls `czm_pbrMetallicRoughness` or `czm_pbrSpecularGlossiness`. It can be called from the vertex shader or fragment shader. Good for reference.
@sanjeetsuhag put together a local Sandcastle to see what a very basic `CustomShader.fromString()` example (just setting `output.color` to red) looks like.
EDIT: there's a `Check.typeOf.object()` in `CustomShader` that should be `Check.typeOf.string()`. I just pushed a commit to `model-loading` to fix this.
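For reference, the corrected check using Cesium's `Check` utility (the parameter name here is illustrative):

```js
// Before (incorrect): validates the shader string as an object
// Check.typeOf.object("shaderString", shaderString);

// After: throws a DeveloperError when shaderString is not a string
Check.typeOf.string("shaderString", shaderString);
```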
One thing I noticed is that when I use `input.position` or other inputs in the shader (without a proper primitive), it doesn't throw an error, but the page starts to hang, so we'll need to avoid that in a final design.
There are plenty of other design questions I have from this, but we'll discuss this tomorrow on a call.
I started thinking about ideas for the public interface to custom shaders. I'll provide several options for discussion.
This first option is to have the user define callback functions for the vertex shader and the fragment shader. This is very similar to the approach @lilleyse started. The input to each will be a big, automatically generated struct. The goal here is to abstract away the internal details of the renderer, which can get a bit hairy (especially once we get into GPU styling of metadata).
**Option A1:** This first version even uses automatically generated structs for varyings, which would have to be declared when constructing the shader:
```js
/**
 * // Struct definitions:
 *
 * // Automatically generated from primitive's attributes
 * struct Attribute {
 *     vec3 position;
 *     vec3 normal;
 *     vec2 textureCoord0;
 *     // ...
 * }
 *
 * // Automatically generated from uniform map
 * struct Uniform {
 *     float time;
 * }
 *
 * // Automatically generated from 3D Tiles batch table,
 * // 3DTILES_metadata or EXT_feature_metadata.
 * // If a property is used in the shader body but not supported
 * struct Property {
 *     float intensity;
 *     // ...
 * }
 *
 * struct VertexInput {
 *     Attribute attribute;
 *     Uniform uniform;
 *     Property property;
 * }
 *
 * // Automatically generated from varying map
 * struct Varying;
 *
 * struct VertexOutput {
 *     vec4 position;   // gl_Position
 *     float pointSize; // gl_PointSize
 *     Varying varying;
 * }
 */
// ShaderToy-esque style function abstracts away internal rendering details
// Note: CesiumJS still uses ES5 internally, but in these usage examples I'm
// using ES6 syntax for brevity.
const vertexShader = `
  float wave(float time) {
      return 0.5 * sin(2.0 * czm_pi * 0.001 * time);
  }

  void vertexMain(in VertexInput input, out VertexOutput output)
  {
      vec3 position = input.attribute.position;
      position.z += wave(input.uniform.time);

      // czm_ built-ins are available
      output.position = czm_modelViewProjection * vec4(position, 1.0);

      output.varying.uv = input.attribute.textureCoord0;
      output.varying.normal = input.attribute.normal;
      output.varying.color = input.attribute.color;
      output.varying.secondaryColor = input.attribute.secondaryColor;
  }
`;
```
```js
/**
 * struct Uniform;  // same as in vertex shader
 * struct Property; // same as in vertex shader
 * struct Varying;  // same as in vertex shader
 *
 * struct FragmentInput {
 *     Varying varying;
 *     Uniform uniform;
 *     Property property;
 * }
 *
 * struct FragmentOutput {
 *     vec4 color;
 *     bool show;
 * }
 */
const fragmentShader = `
  void fragmentMain(in FragmentInput input, out FragmentOutput output)
  {
      vec3 color1 = input.varying.color;
      vec3 color2 = input.varying.secondaryColor;
      vec3 plusZ = vec3(0.0, 0.0, 1.0);
      vec3 color = mix(color1, color2, dot(input.varying.normal, plusZ));
      output.color = vec4(color, 1.0);
      output.show = input.property.intensity > 0.6;
  }
`;
```
The corresponding setup code looks like this:
```js
const startTime = performance.now();

// THREE.js-style uniforms. Include the type so we don't have to
// infer this
const uniforms = {
    time: {
        value: startTime,
        type: UniformType.FLOAT
    }
};

// TODO: Should we declare varyings or just require the user to do so?
// Varyings don't need a value, but still are declared;
// the caller is responsible for setting these in the vertex shader and
// reading them in the fragment shader
const varyings = {
    uv: VaryingType.VEC2,
    normal: VaryingType.VEC3,
    color: VaryingType.VEC3,
    secondaryColor: VaryingType.VEC3
};

const shader = new CustomShader({
    uniforms: uniforms,
    varyings: varyings,
    vertexShader: vertexShader,
    fragmentShader: fragmentShader
});
```
**Option A2:** This is mostly the same as option A1, but now the user defines the varyings themselves. This is what most custom shader APIs do. It also means that no varyings need to be declared in JS, which is a nice benefit.
```js
/**
 * // Note the lack of Varying
 * struct VertexOutput {
 *     vec4 position;   // gl_Position
 *     float pointSize; // gl_PointSize -- used with gl.POINTS only
 * }
 */
// ShaderToy-esque style function abstracts away internal rendering details
const vertexShader = `
  // user is responsible for defining varyings and making sure they match
  // from vertex to fragment shader
  varying vec2 v_uv;
  varying vec3 v_normal;
  varying vec3 v_color;
  varying vec3 v_secondaryColor;

  float wave(float time) {
      return 0.5 * sin(2.0 * czm_pi * 0.001 * time);
  }

  void vertexMain(in VertexInput input, out VertexOutput output)
  {
      vec3 position = input.attribute.position;
      position.z += wave(input.uniform.time);

      // czm_ built-ins are available
      output.position = czm_modelViewProjection * vec4(position, 1.0);

      v_uv = input.attribute.textureCoord0;
      v_normal = input.attribute.normal;
      v_color = input.attribute.color;
      v_secondaryColor = input.attribute.secondaryColor;
  }
`;
```
```js
/**
 * // Note the lack of Varying
 * struct FragmentInput {
 *     Uniform uniform;
 *     Property property;
 * }
 */
const fragmentShader = `
  varying vec2 v_uv;
  varying vec3 v_normal;
  varying vec3 v_color;
  varying vec3 v_secondaryColor;

  void fragmentMain(in FragmentInput input, out FragmentOutput output)
  {
      vec3 color1 = v_color;
      vec3 color2 = v_secondaryColor;
      vec3 plusZ = vec3(0.0, 0.0, 1.0);
      vec3 color = mix(color1, color2, dot(v_normal, plusZ));
      output.color = vec4(color, 1.0);
      output.show = input.property.intensity > 0.6;
  }
`;
```
```js
const startTime = performance.now();

// THREE.js-style uniforms. Include the type so we don't have to
// infer this
const uniforms = {
    time: {
        value: startTime,
        type: UniformType.FLOAT
    }
};

// Note the lack of varyings this time.
const shader = new CustomShader({
    uniforms: uniforms,
    vertexShader: vertexShader,
    fragmentShader: fragmentShader
});
```
**Option A3:** One option we briefly considered is to have methods to declare the types of uniforms before attaching the shader to the `Model`. However, we don't think this is good because it's too easy to call the methods in the wrong order. Passing things into the constructor would be better.
```js
// this time create the shader first
const shader = new CustomShader({
    vertexShader: vertexShader,
    fragmentShader: fragmentShader
});

// declare uniforms before attaching to a primitive
const startTime = performance.now();
shader.declareUniform(UniformType.FLOAT, "time", startTime);

// now we can pass the shader to a Model or Tileset
```
**Option A4:** Instead of the above callback method, we could have the user define a whole shader: attributes, uniforms, and all. Most libraries do this, and it gives the user maximal control. However, there are some big caveats:

- We would have to parse the user's shader, renaming `main() -> xxxxMain()` and inserting a new main function to wrap it. It works but is not very elegant.

**Option B1:** This is the simplest option: pass the shader in once at the constructor.
```js
// example construction via entities
viewer.entities.add({
    model: {
        customShader: shader
        //...
    }
    //...
});

// creating a Model directly
const model = new Model({
    //...
    customShader: shader
});

// Constructing a tileset. This shader will be propagated
// from Cesium3DTileset -> Cesium3DTile -> Cesium3DTileContent -> Model
const tileset = new Cesium3DTileset({
    //...
    customShader: shader
});
```
**Option B2:** Another method is to not bog down the constructor with more options (models and tilesets already have a lot) and instead set the custom shader afterwards. This would also imply the custom shader should be hot-swappable. (Though I think existing styles work like this?)
```js
const entity = viewer.entities.add({
    //...
});
entity.customShader = customShader;

const model = new Model(/*...*/);
model.customShader = shader;

const tileset = new Cesium3DTileset(/* ... */);
tileset.customShader = shader;
```
**Option B3:** We could also do both.
This is a pretty straightforward option: have methods on the shader to update uniforms on the fly. This is similar to how `p5.js` does this. Simple and gets the job done.
```js
function update() {
    // p5.js-style update functions. Uniforms must match one declared
    // in the constructor
    shader.setUniform('time', performance.now() - startTime);
}
```
These methods would only work for setting variables declared at shader creation time.
Moot point for now since CesiumJS still doesn't support ES6 features, but if we did have things like `Proxy`, we could make the updates more natural (albeit perhaps too magical):
```js
function update() {
    shader.uniforms.time = performance.now() - startTime;
}
```
One thing we considered was whether to allow setting additional attributes at runtime beyond those in the glTF itself. However, we want to avoid this for a couple of reasons:
Another detail is that attributes in a glTF use `SCREAMING_SNAKE_CASE`, which can be cumbersome to look at. We might want to provide rules for automatically converting variable names to `camelCase` equivalents, or provide a method for aliasing attributes.
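For illustration, one possible conversion rule (hypothetical sketch; the real rules would need careful design, e.g. for custom attributes like `_FEATURE_ID_0`):

```js
// Hypothetical SCREAMING_SNAKE_CASE -> camelCase rule, e.g.
// "POSITION" -> "position", "FEATURE_ID_0" -> "featureId0"
function attributeToVariableName(semantic) {
    const words = semantic
        .replace(/^_/, "") // custom glTF attributes begin with an underscore
        .toLowerCase()
        .split("_");
    return (
        words[0] +
        words
            .slice(1)
            .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
            .join("")
    );
}
```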
Another thing to consider is how this will interact with materials. There are a couple of scenarios:
We might want to make this configurable. We don't want to go to the complexity of a full node graph, but we could certainly select between these three methods.
CC @lilleyse, @sanjeetsuhag
This is heading in a great direction. Support for varyings was a key part missing from my original proposal and I'm seeing the benefits of it.
I prefer option A1 over A2. I feel that varyings should be abstracted away since WebGL 1 and 2 have different syntax for them. But that's not the only reason; I just think it goes outside the custom shader sandbox.
Are all varyings user defined? I figured the custom shader would be able to call a glTF PBR material function that handles the plumbing for attributes, textures, etc. used for PBR. Of course the user can write their own PBR code and ignore our implementation if they want, but the one-liner would be super convenient, and it's only convenient if the plumbing happens in the background.
We should also think of ways to simplify the blending process and make it a little less fixed function. Maybe a PBR struct is autopopulated outside the custom shader and the custom shader can modify it before passing it along to the PBR function. (I just read your final section, configurable is better and I think it can be done relatively simply)
I assume `input.attribute.position` is object space, but is it pre or post morph targets / skinning? I think post...

Should `gl_Position` or `discard` in custom shaders be allowed?

Should `VertexOutput` have a `show` property too?

Do you think glTF 1.0 can be decomposed to this system? It'll probably end up looking more like Option A4, but I can hope.
:+1: for Option B3. We definitely want hot-swapping. The constructor option is nice too.
@lilleyse Yeah originally I was leaning towards A2, but after our discussion on Friday, I do think having automatically-generated varyings would be good.
> Are all varyings user-defined?
No, this would just be for the varyings the user wants to define. There would likely be built-in ones.
In regards to the PBR handling, based on discussions on Friday and yesterday, I'm thinking that custom shaders (at least the frag shader) should both take a `Material` as input and output a `Material`. This way, the custom shader can be moved around the pipeline depending on the configuration settings, without having to change the shader code itself.
```glsl
void fragmentMain(in Input input, in Material inMaterial, out Material outMaterial) {
    outMaterial.baseColor = mix(inMaterial.baseColor, input.uniform.tintColor, 0.5);
    outMaterial.normal = perturbNormals(inMaterial.normal);
    // etc.
}
```
This is inspired by Unreal Engine's node editor: defining a material involves connecting nodes to a big struct with `baseColor`, `metallic`, `roughness`, `specular`, etc. However, you can change the resulting behavior by selecting the lighting model. See the Unreal Shading Models documentation page for more information.
As far as lighting goes, I think we should have a built-in lighting stage that comes after all the material/styling/custom shader processing. It would be configurable to have any of the following lighting methods:
- `PBR` for glTF 2.0 materials
- `BLINN`, `PHONG`, `LAMBERT` for `KHR_materials_common` support (glTF 1.0 extension)
- `UNLIT`, which would just render `material.baseColor` directly. This satisfies `KHR_materials_common`'s `CONSTANT` lighting model, and also allows custom shaders to bypass the lighting model if they want to do something custom (e.g. non-photorealistic rendering).

As far as `gl_Position`/`discard` goes, we could either check for them and disallow them, or we can just leave it to the user to use at their own risk. `gl_Position` would most likely get overwritten anyway; `discard` is another story.
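A minimal sketch of what the configurable lighting selection could look like (hypothetical enum mirroring the list above; nothing like this exists on the branch yet):

```js
// Hypothetical lighting-model options
const LightingModel = Object.freeze({
    PBR: 0,     // glTF 2.0 metallic-roughness / specular-glossiness
    BLINN: 1,   // KHR_materials_common (glTF 1.0)
    PHONG: 2,   // KHR_materials_common (glTF 1.0)
    LAMBERT: 3, // KHR_materials_common (glTF 1.0)
    UNLIT: 4    // render material.baseColor directly, bypassing lighting
});
```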
I still need to think about how to handle glTF 1.0/`KHR_techniques_webgl`. While internally it may use the same `Material` struct as output, I don't necessarily think it should be forced into a custom shader function.
Yesterday, I also investigated what other engines do as far as custom shaders, for comparison. I explored a few:

- `Three.js` adds some boilerplate code for helper functions, attributes, uniforms, etc. at the top, but then inserts your code verbatim.
- `p5.js`'s WebGL mode lets you write the entire shader, but then you need to make sure you declare attributes correctly based on what the engine passes in. This is not very well documented.
- `Babylon.js` has a whole node material editor that generates code for you. It creates just one big `main` function where each node writes to a generated variable `outputNN`.

My thoughts on the above: it's nice not to have to write the `main()` function over and over again. We should think about that as we continue to design Model.js.

**One caveat:** @sanjeetsuhag and I realized two things:
- You can't store a `sampler2D` in a struct, so you couldn't do `input.uniform.texture`.
- When a user declares a uniform, this corresponds to a `uniform type identifier;` statement in the shader, so there's not much benefit to putting them into a `Uniform` struct abstraction.

At least for user-defined uniforms (not sure about internal uniforms), I'm leaning towards keeping them top-level instead of adding them to the `Uniform` struct. This is both simpler to implement and simpler to use, as textures and other values would be treated the same way.
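To illustrate the difference, a hypothetical sketch of the generated GLSL for a user uniform named `tint` and a texture named `colorTexture` (names illustrative, not actual generated code):

```js
// Top-level: values and textures are declared and referenced the same way
const topLevel = `
    uniform vec4 tint;
    uniform sampler2D colorTexture;
`;

// Struct-based: plain values can be mirrored into a struct, but if
// samplers can't live there, textures would need a separate code path
const structBased = `
    uniform vec4 u_tint;
    uniform sampler2D u_colorTexture; // would stay outside the struct
    struct Uniform {
        vec4 tint;
    };
`;
```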
@lilleyse what do you think? What other uniforms would go in `Uniform` besides the ones from `CustomShader`?
> you can't store a `sampler2D` in a struct, so you couldn't do `input.uniform.texture`

Is that true? https://stackoverflow.com/a/54959746 shows an example with a `sampler2D` in a struct.
> When a user declares a uniform, this corresponds to a `uniform type identifier;` statement in the shader. So there's not much benefit for putting them into a `Uniform` struct abstraction.

I think the abstraction is useful for consistency with attributes and metadata.
> @lilleyse what do you think? What other uniforms would go in `Uniform` besides the ones from `CustomShader`?

I think it would just be the uniforms set by the user.
Though there might be a need for built-in uniforms like the model matrix or light direction/color. Some of those are accessible as `czm_` properties. I wonder what other engines do here.
Some notes from talking with @lilleyse this morning:

- The vertex shader should be `model -> model` space. A few reasons why:
@ptrgags @lilleyse
Also, if you start moving vertices around drastically in world space, this would require updating bounding volumes significantly. In some cases this could cause performance problems, because it could break the assumptions of a tileset's bounding volume hierarchy (in that parent bounding volumes must completely contain their children).
I tried to update the vertex position in `CustomShader` (raising the vertex coordinates upwards), but it seems that the bounding volumes of the model have not been updated. Another issue is that some areas of the model get culled by the camera. Is there any way to update the bounding volumes? Even if it is not so accurate, it is acceptable; at least the model will not be culled by the camera.
@syzdev I believe a use case like this is beyond the scope of a custom vertex shader. Are you looking to exaggerate tileset height only? In that case, https://github.com/CesiumGS/cesium/issues/8809 is under development now, and would update the bounding volumes.
@ptrgags or @lilleyse Is there anything immediately actionable in this issue? Otherwise I think this should be closed.
@ggetz I agree, this is an old issue. Anything that remains for custom shaders has a more specific issue at this point. Closing.
@ggetz I agree with your opinion that updating the bounding volumes is indeed not something `CustomShader` should be concerned with. But in some special use cases, the position of the vertex may not move in a fixed direction, as in Custom Shaders Models - Expand Model via Mouse Drag, where the model unfolds along the normal direction.

Although it is not related to `CustomShader`, we have to face this issue. Cesium does not seem to expose a method to modify the bounding volumes. Would it work to forcibly modify the parameters of the bounding volumes in the source code? Of course, this is only a temporary method to solve the urgent problem.
We're in the process of refactoring the glTF / model system, and one of the end goals in the next few months is to add support for custom shaders similar to what you'd find in other engines (see Three.js ShaderMaterial and Babylon.js ShaderMaterial). This will give developers full control over visualization of their models and 3D Tiles.
For background, CesiumJS already has various levels of support for custom shaders:
With 3D Tiles Next around the corner, we have new methods of storing metadata, including per-texel metadata, that are ready to be unlocked.
Approaches
Two possible approaches for supporting custom shaders are described below:
1. Add a callback to the fragment shader whose inputs are derived from the primitive's attributes (its `VertexArray`). We could also add a callback to the vertex shader where outputs might be position and point size, and maybe some abstraction for varyings. This is roughly similar to Fabric and post-processing stages.
2. Expose the `Renderer` API. This is similar to KHR_techniques_webgl.

For now I'm leaning towards option 1. A rough example might be:
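A hypothetical sketch of the callback style (names illustrative, not an actual API):

```js
// Hypothetical sketch of option 1: a fragment shader callback whose
// inputs are derived from the primitive's vertex attributes
const model = new Model({
    //...
    fragmentShader: `
        void fragmentMain(in FragmentInput input, out FragmentOutput output) {
            // visualize normals
            output.color = vec4(input.normal * 0.5 + 0.5, 1.0);
        }
    `
});
```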
Engine Architecture
Already in progress, see https://github.com/CesiumGS/cesium/pull/9517
FeatureMetadata
Questions
We're still early in the design phase and there are many open questions:
- Should custom shaders be supported at the `Cesium3DTileset` level, particularly for heterogeneous tilesets where not all contents share the same vertex attributes?
- Are the `czm_` built-in functions enumerated anywhere?

Related issues