Closed tschw closed 4 years ago
The proposal sounds good! What does the sketched-out implementation look like?
@tschw
I have an implementation sketched out, but I wouldn't want to waste my time coding it down without sufficient interest in merging it. Please let me know what you think!
@mrdoob
The proposal sounds good! What does the sketched-out implementation look like?
The sketched-out implementation is currently in my head, so I'd better just code it rather than explain every detail I have thought about so far. What I would do:
MeshBasicMaterial
The new interface specification could look somewhat like this
// EDIT: Removed. Outdated and confusing.
Something like uniform.type is no longer needed: since we query WebGLActiveInfo (a WebGL data structure that contains the GLSL type) here, we can use that one to choose the code path and then handle the values that need special care.
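As a rough sketch of that idea: gl.getActiveUniform is the real WebGL call that returns a WebGLActiveInfo, and the enum values below are the standard WebGL type constants, but setterFor is a hypothetical helper invented here for illustration.

```javascript
// Hypothetical sketch: instead of a user-supplied uniform.type, branch on the
// GLSL type reported by WebGLActiveInfo (as returned by
// gl.getActiveUniform( program, index )). setterFor is a made-up name.
var FLOAT_VEC3 = 0x8B51; // gl.FLOAT_VEC3
var FLOAT_MAT4 = 0x8B5C; // gl.FLOAT_MAT4
var SAMPLER_2D = 0x8B5E; // gl.SAMPLER_2D

function setterFor( activeInfo ) {

	switch ( activeInfo.type ) {

		case FLOAT_VEC3: return 'uniform3fv';
		case FLOAT_MAT4: return 'uniformMatrix4fv';
		case SAMPLER_2D: return 'uniform1i'; // texture units need special care
		default: return 'uniform1f';

	}

}
```

The renderer could build such a dispatch table once at program compile time, so per-frame upload needs no type lookup at all.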
Also, uniform.value would be removed without replacement. This is very important, because it would be confusing to keep it. The data structure becomes only a description now, making values conceptually misplaced: any number of instances can be created from this description, and it would be a mess having to clone it.
Uniform input sharing can happen by using a global object (such as the scene or the camera) as the data source.
Dynamic uniform inputs would continue to exist and would be evaluated in the context of the given data source. They would be called (once per frame in the case of run-time inputs) per unique source. The transform in the example above is an example of a dynamic (compile-time) input.
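As a rough sketch of what such a dynamic input might look like (the property names follow the proposal, but the computation itself is made up for illustration):

```javascript
// Hypothetical dynamic input: its transform would be evaluated once per frame
// per unique data source ('camera' here), rather than once per object.
var dynamicInput = {
	name: 'viewHeight',
	dataSource: 'camera',
	transform: function ( camera ) {

		// made-up computation: height of the view plane at the near distance
		return 2 * Math.tan( camera.fov * Math.PI / 360 ) * camera.near;

	}
};
```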
Refining the API will probably take some iterations. This is just an example. What matters most at this point are the concepts / semantics behind it, I think.
Yes, it would mean quite a few changes. OTOH, I think that this subsystem really deserves some changes, better sooner than later, because it could be so much more powerful / cleaner / easier to use and extend.
Do you think it would be possible to go step by step? Doing the uniforms change first for instance.
@mrdoob
Doing the uniforms change first for instance.
Which uniform change are you referring to?
I guess I could isolate the automation of .type without much trouble. It also works without an API break, except that the property is no longer looked at.
Do you think it would be possible to go step by step?
For the rest of the changes, I fear that it would cause a lot of extra work to separate them, because I'd have to do some intermediate scaffolding on all materials to make it possible. Then, the changes affect examples and everything - and I wouldn't really want to iterate those more than twice (creating the PR and getting it ready for merge after review). We're probably talking r77dev, because we may prefer to roll it in at the beginning of a cycle.
I guess I could isolate the automation of .type without much trouble. It also works without an API break, except that the property is no longer looked at.
Yep. That's the one I meant.
Two more noteworthy things about this suggestion:
ShaderMaterial and RawShaderMaterial will become the same thing, because when the program asks for nothing, it gets nothing,
ShaderMaterial and RawShaderMaterial will become the same thing, because when the program asks for nothing, it gets nothing,
I don't think I got this one. ShaderMaterial currently pre-populates the shaders and RawShaderMaterial doesn't. How does the new setup change this?
@tschw
ShaderMaterial and RawShaderMaterial will become the same thing, because when the program asks for nothing, it gets nothing,
@mrdoob
I don't think I got this one. ShaderMaterial currently pre-populates the shaders and RawShaderMaterial doesn't. How does the new setup change this?
Once every program has a complete description that specifies its linkage, we will pre-populate exactly what it tells us to.
(Please note this ticket covers a larger scope than what's implemented in my current PR, in case you got them mixed up.)
As far as this task is specified, we'd paste together the description just like the uniforms for a ShaderMaterial right now.
Beyond it, I'd like there to be a well-defined, easy-to-use and lightweight (in terms of in-core code) API that lets you rewire the default shading system and inject snippets into it. Before we get there, the macro jungle needs to be cleared. This is the first step. The second half would be dependency evaluation to get rid of all the #ifdef FEATURE stuff.
(Please note this ticket covers a larger scope than what's implemented in my current PR, in case you got them mixed up.)
Ah, I see I see. Yeah, I was mixing them up.
Here is the motivation for all this stuff:
Web application developers love Three.js, because it's THE big name out there and it's also obviously quite simple to use when getting started.
Then quite often, folks run into stuff they can't solve and hire experts, who currently have to choose between bad and worse by either
It really shouldn't be that hard. I think, we should try to turn the second case into a preferable one.
@mrdoob Making Three.js embrace programmable shading (which is really what this all comes down to) can save weeks, if not months of maintenance time on the library itself. Also, parts like animation or physical shading could easily be modularized when the renderer no longer needs to know them personally.
Yes yes! I'm totally on board!
Copying over the FAQ I posted in #8606 to have it all documented in one place.
@mrdoob
In your proposed API, how would the user share a ShaderMaterial between different objects and change a uniform value before each object renders?
This question can be understood in two different ways. I'll rephrase the question for both cases and answer both of them:
How does the user map an object property to a uniform?
var material = new THREE.ShaderMaterial( {
	linkage: [
		{ name: 'myUniform', dataSource: 'object', property: 'myProperty' }
		// [ ... ]
	],
	// [ ... GLSL code ]
} );
would instruct the renderer to upload the value in object.myProperty to the GPU program as myUniform.
How does the user actually run code to calculate a uniform value for an object that survived frustum culling?
Same as above, using { name: 'myUniform', dataSource: 'object', transform: myFunction }.
There may also be a property specified as the input for the function.
EDIT: We should have a wrapper for a string of code, so the function can be serialized.
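As a sketch of what such a wrapper could look like: CodeTransform is a hypothetical name invented here, not part of the proposal. The point is that the function is reconstructed from a plain string, so it cannot close over upvalues and can round-trip through JSON.

```javascript
// Hypothetical wrapper around a string of code, so a transform can be
// serialized with the material. Because the function body is just a string,
// it is guaranteed to be free of upvalues.
function CodeTransform( argNames, body ) {

	this.argNames = argNames;
	this.body = body;
	this._fn = new Function( argNames.join( ',' ), body );

}

CodeTransform.prototype.call = function ( source ) {

	return this._fn( source );

};

CodeTransform.prototype.toJSON = function () {

	return { argNames: this.argNames, body: this.body };

};
```

Usage might look like `new CodeTransform( [ 'object' ], 'return object.scale * 2;' )`, which the renderer could then invoke per data source in a coordinated fashion.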
Some more FAQ:
What if I want to depend on a camera or scene property instead?
Set .dataSource to 'camera' or 'scene'.
What if the value is supposed to become a compile time constant and not a uniform?
Use an uppercase name.
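For illustration, a tiny sketch of how that convention could be detected (the linkage entries and the isCompileTime helper are assumptions for this example, not proposed API):

```javascript
// Per the convention above, an all-uppercase name would produce a
// compile-time #define instead of a uniform. Entry names are illustrative.
var linkage = [
	{ name: 'MAX_BONES', dataSource: 'object', property: 'boneCount' },     // -> #define MAX_BONES <n>
	{ name: 'boneMatrices', dataSource: 'object', property: 'boneMatrices' } // -> uniform
];

function isCompileTime( entry ) {

	return entry.name === entry.name.toUpperCase();

}
```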
What if I want to use conditional code based on a boolean or presence of a property of other type?
Set .capability to the name of the define (*) to switch on your code.
(*) I would like this to become the name of a GLSL-snippet whose dependencies are resolved automatically, further down the road (beyond the scope of this proposal).
Having been focused primarily on functionality and the basic design principle, I guess it might be a good idea to refine the packaging a bit. A concrete API is easier to discuss.
It might be more expressive to group the structure by data source (also probably allowing the renderer to get away with less parsing / preparation work compiling a new program):
var material = new THREE.ShaderMaterial( {
	linkage: {
		'material': [
			{ property: 'map', name: 'map', capability: 'USE_MAP' }
			// ...
		],
		'object': [
			{ property: 'receiveShadow', capability: 'USE_SHADOWMAP' }
			// ...
		]
		// ...
	},
	// ...
} );
Looks like name is bad. Also, capability might be unclear, and the nesting of dependent inputs (shown further above) is awkward too. I therefore consider activates and activatedBy for this purpose.
It's also desirable to avoid redundancy in case of arrays. Therefore, I drop the idea of identifying compile-time inputs by being uppercase in favor of splitting what used to be name by case.
Example:
linkage: {
	'object': [
		{
			property: 'receiveShadow',
			activates: 'USE_SHADOWMAP'
		}
		// ...
	],
	'renderState': [
		{
			property: 'spotLights',
			variable: 'spotLights',
			cardinalityConstant: 'NUM_SPOT_LIGHTS'
		},
		// ...
		{
			activatedBy: 'USE_SHADOWMAP',
			property: 'spotShadowMap',
			variable: 'spotShadowMap'
		}
		// ...
	]
	// ...
}
Looks much better already.
/ping @bhouston Ben, I hope you are not too busy with whatever cool stuff you are currently building to take a look at this proposal. If I recall correctly, you too were making suggestions into this direction in the past. I'd very much appreciate your feedback!
/ping @gero3 Seeing you upvoted my PR (a part of this story), you may have a thought or two on this one as well.
/ping @vanruesc Seeing you digging in the guts of the renderer, seems you're also a good candidate to invite here.
You will probably figure out what I'm up to from my previous post. For more context, see the introductory post.
I am all for introspect and automated linking/setting uniforms. I'd also like materials to be self-descriptive in terms of their defines and uniform usage.
That said, I am a little averse to inversions of control, or a specification that contains what is essentially a DSL for how to resolve and convert things, but in an implicit way. I think that a DSL that does conversion and sets defines is problematic. I think it will contain an increasingly complex set of directives that few will understand, and then we'll likely augment it with optional functions to do the hard stuff; thus I am unsure about the DSL in the first place.
I do not see why this:
'object': [
	{
		property: 'receiveShadow',
		activates: 'USE_SHADOWMAP'
	}
],
'renderState': [
	{
		property: 'spotLights',
		variable: 'spotLights',
		cardinalityConstant: 'NUM_SPOT_LIGHTS'
	},
is preferable to the explicit:
if( object.receiveShadow ) defines.push( 'USE_SHADOWMAP' );
defines.push( [ 'NUM_SPOT_LIGHTS', renderState.spotLights.length ] );
There may be a bit of saving in terms of characters, but not a huge amount. But the second fragment of code is really easy to understand; the first isn't unless you know the DSL.
I also worry having a DSL is going to be an impediment to doing other types of introspection, and automatic generation of code.
That said it would actually be cool to have object, material, renderState, camera each set up their own uniform variables and defines. This would lead to better caching. One issue we have right now is that renderState defines are recomputed for each material, when really they could be computed once and shared between all materials. It would also get rid of the really long list we have currently of defines -- it would break it up into groups.
I think as well that we could pass the same defines to both vertex and fragment shaders -- that would simplify things too; one just gets an extra FRAGMENT define and the other an extra VERTEX define to allow for differentiation as needed.
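A small sketch of that idea: one shared define list plus a stage-specific define. buildPrefix is a hypothetical helper invented here, not three.js API.

```javascript
// Hypothetical: build a shader prefix from one shared define map, adding a
// stage define (VERTEX or FRAGMENT) so the two stages can differentiate.
function buildPrefix( stage, defines ) {

	var lines = [ '#define ' + stage.toUpperCase() ];

	for ( var name in defines ) {

		var value = defines[ name ];
		lines.push( value === true ? '#define ' + name : '#define ' + name + ' ' + value );

	}

	return lines.join( '\n' ) + '\n';

}
```

For example, `buildPrefix( 'vertex', { USE_SHADOWMAP: true, NUM_SPOT_LIGHTS: 2 } )` and `buildPrefix( 'fragment', ... )` would share everything except the stage define.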
I'd love materials to get the ability to create their own prefixes based on introspecting the object, material and renderState/camera. Making materials more self-contained, rather than dependent upon the internals of the WebGLRenderer would be huge in terms of decoupling.
@bhouston
Thanks for getting onto it so quickly.
a DSL for how to resolve and convert things, but in an implicit way
Where do you see an "implicit way"? Also, there is no conversion. Note that uniform.type is already gone and pretty much anything can be uploaded as-is.
Also, this scheme allows the renderer to make effective use of uniform buffers in WebGL 2, which is pretty much similar to your observation about caching in your second post.
I think that a DSL that does conversion and sets defines is problematic.
I'm doing this exactly because setting defines is already problematic, not handled correctly and very messy.
Here is an important part of the concept: I differentiate between two kinds of defines. Some are actually capabilities, advertised by the program and requested by the renderer. These should not stay defines, but instead become requests for specific snippets to be included or not, based on dependency evaluation (outside the scope of this discussion, to come after this is implemented). Others are simply compile-time constants.
I think it will contain an increasingly complex set of directives that few will understand,
I think it doesn't have to be that way if we manage to design it right. This is of course easier done collectively, hence my call for help.
and then we'll likely augment it with optional functions to do the hard stuff,
This must not happen (although I used something like it in my initial sketch)!
Some things will simply need to be coded out in the renderer and go to renderState.
There can be transforms, but only if they are free of upvalues and can be serialized (that is, a class around a JS string). Their input will be managed based on the data source, so they can be called by the renderer in a coordinated fashion and correctness can be guaranteed.
I do not see why this [...] is preferable to the explicit
Because it requires third-party vendors to hack into Three.js instead of being able to ship a module that just runs on top of it, or hack code around it.
Also, because the renderer has to know every single case instead of just every scenario, preventing Three.js from being modular and extensible.
There may be a bit of saving in terms of characters but not a huge bunch
Redundant strings compress very, very well. Also, we'd probably do the UniformsLib kind of thing, not having to repeat ourselves over and over, and instead paste it together (once we have an encapsulation for GLSL snippets, we can automate it alongside the code, but that's again outside the scope discussed here).
But the second fragment of code is really easy to understand, the first isn't unless you know the DSL.
...or unless the DSL is expressive enough to just be understood.
And sorry... But I think you err immensely on this particular point. Firstly, it's not a fair comparison because your code is oversimplified. You'd have to consider handling of shader instantiation and data dependence which would be centralized. Secondly, this kind of code is exactly the cauliflower we have currently growing in the renderer, and it's not at all easy to understand or modify at scale. To the contrary, you have to be careful not to step on a butterfly and break it (used to be even worse, though)!
I'd love materials to get the ability to create their own prefixes based on introspecting the object, material and renderState/camera. Making materials more self-contained, rather than dependent upon the internals of the WebGLRenderer would be huge in terms of decoupling.
Here you are!
That's what I'm after. Similarly making the internals of the renderer not depend on materials either. Ultimately several materials would be allowed to share the same rendering technique specifying the program (but that goes once again beyond the scope of this discussion).
Because it requires third-party vendors to hack into Three.js instead of being able to ship a module that just runs on top of it, or hack code around it.
The code example I gave didn't have to be inside of the WebGLRenderer -- to me that is a separate choice from whether or not one uses an explicit programming or table-driven approach (maybe that is a better term than implicit).
We could use an explicit approach but pass in the list of defines to add to, and other types of derived state, to a Material.updateProgram() function or something. And MeshPhysicalMaterial.updateProgram() could call MeshMaterial.updateProgram() to handle the basic parameters shared between multiple mesh materials.
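A minimal sketch of this hypothetical updateProgram() scheme. None of these names exist in three.js; they merely illustrate the suggested shape, with a derived material delegating to its base class.

```javascript
// Hypothetical explicit approach: each material class fills in the defines
// itself, instead of the renderer knowing every material personally.
function MeshMaterial() {}

MeshMaterial.prototype.updateProgram = function ( defines, object, renderState ) {

	if ( object.receiveShadow ) defines.USE_SHADOWMAP = true;
	defines.NUM_SPOT_LIGHTS = renderState.spotLights.length;

};

function MeshPhysicalMaterial() {}

MeshPhysicalMaterial.prototype = Object.create( MeshMaterial.prototype );

MeshPhysicalMaterial.prototype.updateProgram = function ( defines, object, renderState ) {

	// handle the parameters shared between mesh materials first
	MeshMaterial.prototype.updateProgram.call( this, defines, object, renderState );

	// then material-specific additions
	defines.PHYSICAL = true;

};
```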
If one wanted a table-driven approach to how state is updated, one could make TableMaterial.js, which uses a table to compute the results of updateProgram(), and one could derive other table-driven materials from TableMaterial rather than Material.
So could we decouple how a Material sets up its program from WebGLRenderer, even if the Material doesn't do it super efficiently or fully automatically (so more of a refactor of what we have, except we are introducing a decoupling of a Material's defines/uniforms etc. from WebGLRenderer), and then as a second step aim to automate how Materials set up their programs using a DSL/table-driven approach?
I guess I view this as less risky. Even if the DSL runs into problems, we at least got the Material-WebGLRenderer decoupling achieved.
Making materials more self-contained, rather than dependent upon the internals of the WebGLRenderer would be huge in terms of decoupling
That's what I'm after.
Yeah, I think we are after the same goals. I just have bad experience with attempts to make expressive DSLs or ones with various types of predicates - I find that one ends up spending a ton of time on designing the DSLs, when I really just want a decoupling of Material from WebGLRenderer - even if the decoupling has the same flaws as the current system in terms of a rat's nest of glsl code, at least I can add any type of parameter I want to my material and have it update without messing around with WebGLPrograms/WebGLProgram, etc.
I might be in the minority on this matter, though. I am willing to go with the flow...
@bhouston
[...] So could we decouple how a Material sets up its program from WebGLRenderer, even if the Material doesn't do it super efficiently or fully automatically (so more of a refactor of what we have, except we are introducing a decoupling of a Material's defines/uniforms etc. from WebGLRenderer), and then as a second step aim to automate how Materials set up their programs using a DSL/table-driven approach?
Let's take a look at it.
How does the renderer figure out when to instantiate a new program?
It can be figured out from the table, making it the obvious and simplest choice for me. But let's say we add a callback, so the material always sets all its parameters per object. Oh no, wait... that won't be very efficient, so we'd use multiple callbacks or methods (per object, per material, per camera...). A table can be looked at multiple times without adding much complexity and can be transformed into a more efficient form; a call hierarchy, OTOH, is more expensive to evaluate, especially when you are interested in different cases. I'm thinking about sorting by program instance in the renderer while not having to shadow all uniform data, for instance...
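To illustrate the "transformed into a more efficient form" point, here is a hedged sketch of precompiling such a table into per-source lists at program compile time. compileLinkage is a made-up name, and the filtering criterion is an assumption for this example.

```javascript
// Hypothetical precalculation step: walk the declarative table once at
// compile and keep, per data source, only the entries that map to uniforms
// (defines are resolved at compile and need no per-frame work).
function compileLinkage( linkage ) {

	var bySource = {};

	for ( var source in linkage ) {

		bySource[ source ] = linkage[ source ].filter( function ( entry ) {

			return entry.variable !== undefined;

		} );

	}

	return bySource;

}
```

A call-based approach could not be filtered like this; it would have to run all checks every time it is evaluated.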
So I guess we'd need every callback twice for uniforms and defines? What about their interdependence?
I currently don't see how this could end up any simpler than a table-driven approach, especially since we are duplicating the dependencies (currently suboptimally managed with defines in GLSL) once again in JavaScript client code for the uniform setup (instead of making preparations to get rid of the defines at GLSL level). Building extensions compatible with the default shading system becomes even more cumbersome.
What I do see, however, is that there would be a lot of calling around, and lots of refactoring necessary to bring this API into existence - even more to get it fast, if possible at all. It is more distant from the API people are used to (.type, .value), which one could say was "declarative with loose ends". Also, going declarative later would break the API twice instead of just once - and in a fairly radical way.
We would first have to separate programs (or "techniques") from materials at scene graph level, because the data model is currently incomplete in the extensible case: if your scene contains 1000 parametrizations of a ShaderMaterial with the same shaders, their code exists 1000 times in the file format, giving us a very obvious hint that something must be missing. Programs and materials are actually two different things.
It would be another route. I'm currently not at all convinced that it is a better one, but my argumentation might be biased from thinking a lot more about the declarative approach.
I like the idea of chopping that one up into smaller pieces, though. It might be possible to do a "backend first" implementation; rework the program cache and establish an internal API then used by the renderer, doing the interface in a second step. I'm not quite sure - I have to think it through and look deeper into it.
Yeah, I think we are after the same goals.
Ultimately to shape this library for the better :) We often approach problems differently, which can be very useful when designing software.
I just have bad experience with attempts to make expressive DSLs or ones with various types of predicates - I find that one ends up spending a ton of time on designing the DSLs,
This is a valuable point I must consider, however. In fact, it sounds like a trap made for me to fall into - since I love having everything thought out and just write it down at some point. This usually yields good quality of implementation, but takes more time and bears a risk of over-engineering.
Expressiveness is mostly about finding good names. I'm planning to submit the code at the beginning of a release cycle, so there's some time to refine this part.
Predicates indeed should be avoided. I don't see that I'm using any, really. Note that I don't consider activates and activatedBy predicates, since these are in fact already dependencies (it will read a lot clearer once the names actually refer to "chunks" instead of defines), and they allow filtering what needs to be updated for a particular program instance upon compile (a call-based approach would always have to perform all checks).
when I really just want a decoupling of Material from WebGLRenderer - even if the decoupling has the same flaws as the current system in terms of rats nest of glsl code, at least I can add any type of parameter I want to my material and have it update without messing around with WebGLPrograms/WebGLProgram, etc.
As you may have noticed, the suggestions scoped here follow a bit of a larger plan. But this issue is already complex compared to the average Three.js PR / suggestion, and I figured that if I wanted you guys to get aboard, I'd have to draw the line somewhere.
So being completely honest, in the longer term I want a little more than just decoupled materials, but a
I might be in the minority on this matter, though. I am willing to go with the flow...
Chances are, I end up there. Although, I'm still hoping you may have a change of heart after reading this post :) and, even more importantly, continue being helpful when going with the flow - in whatever direction.
Thanks for taking the time!
My last post was long... I guess it might take a while until someone gets around to reading it as a whole.
Summary of the key points:
1.
if( object.receiveShadow ) defines.push( 'USE_SHADOWMAP' );
defines.push( [ 'NUM_SPOT_LIGHTS', renderState.spotLights.length ] );
in a method of Material (or some Program class) is very much oversimplified. In practice, the code will have to precisely mirror the dependencies at GLSL level to be correct and reasonably efficient. There would need to be different entry points if we don't want to run the whole enchilada for every object being rendered. In reality, we would end up duplicating a fair amount of the mess we should rather get rid of, with a non-trivial mapping with respect to the GLSL side.
2.
Imperative entry points can only be evaluated as a whole and therefore are, even when optimally coded, easily outperformed by a declarative approach, which can easily be filtered and transformed as a precalculation step.
Querying an imperative specification by different criteria for this purpose means calling it several times with different objects, on which it would call back. This approach would in fact deduce a declarative specification internally. It is so complicated, and also prone to deoptimization in JS, that one really shouldn't go there IMO.
3.
A declarative specification is closer to the old uniform API, can easily be serialized, allows a refactoring of UniformUtils in its first iteration, and probably compresses better (the redundancy goes in one spot).
These are very strong arguments. I may not be entirely objective, but I have serious doubts we can find enough pros on the other side to outweigh them.
[nerdy]
@bhouston
table-driven approach (maybe that is a better term than implicit) [...] We could use an explicit approach...
@tschw
a call-based approach would always have to perform all checks
Actually we're talking declarative (story is told by data) vs. imperative (story is told by code) here, as opposed to something being explicit (expressed directly) or implicit (not expressed - an effect of some cause).
[/nerdy]
I cannot dedicate a huge amount of time to this discussion, and I think it comes down to personal philosophy. I do try to avoid complex code even if it means less than optimal performance. I do this because I've been burned by people who have coded optimal solutions at Exocortex, only to find out that these solutions are too complex to be maintained properly by others, or that they are so optimized for specific specifications that when the specifications/assumptions change, everything is basically useless and has to be rewritten.
If you can find examples of this in UE4, Unity or other game engines and that this is accepted best practice, I'll accept it. If you can not, I worry that it isn't likely going to be a long term robust and maintainable solution.
Some of my perceptions are colored by your Animation contributions. I feel that while your contributions improved its efficiency -- probably close to what was possible in terms of optimality -- they had significant costs in terms of increased code size and thus maintenance and grok complexity. The code size nearly doubled in that rewrite (especially if you include the new math interpolant stuff). In particular, AnimationMixer grew from 5kb to 27kb; it is now the second largest file in Three.js, the only bigger one being WebGLRenderer. I believe that no one has contributed to these rewritten classes but yourself since then -- it may be caused by the code being perfect, but it may also be partly caused by the code being complex and thus discouraging contributions.
I would rather have slightly less optimal code if it could halve the code base and make it easier for everyone contributing to Three.js to extend and maintain.
Maybe this proposed contribution of yours is different, but I fear that you are focusing on performance optimality of code rather than balancing performance with maintainability.
I do not want to be a blocker on this so I think I should recuse myself from this discussion and let others chime in. If a PR works on all shaders and is elegant, even if declarative, I'd support it.
I, with limited weight here, would say don't let the weak links limit those with more ability, and so I would say disabling functionality because other people can't handle or achieve it is not the right way forward. "Let them catch up if they can." Still, the wisdom and experience of Ben is valid and something important to consider. [Previous remark reworded from "Let them eat cake."]
I'm one who would say creation of all geo should have the ability to take an object {BTW that PR was rejected as a bad API here; still, I plan a proto injector for myself which may be added as a helper if argued well}, so passing arguments in an object to a constructor etc. and searching it intelligently with defaults for settings/arguments, and NOT being "declarative" with a long list of arguments in a specific order. My recent PR for adding indexing to Object3D children has a findObject function that works in this way too (it's an approach I would call intelligent design because it sorts out whatever you give it).
As a final thought for those maybe too focused on THREE, maybe ask the question this way: "How does it integrate with GraphQL?" Probably not as off topic as you might initially expect. Think reactive database synchronization, or you're missing what the future has brought to us today. In the end these ideas probably belong at a higher level built on top of THREE, so it's imperative to have a version 1.0 (some lock down soon) unless you're prepared to add these kinds of injectors etc., which the 3rd party tool can still do anyway.
Anyway, it seems well thought through. I'm still getting oriented and have really no knowledge of this stuff yet. I just think if I can explain my thinking enough for you to consider how these other ideas from Meteor etc. and Reactivity etc. integrate, then I defer to your expertise for final implementation decisions. Thanks for listening. Oh, no keys should have quotes in my opinion.
@bhouston
Slightly less optimal code [ ... ] balancing performance with maintainability.
I'm primarily doing this for maintainability and extensibility (see above). An imperative API would be substantially harder to maintain - not just for us, but also on the client's end - and it does not allow for automation.
However, I cannot entirely ignore performance: what happens for every object in the renderer is the critical code path on the JS side. Running a nested call hierarchy doing lots of redundant stuff there may not make it just slightly slower. That doesn't mean I have to make everything as fast as possible, but it does mean I will be the one to blame when the renderer gets a lot slower. For example, we currently lay out uniform lists of everything a program really wants (based on the defines) at compile, and I'd really like to keep preparation work like this, because I know that it works. I also do not want to close the door on future optimizations, which can be implemented more flexibly on a data structure than on a call hierarchy.
Some of my perceptions are colored by your Animation contributions. I feel that while your contributions improved its efficiency -- probably close to what was possible in terms of optimality
Naw, not even. In fact, I think I could still make it faster by a factor of 2 to 4, maybe even more.
-- they had significant costs in terms of increased code size and thus maintenance and grok complexity.
In the aftermath, I was thinking we might have been better off sticking with the earlier version of my PR. We then had that heated discussion during review, which, believe it or not, got under my skin as well, and I partly let it out on the code. By the time you were suggesting to go with it as-is, the next round of commits (the ones that made it "perfect", in the sense of being the worst enemy of "good enough") was already on its way.
OTOH the code is not unmaintainable at all, and I think it actually improved:
Better performance was really needed, because we actually had quirky workarounds for it in some of the examples in r73 in order to use the new animation system - it was too slow to be applied in a straightforward way. Being able to quickly interpolate arbitrarily wide arrays of parameters (also in combination with mixing) is tremendously useful for prototyping work. Three.js is often used in prototypes. Smooth interpolation is also useful in this context, and could also let you cut your keyframe data in a production scenario (someone could add an external node script that does the fitting, for example).
Further, there's a lot of whitespace, intermediate variables and additional comments in there. It's not all additional complexity, so...
The code size nearly doubled in that rewrite (especially if you include the new math interpolant stuff.) In particular AnimationMixer grew from 5kb to 27kb, now it is the second largest file in ThreeJS, the only one bigger is WebGLRenderer.
...although I don't deny there is some, this is not really a fair measure for it. Also, AnimationAction moved into that file.
I believe that no one has contributed to these rewritten classes but yourself since then -- it may be caused by the code being perfect, but it may also be partly caused by the code being complex and thus discouraging contributions.
It's "perfect", note the quotes :). So, yes, I do get your point. That said, we can't really know whether the r73
code would have been more inviting for folks to hack into. Sure would have needed it more, though.
Further iterations may let me make it smaller again (by factoring out the caching logic) and also capture the additional factors of speed that are still out there. It's currently not at the top of my list, but not forgotten either.
I can not dedicate a huge amount of time to this discussion
That's fine. You dedicated some time and I'm grateful for that. Even if your reaction is not as hooraying as I had hoped, knowing your concerns is just as valuable.
I do try to avoid complex code even if it means less than optimal performance. I do this because I've been burned by people who have coded optimal solutions
I'm now warned not to go too wild on that end, for just one example :). As explained above, however, I have to make sure it won't end up with less-than-acceptable (slower than ever before) performance.
If you can find examples of this in UE4, Unity or other game engines and that this is accepted best practice, I'll accept it. If you can not, I worry that it isn't likely going to be a long term robust and maintainable solution.
That's a bit of a limiting view, because we have to deal with different constraints, such as our existing API & code, library size, JS, and WebGL.
That being said, I'm currently taking a good look at those engines.
You won't find an equivalent data structure exposed, because data binding is already automated there (which is still some PRs away for us). Automatic linking requires a data structure, so, yes, there must be something like this inside somewhere.
From what I found so far, there appears to be a switch between compile time values and uniforms on demand in both of them and it's of course an interesting point to consider for us as well.
For Unity3D see http://docs.unity3d.com/Manual/SL-PropertiesInPrograms.html Caption: "How property values are provided to shaders" (there's no ID in the html): A Unity3D Material is in fact an encapsulation of the program, and the per-instance data (e.g. albedo) comes from MaterialPropertyBlocks (not so sure I like this kind of naming).
UE4 looks a lot more interesting: A Material
is a very high-level thing, intermixed with multipass concepts, blueprint evaluation and baking considerations, so it's not quite so easy to figure out what's going on down below from the rather brief docs on shader development. I'm currently building it to get some hands-on experience. That usually helps before digging into the source, and I wanted to anyway.
From what I can see right now, geometric considerations are split from true (material-) shading in a fairly straightforward way at HLSL level. This is not a simple vertex/fragment stage separation, but both parts can actually contribute code to both stages. One binds to object-level properties and the other to its material. A material shader can be generated for all available geometry stages by setting a flag. Properties in the structures appear to be matched against each other automatically. Except for data structures, shader setup is a one-liner in C++.
https://docs.unrealengine.com/latest/INT/Programming/Rendering/ShaderDevelopment/index.html
The inputs in the shading code are generated on demand:
https://github.com/EpicGames/UnrealEngine/blob/release/Engine/Shaders/Common.usf#L83
apparently related https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/MaterialInstances/index.html
Reading
https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/CustomizedUVs/index.html
it appears as if UE4 might actually be capable of compiling some blueprints to the vertex shaders (!).
Also there are (at most two) auxiliary data sources per material:
(pretty much modeled as scene properties in my proposal). More to come on this front - UE4Editor is halfway built...
I do not want to be a blocker on this so I think I should recuse myself from this discussion and let others chime in. If a PR works on all shaders and is elegant, even if declarative, I'd support it.
Great! I honestly think it is one.
And it doesn't mean it always has to stay the API - it may very well move to implementation level at some point, but we have to go step by step. I'm confident that it will be a step in the right direction.
@MasterJames
Let them eat cake.
Exactly! Since cake is better than no cake. And trust me, they will eat it and love it :)
IOW: Don't underestimate programmers! C didn't, and became very popular exactly because of it.
I'm one who would say creation of all geo should have the ability to take an object {BTW that PR was rejected as a bad API here; still, I plan a proto injector for myself, which may be added as a helper if argued well}, so passing arguments in an object to a constructor etc., searching it intelligently with defaults for settings/arguments, and NOT being "declarative" with a long list of arguments in a specific order.
To get the terms right: Actually what you are proposing is declarative (a story being told by data - not by code, and I think that's often a good thing) and more explicit (the meanings of the arguments are expressed directly) than when taking many ordered arguments, where their meaning is not expressed at all and only implied by their order. One could say that calling a constructor with many arguments is a more imperative style of programming, but one may argue that an argument list can be passed via apply to a function and thus be seen as kind-of an array; consequently the argument list would still be data, somehow.
In code:
// explicit code, because the meaning of the argument is directly expressed
// - the argument is declarative
new MyClass( { stuff: comesHere } ); // I say that it's "stuff"
// much more flexible in terms of defaults, yields clearer code,
// though it costs an extra object

// implicit code, because the meaning is implied (not expressed)
new MyOtherClass( comesHere ); // I don't say what it is
// arguably a more imperative style of programming

// imperative style, explicit, but not declarative at all
var x = new MyThirdClass(); // do create!
x.stuff = comesHere;        // *do* assign!
x.moreStuff = goesNext;     // *do* assign!
// can still do this when the extra object in the first variant is
// too high of a cost
My recent PR for adding indexing to Object3D children has a findObject function that works in this way too (it's an approach I would call intelligent design, because it sorts out whatever you give it). As a final thought for those maybe too focused on THREE, maybe ask the question in this way: "How does it integrate with GraphQL?". Probably not as off topic as you might initially expect.
Absolutely not! And exactly what I was just thinking.
We're at an almost philosophically abstract level comparing these two cases, but design principles are often universal and kind-of philosophical as well.
Think reactive database synchronization, or you're missing what the future has brought to us today.
Well, I see that part as one of those trends that matter for a while, because there is something new to consider. You might as well go SQL, or XML/XPath/XSLT, those too solved real problems, believe it or not, tables encoding state machines, ... or, wait... I might have an even better one:
Data can be filtered, transformed, sorted and rearranged - and those tasks are pretty easy and can often even be automated. Now try that with code! It won't work - it always needs to be changed, plain and simple, or "refactored" if you want to sugarcoat it (better done with the cake than with cauliflower).
Or how about the popularity of the spreadsheet?
IMHO these trends all boil down to really just one message: Love data-driven approaches! They make things easy!
Folks don't use them because it's sooo tedious to come up with data structures; also, our heritage from the predominance of OOD over many years may add to this problem (getters and setters - active boilerplate all over the place, declared to be good style). Instead there's a lot of hacking and messing around.
Currently the evangelists chant their choir about the granularity of protocol when synchronizing a phone to a database (or a server mapping it). Maybe because web services became so easy to implement for developers to do what they always do: Mess things up :). Yes, of course there's a shitload of traffic on the wire when you call back and forth all the time, what were you thinking? Oh yeah, that's so-called "easy" because you can easily agree on it in a meeting and add more crap to it without ever thinking. But then again, we all do that. And it's often perfectly reasonable, because there's not much time to think, or the phone is ringing, or we want to have it running already, or there are thought-terminating cliches in the team so it won't make sense trying to convince the others, or we're taught it would be good, or it turns out to work well enough under certain constraints.
It actually happens on many levels. And ultimately leads to software becoming crappier as rapidly as hardware becomes faster. Now considering that hardware became exponentially faster over a decent period of time that's quite an achievement!
https://en.wikipedia.org/wiki/Moore's_law https://en.wikipedia.org/wiki/Wirth's_law
There's no common sense in the software industry, that's why there are these crazy hypes. All of a sudden something works when you do it right. Oh what a surprise, let's write another book! Non-functional requirements are often simply ignored as long as everything works somehow. Would I go off topic if I started on IT security now? Probably... However, there's one or the other scary story about lack of thinking there as well.
When you think enough about software, you can get it right. Just have to take the time and actually do it.
Thanks for listening.
Thanks for your feedback! Hope you could make some sense of my reply. I don't mind being considered old-fashioned.
Oh no keys should have quotes in my opinion.
Are you just expressing personal taste here?
I like to quote them to clarify their distinct meaning from ordinary properties (that's why I did here, for the sake of this discussion). I usually use single quotes to refer to what I like to call "technical strings": URLs, paths, file names, keys... I use double quotes for literal ones, that is UI messages or parts thereof.
Thanks again for the full effort in your responses. Final note on keys with quotes: it's in the spec for JSON, but it's not needed and I disagree with using them - maybe, as usual (like paths on the command line), for a space, but that's silly or avoidable. They waste space; it's a preference, but one founded on fact of experience, unless for discussion as you say.

When writing my rant to instigate thinking, I realized there's usually room for interpretation and confusion, so thanks for the clarity on declarative etc. So say you have a system like Meteor, with the data stored in a database; it synchronizes globally and is done in the most optimal way, of which GraphQL is yet another enhancement. First off, it's complicated! But to the user not so much (client-side mini-Mongo etc.). Anyway, if the data arrives as an object and you create a box, and the size changes in the data, it's supposed to be reactive and change the box for all subscribers. Does THREE allow that today? Not at all. You'd have to scale it instead or recreate it (yes, it's hard to meld these because of GPU optimizations - position is much better suited, but it should animate there). Can I pass the object directly to the constructor? Not at all, you have to convert it to an array of arguments. Well, it's maybe not being designed for what appears to be the way forward from here (which could change - we don't know how the future will be, but still one must try to "will" it to be superior). Does THREE need to be designed for this? Maybe it's better not. I'm still contemplative on what level that should be, but putting the ideas out there will help to solve that quandary through discussion.

So again, thanks for the opportunity to be heard. Maybe I'll find some time to make a separate discussion later, or once I'm hopefully out of the dog house from crashing machines with my spotlight optimizations, since we went off topic somewhat. I'm not sure if complexity is bad for THREE, as it's pretty heavy already.
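To make the "pass the object directly to the constructor" point concrete, here is a minimal sketch in plain JS. The helper names (withDefaults, createBox) are invented for this example and are not three.js API; the idea is just that a record coming straight from a sync layer could be passed through unchanged:

```javascript
// Hypothetical helper - fills in defaults for any fields the incoming
// data record doesn't provide.
function withDefaults( params, defaults ) {

	var out = {};

	for ( var key in defaults ) {

		out[ key ] = params[ key ] !== undefined ? params[ key ] : defaults[ key ];

	}

	return out;

}

// a wrapper around a geometry constructor could then accept the record
// directly instead of a list of positional arguments:
function createBox( params ) {

	var p = withDefaults( params, { width: 1, height: 1, depth: 1 } );

	// with three.js this would become something like:
	// return new THREE.BoxGeometry( p.width, p.height, p.depth );
	return p;

}

var record = { width: 2, depth: 3 }; // e.g. straight from the database
var box = createBox( record );       // height falls back to its default
```

A reactive layer could then re-run such a wrapper whenever the record changes, without the caller having to know the argument order.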
Making GPU low-level stuff more accessible is surely the main goal. I think passing objects around is the way forward - with JS objects or JSON (well, BSON) - and the code should really be not only declaratively data driven but syncing from the object itself. Another radical demonstration I stuffed into the original new FPS object I made in my spotlight examples (purged now) showed how to pass an object into the class so the developer can see in real time what's changing inside it: no need to call for the value from the class, it just updates it where the system has already put something for it. Yes, you can still call a function to get the average FPS etc., but it's an example of how you actually don't need to. The same rules apply the other way: the data changes on the database and it should just propagate into the class instance without function calls. Another example: do I even need a callback if I'm waiting for the data to change? No, not when a function [aka promise (in everything these days, React for a current example)] is handled this way. Meteor will propagate the data directly into the instance. Yes, we live in a complicated landscape with complicated problems, and it takes time to solve, explain and learn to understand the complicated solutions required. Eventually the code crystallizes into a simpler solution by boiling down and distilling out the redundant, just like I've explained how a callback can be made obsolete - it's more of a duplicate and redundant. So, keeping it simple for average people? Well, it may be the wiser route, but in the long run probably not, in light of these examples, aka my opinion. One surely worries about different add-ons, injectors, variations, forks etc. getting way out of hand, and wishes for a clear definition.
Again, it's all just argument to stabilize a version 1.0, and like Ben said, maybe we need to follow the other players, but I think, this being the JS solution for 3D and on a slow course to dominate (which Clara demonstrates to the industry, as well as a monetizable delivery platform), it's time to innovate and lead them. And so that's how I see things needing to be. I think we are on the same page, even if I still think we should round up rad to deg etc. Lol
@MasterJames I worked with object oriented and hierarchical databases long before they were as popular as they are now. Not just a little, but several years of one of my "previous lives", actually. It already made sense back then, so I'm not at all surprised to see a trend, now that there are new constraints in the mix.
Before there's a named recipe, someone actually has to cook it. If you love cooking and have been doing it for a while, you may find that a new recipe often is just a name for stuff you've cooked before, simply because it happened to be delicious ;).
That doesn't mean you should always cook your own soup. It often makes a lot of sense to use some trendy technology, because that's what people are familiar with. However, I don't get too excited about these trends, but instead try to keep my feet on the ground and pick the best solution based on common sense.
For example, I don't think you really need to look at hierarchical databases to see that functions with many positional parameters add inflexibility and make it harder to learn an API. Next I'd consider how much it would simplify the loader code, ... The bottom line being that good software design is pretty much ageless, while trends are not.
like Ben said maybe we need to follow the other players, but I think this being the JS solution of 3D and on a slow course to dominate
The other players are certainly worth looking at. It's well-designed software written for constrained environments.
Reading Ben's concluding statement, it seems he's kind-of correcting the "follow" to an "evaluate and do what's best". I'm pretty sure he's actually well aware of the fact that JS can be quite different from C++, and that we have to take one step at a time.
Also, what he's suggesting can be a very powerful technique in C++ (especially in cases where you can't really use data structures because of heterogeneous types). Here is how I implement serialization / deserialization and property access from a scripting environment for objects of two different classes:
template< class V, typename T >
void visit_properties(orientation_controller<T>& _, V& v)
{
typedef orientation_controller<T> c;
v("position", _, & c::position) // a vector
("orientation", _, & c::orientation) // a quaternion
("translatory_velocity", _, & c::translatory_velocity)
("gyratory_velocity", _, & c::gyratory_velocity)
("translatory_acceleration", _, & c::translatory_acceleration)
("gyratory_acceleration", _, & c::gyratory_acceleration)
("translatory_friction", _, & c::translatory_friction)
("gyratory_friction", _, & c::gyratory_friction)
("global_rotation_x", _, & c::global_rotation_x) // flags whether rotation is global
("global_rotation_y", _, & c::global_rotation_y)
("global_rotation_z", _, & c::global_rotation_z);
}
it also works with setters and getters:
template< class V > void visit_properties(console& _, V& v)
{
typedef console c;
v("opacity" , _, & c::get_opacity, & c::set_opacity)
("background_relative_opacity", _, & c::get_background_relative_opacity,
& c::set_background_relative_opacity)
("line_offset" , _, & c::get_line_offset, & c::set_line_offset)
("visible_lines" , _, & c::get_visible_lines, & c::set_visible_lines)
("line_length" , _, & c::get_line_length, & c::set_line_length);
}
The framework then synthesizes different opaque entry points by specializing the Visitor V with different Strategies. I add this one function to a class file and three methods are implemented automatically (and I can easily add more without touching any of the classes). Note that there is no built-in class reflection in C++ like there is in Java or JS (no for...in loop), so names no longer exist in a compiled program. If you want to keep them, you have to list them somewhere at least once.
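For contrast, here is a rough JS sketch (names invented for illustration) of why that boilerplate disappears in JavaScript: property names exist at run time, so one generic function can cover all classes without any per-class listing:

```javascript
// In JS, run-time reflection (for...in / Object.keys) replaces the
// per-class visit_properties overloads needed in C++. Sketch only.
function serialize( object ) {

	var out = {};

	for ( var key in object ) {

		if ( object.hasOwnProperty( key ) ) out[ key ] = object[ key ];

	}

	return out;

}

var controller = { position: [ 0, 1, 0 ], gyratory_friction: 0.5 };
var data = serialize( controller ); // property names survive for free
```

The trade-off goes the other way in C++: since names are erased at compile time, they must be listed at least once, which is exactly what the visitor function above does.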
Thanks for your responses @tschw. I think I do want to step back from this discussion. Please keep the code simple as possible for the benefit of maintainability. And if it makes the project overall smaller it likely is a very good change. :)
This is an interest query for a PR.
Basic idea: The program specifies the data of interest and the renderer feeds it - no exceptions.
Consequences: -1000 LOC or more, an easier to maintain/extend renderer, no more obscure updating bugs in the (then no longer) hard-to-trace internals, more good stuff (I leave it as an exercise to the attentive reader).
Program inputs would specify a data source and a property name.
Examples for data sources are:

- material
- object
- camera
- renderState (I invented this one for all the free-floating stuff currently in local variables)

Typical properties for each of them:

- map
- receiveShadow
- projectionMatrix
- fog
These bindings exist right now - they are hard-coded in the renderer.
Moving them into the program description, the renderer can process these bindings in a loop. More importantly, the data dependencies can be resolved on a by-case basis (there are eight, as detailed below), instead by hard-coded binding (there are dozens, and piling up with every new feature that is being added).
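To make that concrete, here is a minimal sketch of what such binding descriptions and the generic feed loop might look like. All names (programInputs, feedInputs, upload) are hypothetical, chosen for this example only:

```javascript
// Each program input names a data source and a property on it. The
// renderer then feeds all of them in one generic loop, instead of
// dozens of hard-coded assignments.
var programInputs = [
	{ source: 'material', property: 'map' },
	{ source: 'object', property: 'receiveShadow' },
	{ source: 'camera', property: 'projectionMatrix' },
	{ source: 'renderState', property: 'fog' }
];

// sources maps source names to live objects; upload is whatever
// actually moves the value to the program (gl.uniform*, a #define, ...)
function feedInputs( inputs, sources, upload ) {

	for ( var i = 0; i < inputs.length; ++ i ) {

		var input = inputs[ i ];
		upload( input.property, sources[ input.source ][ input.property ] );

	}

}
```

Adding a new feature would then mean adding one description entry rather than another hard-coded code path in the renderer.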
There exist two fundamental kinds of inputs: compile time inputs, currently implemented via #defines emitted into the GLSL source, and run time inputs; uniforms in the case of GLSL (note that a software renderer may very well create and parametrize JavaScript functions by the same mechanism). A distinct case exists for every input kind per data source = 8 cases. Some of them are currently not handled correctly, but I'm confident that I can make it happen.
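A rough illustration of how the two kinds could be told apart when building a program (all names invented, not actual three.js code):

```javascript
// Sketch: compile time inputs are baked into the GLSL source as #define
// lines (changing them means recompiling); run time inputs are recorded
// for later per-frame uniform upload.
function applyInput( input, value, glslSource, runtimeValues ) {

	if ( input.compileTime ) {

		// baked into the program text itself
		return '#define ' + input.name + ' ' + value + '\n' + glslSource;

	}

	// recorded for upload via gl.uniform* once per frame / per source
	runtimeValues[ input.name ] = value;
	return glslSource;

}

var runtime = {};
var source = applyInput(
	{ name: 'MAX_BONES', compileTime: true }, 32, 'void main() {}', runtime );
source = applyInput(
	{ name: 'opacity', compileTime: false }, 0.5, source, runtime );
```

The eight cases from the text are then just this two-way split crossed with the four data sources, each resolved once instead of per hard-coded binding.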
There is currently a large amount of redundant code that just pumps data through the renderer. So how about teaching the renderer how to pump data (in general, once and for all) and have it scale?! Since built-in materials have a fixed program, which then knows exactly what it wants from the material (and the other data sources), all built-in stuff can also be removed - it would live on in very much condensed form in the descriptions of the built-in programs.
I have an implementation sketched out, but I wouldn't want to waste my time coding it down without sufficient interest in merging it. Please let me know what you think!