mattdesl opened this issue 9 years ago
This all sounds awesome. Agree with your points re: stackgl too, would especially like to run through some of the modules to reduce state switches/checks where possible.
If you'd like any input/assistance let me know :)
I'm chiming in to say that our desires here are very aligned. I adore building webgl tech stacks for various use cases. I have done a lot of experimentation with the low level APIs and like the stackgl crew have acquired a taste for mild sugar with heavy doses of composition.
I would recommend we begin the effort by laying out a canonical application spec that this "framework" will service. This gives a clearer target and is also more fun, as there's a bit more of a ...product at completion.
Oculus' recent decision to only support Windows will be a serious blow to WebVR. A lot of people at Mozilla aren't too enthused; we don't like to ship APIs that only work on one platform. But there are other devices.
stackGL is great, but I'm not sure there's enough focus on docs/evangelism. Maybe there's lower-level documentation on a per-module basis, but I feel it's not clear what the common patterns are. If I were to start a project today, I wouldn't know which modules to reach for; some are probably very common or always used, while others are more niche. More integrated examples would show people how to get started.
Meanwhile, Three.js and Babylon have a lower barrier to entry IMO, so you get more people talking about them. There's no reason why stackGL can't both host research explorations and be beginner-friendly. I think stackGL's modularity gives it the advantage here; smaller, well-written modules allow for small research explorations, and allow higher-level abstractions to be built on top.
If three.js' API could be implemented in stackGL, what would the benefit be?
I'm curious whether preprocessing objects that follow a certain pattern or convention might prove to be worthwhile. We can profile calls against the GL context at runtime; I wonder if there's a way to close the feedback loop? What about having the user hint at objects that don't have dynamic properties or that stay on screen, and packing them into an interleaved array of values, rather than objects that each have their own buffers and thus require more draw calls?
Just a brain dump.
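To make the interleaving idea above a bit more concrete, here's a rough, hypothetical sketch (none of these names are real API; it just packs a few static objects into one interleaved buffer so they can share a draw call):

```js
// hypothetical sketch: pack static objects (same shader, no dynamic properties)
// into a single interleaved position+color buffer, so they can share one draw call
function packStatic (gl, objects) {
  var interleaved = []
  objects.forEach(function (obj) {
    // assumes flat arrays of vec3 positions and vec3 colors with matching lengths
    for (var i = 0; i < obj.positions.length; i += 3) {
      interleaved.push(
        obj.positions[i], obj.positions[i + 1], obj.positions[i + 2],
        obj.colors[i], obj.colors[i + 1], obj.colors[i + 2]
      )
    }
  })
  var buffer = gl.createBuffer()
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer)
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(interleaved), gl.STATIC_DRAW)
  // stride = 6 floats per vertex; positions at byte offset 0, colors at byte offset 12
  return { buffer: buffer, vertexCount: interleaved.length / 6 }
}
```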
Agree with many of the points above.
One of the reasons I think three.js is so well known is View Source though.
custom shaders are clunky to write, lots of magic under the hood
Do you use `THREE.RawShaderMaterial`?
@marklundin

> One of the reasons I think three.js is so well known is View Source though.

What do you mean? That the sources are easy to read?
Sorry, that was a bit vague.
If we were to use transforms to transpile certain syntactical features does that not put an extra barrier up for someone wanting to view source on a transpiled demo?
That's where JS in general is going anyway. Even THREE productions get minified etc. Sharing a production's JavaScript source is transitioning from being passive to active. I see another problem though: it means that tool adapters would be needed, or at least strongly advised, to use glo, with all the hassle (special tooling dev/updates) and problems (keeping tools adapted to the framework, etc.) that entails.
Count me in. I'd like to see something where in a few lines I can plop some geometry into a scene, light it, and bam I've got a result. I think it's important to include the base lighting and shading models, and a simple data-structure driven geometry model. I think the more everything focuses on the simple data, with functional interfaces the more appealing a framework like this would be for me.
I love the idea of being able to drill the abstraction down to the raw GL. I'd like to be able to load some model data, set up some lights, and add a shading model. Then when I realize I want to do something more custom in the shader, easily be able to break down the abstraction and write some custom shader code to do what I want.
All of my more custom three.js stuff involves mostly ignoring the existing lighting abstraction if I'm doing my own custom stuff.
So for me, the scope would ideally include basic Phong/Lambert/etc. shading models, a scene abstraction, and lighting models. I would splurge on scope and include more lighting rather than less, or at least keep a focused core with separate lighting modules that work with the base framework.
Glad there is a bit of interest in this.
@nickdesaulniers It's a pity for WebVR and will probably slow down its pace/interest, at least until a cross-platform solution comes around, or until OSX/Linux step up their GPU game.
If three.js' API could be implemented in stackGL, what would the benefit be?
I don't think it would be possible without a very highly coupled and "frameworky" set of APIs, which is antithetical to stackgl.
@benaadams
Do you use `THREE.RawShaderMaterial`?
Yup, it is horribly clunky (sorry) trying to build, for example, a custom phong shader. You basically end up copy-pasting ShaderChunks without rhyme or reason, and turning on/off defines and flags (like `lights` and `USE_MAP`) until all the attributes and uniforms fall into place. See this for an example in practice.
Worst of all, the next version of ThreeJS breaks your custom shader, so you need to start over again.
Compare to this phong shader with glslify, which could be modularized further and is not lock-stepped to any framework version.
@marklundin
If we were to use transforms to transpile certain syntactical features does that not put an extra barrier up for someone wanting to view source on a transpiled demo?
I'm still not 100% sold on using Babel/etc to author it. But I agree with @mrpeu; JS is often transpiled/bundled/compressed in modern workflows. Also, I'm not trying to create another ThreeJS, and this kind of framework (with so much emphasis on npm) isn't very useful without Browserify/Webpack/etc.
Also, I'm exploring a trie-like scene graph with a memoized update model. It creates a more functional approach to updating the state of the scene. More of a personal plug of what I'm into right now, but it might be interesting to explore. It stops you from having to do dirty-flag checking and all of the complexity that goes along with it. All you're doing is a quick === at the base of your trie structure to see whether a node in your graph has changed, and then recursively re-processing it if it has. The code then reads as if you're calculating everything from scratch each time, without the if statements, but only recomputes things as needed because of the memoization. So the interface would look something like this:
```js
updateShadingModel(getCurrentScene(), mesh, {
  type: 'phong',
  color: [1, 0, 0]
})
```
Then during update:
```js
previousScene === getCurrentScene()
// => false
```
So it starts walking up the graph and === checking all the nodes.
Probably the biggest potential issue with this is how much garbage it generates in a real-time application, but most application state changes are going to be mutating individual matrices and arrays, which wouldn't need to touch the trie. It would only be for adding geometry, lights, or changing shading models or shader setup.
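A very rough sketch of what I mean (all names made up, nothing final). Nodes are treated as immutable, so a plain === check is enough to skip an unchanged subtree:

```js
// rough sketch: nodes are never mutated, so === tells us whether a whole
// subtree can be skipped
function updateNode (prev, next, process) {
  if (prev === next) return          // unchanged subtree, skip entirely
  process(next)                      // e.g. rebuild shader / uniform state for this node
  var children = next.children || []
  children.forEach(function (child, i) {
    updateNode(prev && prev.children && prev.children[i], child, process)
  })
}

// each frame:
//   updateNode(previousScene, getCurrentScene(), recompileIfNeeded)
//   previousScene = getCurrentScene()
```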
@mrpeu
I see another problem though: it means that tool adapters would be needed, or at least strongly advised, to use glo, with all the hassle (special tooling dev/updates) and problems (keeping tools adapted to the framework, etc.) that entails.
The user would need to install Browserify or Webpack for any real use of the framework anyway, not just because it is mainly intended to be consumed through npm (so users can receive versioned updates), but also because it encourages growth in npm and doesn't come bundled with things like primitives or OBJ parsers.
This hypothetical framework is pretty opinionated, and will probably put off some JS devs because of that. But hopefully it also contributes back to npm in the form of isolated modules, so that the next time somebody says "I need to build a new 3D engine," they will have a lot of ground to stand on, and there will be a lot more shared code between frameworks.
@mattdesl @mrpeu It's a fair point; tooling is an integral part of the modern workflow, and personally, the idea of swizzling and even operator overloading would be awesome as some form of transform.
There are a few upcoming proposals that might prove relevant for WebGL and should definitely be considered when designing a lib:
- SIMD
- Immutable Data Structures
- Value Types, which might make operator overloading much more feasible
Sorry, this is a bit of an aside: @mattdesl
Compare to this phong shader with glslify, which could be modularized further and is not lock-stepped to any framework version.
Worst of all, the next version of ThreeJS breaks your custom shader, so you need to start over again.
`glslify` is a build preprocess anyway, is it not, if you want to use something like `glslify-optimize` as part of it? Why don't you make your shaders with glslify rather than ShaderChunks? If you can convert the output into a JavaScript string, that's all `THREE.RawShaderMaterial` cares about. The only real constraint in the shader code is that you need to name the matrices `viewMatrix`, `modelViewMatrix`, `projectionMatrix`, `normalMatrix` and the camera position `cameraPosition` if you want the auto link-up. (Assuming you are already using `BufferGeometry`.)
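Roughly something like this (just a sketch, assuming a reasonably recent ThreeJS with `BufferGeometry`; the shader strings could just as easily be the output of glslify):

```js
// sketch only: declare the standard uniform names yourself and the renderer fills them in
var material = new THREE.RawShaderMaterial({
  vertexShader: [
    'uniform mat4 projectionMatrix;',   // auto-wired by name
    'uniform mat4 modelViewMatrix;',    // auto-wired by name
    'attribute vec3 position;',
    'void main() {',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'precision mediump float;',
    'uniform vec3 tint;',               // our own uniform, supplied below
    'void main() {',
    '  gl_FragColor = vec4(tint, 1.0);',
    '}'
  ].join('\n'),
  uniforms: {
    tint: { type: 'v3', value: new THREE.Vector3(1, 0, 0) }
  }
})
```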
@benaadams I actually didn't realize there was a difference between `RawShaderMaterial` and `ShaderMaterial`, thanks for pointing that out. Does it also include light uniforms etc? Or are you on your own?
@mattdesl you are on your own with completely empty shaders; however, if you add any of the standard uniforms to the shader source it will wire them up, so either stay away from the standard uniforms or use them, depending. It only auto-wires up the global types, so matrices, `fog` and `lights` for example, and only if you include them in the source.

Also, you do know you can change what's included in three.js by running build.js as part of your build and altering what's included via common.json and extras.json?
Absolutely agree, mainly on the shaders / textures, which I think could be separate modules imported whenever needed.

ThreeJS is great as it's humanly readable, but because of that, it's bloated.
One of the reasons I think three.js is so well known is View Source though. ThreeJS is great as it's humanly readable...
Both Three.js and stackGL can either be obfuscated, or not. Can someone show an example where Three.js is more readable than stackGL, or at least where stackGL becomes unreadable? This would help with the design of glo.
I would like to add that I would greatly appreciate a serious focus on speed over newbie-friendliness. I think elegant solutions tend to be more instructive to new-comers anyway as it teaches them the real patterns of performant system design more-so than a slick high-level interface.
ps I'm sorry so many hyphens crept into this post...
I think the argument was not whether to build glo using webpack/browserify, but whether to force end users/developers into a specific toolchain or not.
A transform would allow syntactic sugar like vector component access and swizzling while still keeping raw data underneath, which should help with SIMD.
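Purely as an illustration of what such a transform might do (not real syntax or a real tool):

```js
// given two points stored as flat typed arrays:
var a = new Float32Array([0, 1, 2])
var b = new Float32Array([3, 4, 5])

// what you might write with swizzle/operator sugar (not valid JS today):
//   var dir = b.xyz - a.xyz
//   dir.xy = [0, 0]

// what the transform could emit, keeping flat data underneath
// (which is also the layout SIMD wants):
var dir = new Float32Array([b[0] - a[0], b[1] - a[1], b[2] - a[2]])
dir[0] = 0
dir[1] = 0
```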
Personally I think any transform should be a secondary discussion.
Yup @marklundin it definitely should not be forced on the end-user. Made #3 for further discussion on that
I'm more curious whether transformation can be employed as a sort of ahead-of-time optimization: if we have lots of static meshes in separate modules, and we recognize they have the same material-like shading, can we combine them ahead of time (merging buffers via degenerate triangles, for example) so that we can draw multiple geometries with one call?
For instance, there are quite a few classical compiler optimizations done for ahead-of-time compiled languages. I'm curious whether there are opportunities for us to do the same, but with geometry merging instead of loop-invariant code motion and friends.
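A naive illustration of the kind of merge I mean (hypothetical helper working on plain arrays of `gl.TRIANGLE_STRIP` positions; it ignores winding parity, which a real merge would handle):

```js
// join two triangle strips into one draw call by bridging them with
// degenerate (zero-area) triangles: repeat the last vertex of A and the
// first vertex of B
function joinStrips (a, b) {
  var lastOfA = a.slice(a.length - 3)   // xyz of A's final vertex
  var firstOfB = b.slice(0, 3)          // xyz of B's first vertex
  return a.concat(lastOfA, firstOfB, b)
}
```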
Potentially yes, and definitely interesting, but I suspect these sorts of optimisations would be better suited further up the asset pipeline.
Personally, I think the dev should have full control of the GL state. The lib should not try to cover for common bad practice.
I've developed http://vorg.github.io/pex/ for similar reasons as above, so I think it would add to the discussion to say what went wrong / well, as it stands between THREE (monolithic framework) and stackgl (micromodules).
Why:
Good:
Bad:
Next:
Hard things and questions to glo:
@vorg thanks for your thoughts. Plask sounds awesome (could it theoretically support any desktop OpenGL features?). I'd love to use it for installations/prints so I'd be happy to support it as a target (gotta figure out how to set up Plask first).
I agree with solid low-level GL wrappers for shader, texture, cube, FBO, etc, and that's where I'm going to start. I also think those are the easiest to modularize, so Pex might benefit from the work I'm doing. See #4 for some early discussion on that.
I suspect my first iteration of all this will be pretty rough around the edges, and maybe not even usable for a real production. But hopefully in the process some crisp modules/shaders will come out of it that can benefit stackgl, pex, pixi, ThreeJS as well as any subsequent iterations of glo.
I think this is good. One thing I've been meaning to do is kill off gl-shader eventually and switch over to a command-buffer-based interface (sort of like how Vulkan does things). This is kind of what I was getting at with the commutative rendering note that I wrote a while back.
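Purely as a strawman (all names made up, not a real API), the kind of thing I mean is draws described as plain data, submitted in one go, so a backend is free to validate, reorder and merge them up front instead of every object mutating GL state directly:

```js
// strawman only: placeholder resources
var phongShader = {}
var boxGeometry = {}
var sphereGeometry = {}

var commands = [
  { shader: phongShader, uniforms: { color: [1, 0, 0] }, geometry: boxGeometry },
  { shader: phongShader, uniforms: { color: [0, 1, 0] }, geometry: sphereGeometry }
]

function submit (gl, commands) {
  commands.forEach(function (cmd) {
    // a real backend would sort/batch here (e.g. keep identical shaders adjacent
    // to skip redundant binds); this naive loop just replays the commands:
    // bindShader(cmd.shader); setUniforms(cmd.uniforms); drawGeometry(cmd.geometry)
  })
}
```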
General things that I would like to see come out of this:
There are also some things that I think would be good to avoid:
I haven't looked into Vulkan so I'm pretty green on how a command buffer interface would look in practice or how it would make gl-shader (or something similar) obsolete.
I'm in agreement though with all the stuff you listed. The main thing I want is a pipeline for multi-pass rendering, which includes lighting, shadows, and post-fx, and provides a clearer focus on gamedev and other artistic experiences.
I am also really hesitant on a full scene graph since there are so many ways of tackling it and it creates a lot of lock-in. They are great to prototype with, but I'd like to develop it independently of the render pipeline if such a thing is possible.
Any chance of being es6 friendly? Stackgl is firmly in the browserify/require() camp which is unnecessary workflow if already using import/export. Possibly jspm can converge the two worlds.
@backspaces fwiw, you can still use es6 imports with stackgl and babelify.
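For example, something along these lines (assuming babelify is in your browserify pipeline; `gl-context` and `gl-shader` are existing stackgl modules, with the shader source inlined here just for brevity):

```js
import createContext from 'gl-context'
import createShader from 'gl-shader'

const canvas = document.body.appendChild(document.createElement('canvas'))
// gl-context creates a WebGL context and calls render each frame
const gl = createContext(canvas, render)

const shader = createShader(gl,
  'attribute vec2 position; void main() { gl_Position = vec4(position, 0.0, 1.0); }',
  'precision mediump float; void main() { gl_FragColor = vec4(1.0); }')

function render () {
  shader.bind()
  // ...set uniforms / attributes and issue draw calls here
}
```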
It should be ES6 friendly if you are using babel and a bundler that supports npm. It will be authored in ES5 for the time being, see #3 for discussion.
Can someone post a Gist showing es6 (babel and/or traceur) and a module loader (preferably one by-passing browserify/npm but OK if not possible) using basic git and npm development?
I've seen Guy Bedford posts saying that a module loader, possibly jspm, can import git/npm etc so I suspect it may be possible for stackgl to have a workflow without browserify, vastly simplifying development.
Is there something in using stackgl that requires more than simply importing it? I.e. does glslify require additional workflow fu?
I'm worried the project is painting itself into a corner. As wonderful as small modules are, and boy am I a believer, requiring complex workflow is a non-starter.
Sadly, ES6 does not simplify development. The main reason is modularity.
One of the goals of this project is to produce new modules that are independent of `glo`. This way, whether or not the framework "succeeds," at least it will have contributed a lot of new features to npm that can be used in other projects (like ThreeJS, Pex, and unrelated fields). Since starting this project, dozens of modules have already been spawned and split off from its codebase:
For these tiny modules, transpiling adds a lot of overhead when testing, publishing and consuming the module. If the source of `glo` is written in ES5, it is easier to just split the code out and publish it immediately.
Also, bear in mind that users are expected to interact with npm and modules to build an application with `glo`. I am not planning on bringing ray intersection or OBJ model parsing into this framework, since those features can easily live independently on npm.
Also, most of the shaders are encouraged to be made with glslify (which needs a build step), to take advantage of shared GLSL components. Example
I don't have a gist, but most of my recent projects are in ES6 even though most of the modules I'm importing are ES5. The build step requires two lines and leads to a very fast development workflow. More info here.
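The two lines are npm scripts along roughly these lines (one possible setup, not the only one: budo as the dev server, browserify for the final bundle, with babelify and glslify as browserify transforms; exact flags vary per project):

```
"scripts": {
  "start": "budo index.js --live -- -t babelify -t glslify",
  "build": "browserify index.js -t babelify -t glslify > bundle.js"
}
```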
Right now this repo is just brainstorming ideas about creating a new WebGL framework.
This might be my new pet project, or maybe something more collaborative, or maybe nothing will happen and we'll just continue getting shit done with ThreeJS. :smile:
what's wrong with ThreeJS?
It's awesome! But my main gripes:
what about StackGL?
Also awesome, and not mutually exclusive to a new framework. Tons of modules will be useful, especially glslify. And many parts of the framework can be modularized and "given back" to the stackgl ecosystem. However:
- `gl.getParameter`
- `ndarray-ops` etc, which may not be needed

so what about this imaginary new framework?
Let's call it `glo` for now (name pending), since light is going to be important. Nothing is clear, but here are a few ideas:

few notes on implementation / structure
big questions
Well.. there's a lot of things to figure out. But let's get the obvious questions out of the way: