Open vorg opened 2 years ago
The way it was solved in Nodes ECS was that RenderTexture and CameraSystem were the parents, duplicated per viewport, each with their own render systems below. I could then have a PBR renderer in the left viewport and a basic renderer in the right viewport.
Currently, even if I make two cameras and two viewports, and use tags to render PBR entities on the left and cloned entities tagged with unlit=true on the right, the helper system would still need to draw after those two, twice, and "on top of the viewports". How would the depth buffer be shared? Or what about combining a deferred PBR renderer with a forward helpers renderer?
One idea that I wanted to do in Nodes but never had time for was to decouple the renderer from materials/techniques. So I could have rendering technique "providers" (PBR, ThickLines, UnlitWithShadows) upstream or as inputs, and then a RenderExecutionSystem that would use the entities and the appropriate rendering technique providers to draw in the current viewport.
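A minimal sketch of that decoupling idea. None of these names exist in pex-renderer; the providers, their `matches`/`draw` shape, and `createRenderExecutionSystem` are all hypothetical, only there to illustrate how a render execution system could stay ignorant of techniques:

```js
// Hypothetical technique "providers": plain objects exposing a predicate
// that claims entities and a draw function for the current viewport.
const pbrProvider = {
  name: "pbr",
  matches: (entity) => entity.material && !entity.material.unlit,
  draw: (entity, calls) => calls.push(`pbr:${entity.name}`),
};
const unlitProvider = {
  name: "unlit",
  matches: (entity) => entity.material?.unlit,
  draw: (entity, calls) => calls.push(`unlit:${entity.name}`),
};

// RenderExecutionSystem picks the first provider that claims an entity
// and lets it record its draw call; it knows nothing about PBR vs unlit.
function createRenderExecutionSystem(providers) {
  return {
    update({ entities, calls }) {
      for (const entity of entities) {
        const provider = providers.find((p) => p.matches(entity));
        if (provider) provider.draw(entity, calls);
      }
    },
  };
}

const renderExec = createRenderExecutionSystem([unlitProvider, pbrProvider]);
const calls = [];
renderExec.update({
  entities: [
    { name: "helmet", material: {} },
    { name: "gizmo", material: { unlit: true } },
  ],
  calls,
});
// calls is now ["pbr:helmet", "unlit:gizmo"]
```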
Currently pex-renderer@3 systems.renderer draws all active cameras one by one (hoping they have different viewports), and maybe it should actually be upside down: the camera view calling a renderer 🤔...
Something something render graphs
Maybe Related
ThreeJS manually calls renderer.render(scene, camera) 3x https://threejs.org/examples/webgl_multiple_views.html
The need for a render graph is there even before multiple views. I'm not sure how to avoid ending up with a generalized graph library no different from Nodes itself. The main challenge is still having a way for both the PBR renderer and a ScreenSpaceLineRenderer (and a ParticlesRenderer and an SDFRenderer) to contribute to e.g. the same shadowmap.
The way OurMachine was doing it is to specify attachment points where more passes can be added before final graph execution. In the graph below those would be the GBuffer Pass, the Shadowmap Pass and the Depth Prepass.
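A rough sketch of such attachment points, with made-up names (`createFrameGraph`, `attach`, `execute` are not pex API): the base rendering algorithm declares named passes, systems append extra work to those passes, and only then is the whole graph executed. This is also one way for several renderers to contribute to the same shadowmap:

```js
// Hypothetical frame graph with named passes as attachment points.
function createFrameGraph(passNames) {
  const passes = new Map(passNames.map((name) => [name, []]));
  return {
    // systems call this before execution to contribute to an existing pass
    attach(passName, fn) {
      if (!passes.has(passName)) throw new Error(`Unknown pass: ${passName}`);
      passes.get(passName).push(fn);
    },
    // runs every pass in declaration order, recording into a log
    execute(log) {
      for (const [name, fns] of passes) {
        log.push(`begin:${name}`);
        fns.forEach((fn) => fn(log));
        log.push(`end:${name}`);
      }
    },
  };
}

const graph = createFrameGraph(["Depth Prepass", "Shadowmap Pass", "GBuffer Pass"]);
// both the PBR renderer and a line renderer contribute to the same shadowmap
graph.attach("Shadowmap Pass", (log) => log.push("pbr meshes -> shadowmap"));
graph.attach("Shadowmap Pass", (log) => log.push("screen-space lines -> shadowmap"));

const frameLog = [];
graph.execute(frameLog);
```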
Green - passes from the graph / rendering algorithm
Purple - passes from systems
Blue - textures
Graph made in knotend
Would an API like that be acceptable? That doesn't even touch the RenderGraph yet. But once you want to start customizing things or have an advanced use case, things will get "manual" pretty fast. I wonder how much of that should be hidden? We could even remove world.addSystem/world.update completely.
```js
// can't just automatically render all the systems as we have two views
// world.update();
geometrySys.update();
transformSys.update();
cameraSys.update();
skyboxSys.update();

// draw left side, debug view
// no clue how to pass the camera here; in Nodes ECS we pass entities manually
// and can therefore filter / select the camera before the entities list reaches the renderer
view1.draw(() => {
  rendererSys.update({ debugRender: "directLightingOnly" });
  helperSys.update();
});

// draw right side
view2.draw(() => {
  rendererSys.update();
});
```
It kind of leads to the conclusion that there is no world object and you just pass the entities list around, for maximum flexibility and compatibility with Nodes.
```js
const entities = [];
entities.push({ ... });
geometrySys.update({ entities });
transformSys.update({ entities });
```
Or you keep the world and pass it instead of the entities list, even though there is nothing more in it ATM.
Actual working example:
```js
const view1 = createView([0, 0, 0.5, 1]);
const view2 = createView([0.5, 0, 0.5, 1]);
// ...
geometrySys.update(entities);
transformSys.update(entities);
skyboxSys.update(entities);

// draw left side, debug view
view1.draw((view) => {
  const aspect = view.viewport[2] / view.viewport[3];
  entities
    .filter((e) => e.camera)
    .forEach((e) => {
      e.camera.aspect = aspect;
      e.camera.dirty = true;
    });
  cameraSys.update(entities);
  rendererSys.update(entities);
  helperSys.update(entities);
});

// draw right side
view2.draw((view) => {
  const aspect = view.viewport[2] / view.viewport[3];
  entities
    .filter((e) => e.camera)
    .forEach((e) => {
      e.camera.aspect = aspect;
      e.camera.dirty = true;
    });
  cameraSys.update(entities);
  rendererSys.update(entities);
});
```
This is very much a library approach, way more than a framework, but I guess that's a good thing?
Could that be abstracted in a view system then?
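One possible shape for such a view system, sketched with stubs. `createViewSystem` and the `{ view, systems }` config are hypothetical, and the systems and views are stand-ins (the real `view.draw` would set GL viewport/scissor state around the callback); the point is only that the per-view aspect update and system ordering from the example above can be wrapped once:

```js
// Stub systems that record their invocation order (stand-ins for the real ones)
const order = [];
const makeSys = (name) => ({ update: () => order.push(name) });
const cameraSys = makeSys("camera");
const rendererSys = makeSys("renderer");
const helperSys = makeSys("helper");

// Stub view: just invokes the callback with itself
const createView = (viewport) => ({
  viewport,
  draw(cb) {
    cb(this);
  },
});

// Hypothetical view system: owns the views and the systems to run inside each
function createViewSystem({ views }) {
  return {
    update(entities) {
      views.forEach(({ view, systems }) => {
        view.draw((v) => {
          const aspect = v.viewport[2] / v.viewport[3];
          entities
            .filter((e) => e.camera)
            .forEach((e) => {
              e.camera.aspect = aspect;
              e.camera.dirty = true;
            });
          systems.forEach((sys) => sys.update(entities));
        });
      });
    },
  };
}

const viewSystem = createViewSystem({
  views: [
    { view: createView([0, 0, 0.5, 1]), systems: [cameraSys, rendererSys, helperSys] },
    { view: createView([0.5, 0, 0.5, 1]), systems: [cameraSys, rendererSys] },
  ],
});
const entities = [{ camera: { aspect: 1 } }];
viewSystem.update(entities);
// order: camera, renderer, helper for the left view, then camera, renderer
```

The asymmetry between views (helpers only on the left) is then just configuration instead of hand-written draw callbacks.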
Conclusion on render graphs research:

- rg.* nodes and gl.ResourceCache nodes
- add pass dependencies and a pass runner (aka RenderGraph) and start small (e.g. shadow mapping, or a split-screen camera and a single postprocessing pass)

The biggest challenges remain the same.
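The "pass dependencies + pass runner" part of that conclusion can be sketched in a few lines. This is a hypothetical minimal runner, not pex API: each pass names the passes it depends on, and the runner resolves them depth-first so every dependency renders before its dependents (no cycle detection, just the happy path):

```js
// Hypothetical minimal pass runner: executes passes in dependency order.
function runPasses(passes, log) {
  const byName = new Map(passes.map((p) => [p.name, p]));
  const done = new Set();
  const run = (pass) => {
    if (done.has(pass.name)) return;
    (pass.deps || []).forEach((dep) => run(byName.get(dep)));
    done.add(pass.name);
    pass.render(log);
  };
  passes.forEach(run);
}

const passLog = [];
runPasses(
  [
    { name: "main", deps: ["shadowmap"], render: (l) => l.push("main") },
    { name: "shadowmap", deps: [], render: (l) => l.push("shadowmap") },
    { name: "post", deps: ["main"], render: (l) => l.push("post") },
  ],
  passLog
);
// passLog: ["shadowmap", "main", "post"]
```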
Started new issue about Render Graphs https://github.com/pex-gl/pex-renderer/issues/315
Some of the examples use multiple viewports / cameras. In pex-renderer@2 there was one main renderer, so it could re-draw the same scene 3-4x while maintaining the order of shadowmap updates / main scene render / helpers render.
With the new system-based approach I have separate systems.renderer and systems.helper that draw their own stuff, and more to come (particles, thick lines, etc). How would multiple viewports work?