mvaligursky opened this issue 3 years ago
One major issue with the forward-renderer approach to lights is that light-related shader code is baked into materials. A huge bottleneck with this approach is enabling/disabling lights: it triggers shader revalidation and recompilation for all affected materials, which is extremely slow.
I guess with the shadow atlas and the clustered version, this will not be an issue anymore?
That's correct, adding/removing lights will not rebuild the shaders once fully integrated (it does currently). You also only pay for the lights that are "nearby" instead of all of them, for both static and dynamic lights.
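For context, a minimal sketch of opting into clustered lighting and sizing the cluster grid, assuming the public `Scene`/`LightingParams` API (defaults and availability vary by engine version):

```javascript
// Enable clustered lighting and configure the cluster grid.
app.scene.clusteredLightingEnabled = true;          // opt in (on by default in newer versions)
app.scene.lighting.cells = new pc.Vec3(16, 4, 16);  // world subdivision into cells
app.scene.lighting.maxLightsPerCell = 8;            // per-cell light budget
app.scene.lighting.shadowsEnabled = true;           // clustered lights can cast atlas shadows
```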
Currently, when creating lights with shadows using atlases, with the profiler engine, it does not add/remove VRAM usage from `app.stats.vram.texShadow`.
Also, adding new lights seems to generate new materials, but old ones might not be removed; see `app.stats.shaders.materialShaders`.
> Currently, when creating lights with shadows using atlases, with the profiler engine, it does not add/remove VRAM usage from `app.stats.vram.texShadow`.
I added `console.log(app.stats.vram.texShadow);` to the clustered-spot-shadows example update loop, and when I change the shadow atlas resolution using the slider, it updates the allocated size. How do you reproduce the issue? Internally, a normal shadow map is allocated, and that already works, so this works as well.
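In other words, the repro amounts to something like this (a sketch; `app` is the `pc.Application` from the example):

```javascript
// Log the shadow atlas VRAM each frame; with the profiler engine build,
// the value changes when the atlas resolution slider is moved.
app.on('update', () => {
    console.log(app.stats.vram.texShadow);
});
```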
Note that only a single atlas is used; there is no per-light allocation taking place. If you set the atlas to 4k x 4k, it is allocated once and then just subdivided for the visible lights that need shadows during the frame.
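For reference, the atlas size is a single engine-level setting rather than a per-light one (a minimal sketch, assuming the `LightingParams` API):

```javascript
// One shadow atlas shared by all clustered lights; allocated once at this
// size and subdivided per frame among the visible shadow-casting lights.
app.scene.lighting.shadowAtlasResolution = 4096;  // 4k x 4k
```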
> Also, adding new lights seems to generate new materials, but old ones might not be removed
I think the engine keeps all shaders around to avoid compilation should the same shader be needed at a later stage. When clustered lights are fully integrated, it will not create new shaders when lights are added, though this has not been done yet.
We use clustered lighting in a fairly large scene with many intersecting lights. Our case: 13 spotlights and one directional light. Scene: a large building (71x170) with lots of smaller objects inside. Cell size: 4x1x8, which gives a nice uniform subdivision of the scene.
The problem we are facing is that lights are assigned to cells based on AABB intersections. For spotlights this is a poor fit, given that a spotlight's volume is not box-shaped but conical with a spherical cap.
This leads to many lights being assigned to a single cell, which easily hits `maxLightsPerCell`. Reducing the cell size does not help, as lights will still be assigned to cells based on AABB intersections.
We've implemented very rough sorting: if a cell hits the limit, we sort its lights by the distance from their AABB centers to the cell's center (a sketch follows below). Now we can reduce `maxLightsPerCell` to a reasonable value (6 in our case) without the visual bugs of lights being cut off. Previously we had to increase `maxLightsPerCell` to 12 to get the same visual results, which reduced performance significantly.
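A rough sketch of that fallback (illustrative names, not engine API; each light is assumed to expose a `pc.BoundingBox` via `aabb`):

```javascript
// When a cell exceeds its budget, keep the lights whose AABB centers are
// closest to the cell center and drop the rest.
function trimLightsForCell(lights, cellCenter, maxLightsPerCell) {
    if (lights.length <= maxLightsPerCell)
        return lights;
    return lights
        .slice() // avoid mutating the caller's array
        .sort((a, b) => a.aabb.center.distance(cellCenter) -
                        b.aabb.center.distance(cellCenter))
        .slice(0, maxLightsPerCell);
}
```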
So to improve on this a couple of things can be done:
- Ensure that lights have their shape represented not by an AABB but by the relevant shape, so a more appropriate intersection is calculated. This will benefit everyone (see the cone test sketched after this list).
- Add conditional sorting for cells that hit the limit, which will let developers keep using lower values of `maxLightsPerCell` while reducing light cutoff artefacts as much as possible.
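For the first point, a cone-vs-sphere test along the lines of Bart Wronski's "cull that cone" write-up could serve as the tighter spotlight check. A self-contained sketch, approximating each cell by its bounding sphere (all names here are illustrative, not engine API):

```javascript
// Tiny vector helpers over plain { x, y, z } objects.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;

// Conservative spotlight (cone) vs. sphere test. The cone is defined by its
// apex, normalized direction, half-angle (radians) and range; the sphere by
// its center and radius.
function spotIntersectsSphere(apex, dir, halfAngle, range, center, radius) {
    const v = sub(center, apex);
    const axisDist = dot(v, dir);                  // distance along the cone axis
    const perpSq = Math.max(dot(v, v) - axisDist * axisDist, 0);
    // signed distance from the sphere center to the cone's surface
    const coneDist = Math.cos(halfAngle) * Math.sqrt(perpSq) -
                     axisDist * Math.sin(halfAngle);
    const angleCull = coneDist > radius;           // fully outside the cone's sides
    const frontCull = axisDist > radius + range;   // fully beyond the cone's range
    const backCull = axisDist < -radius;           // fully behind the apex
    return !(angleCull || frontCull || backCull);
}
```

The test errs on the side of false positives (it treats the far end as a plane rather than the spherical cap), but it is far tighter than an AABB-vs-AABB check for narrow spotlights.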
Better options now that WebGPU will be more mainstream:
This is the issue to track progress for the clustered lighting implementation in the engine. Here are the most important steps:
Optional features:
Other related PRs
Public release after rounds of beta testing: https://github.com/playcanvas/engine/pull/4586