Clustered shading is a technique that allows for efficient rendering of thousands of dynamic lights in 3D perspective games. It can be integrated into both forward and deferred rendering pipelines with minimal intrusion.
(top) 512 lights (bottom) 1024 lights | both scenes rendered using clustered deferred on an Intel CometLake-H GT2 iGPU @60 fps
The traditional method for dynamic lighting is to loop over every light in the scene to shade a single fragment. This is a huge performance limitation as there can be millions of fragments to shade on a modern display.
What if we could just loop over the lights we know will affect a given fragment? Enter clustered shading.
> [!IMPORTANT]
> Clustered shading divides the view frustum into 3D blocks (clusters) and assigns lights to each based on the light's influence. If a light is too far away, it is not visible to the cluster. Then in the shading step, a fragment retrieves the light list for the cluster it's in. This increases efficiency by only considering lights that are very likely to affect the fragment.
Clustered shading can be thought of as the natural evolution to traditional dynamic lighting. It's not a super well known technique. Since its introduction in 2012, clustered shading has mostly stayed in the realm of research papers and behind the doors of big game studios. My goal is to present a simple implementation with clear reasoning behind every decision. Something that might fit in a LearnOpenGL article.
We'll be using OpenGL 4.3 and C++. I'll assume you have working knowledge of both.
> [!TIP]
> If you are viewing this in dark mode on GitHub, I recommend trying out light mode high contrast for easier reading. Text can appear blurry to the eye in dark mode.
The view frustum and camera position (center of projection) form a pyramid like shape
The view frustum is the space between the `zNear` and `zFar` planes. This is the part of the world the camera can "see". Shading is only done on fragments that end up inside the frustum.
Our goal is to divide this volume into a 3D grid of clusters. We'll define the clusters in view space so they are always relative to where the camera is.
uniform division (left) and exponential division (right)
There are two main ways to divide the frustum along the depth: uniform and exponential division.
Exponential division lets us cover the same depth range with fewer divisions. And we generally don't care if this causes a lot of lights to be assigned to those far-out clusters: under perspective projection, the further away an object is, the less screen area it covers, so there are fewer fragments to shade.
So exponential division it is. We'll use the equation below, which closely matches the image on the right.
$$Z=\text{Near}_z\left(\frac{\text{Far}_z}{\text{Near}_z}\right)^{\frac{\text{slice}}{\text{numSlices}}}$$
This equation gives us the positive Z depth from the camera at which a slice boundary sits, where $Z$ is some value between the near and far planes.
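As a quick sanity check, the formula can be evaluated on the CPU. This is a sketch; `sliceDepth` is an illustrative name, not from any particular codebase:

```cpp
#include <cmath>

// Positive view-space depth of the boundary of slice `slice` out of
// `numSlices`, following the exponential scheme above.
float sliceDepth(float zNear, float zFar, unsigned slice, unsigned numSlices)
{
    return zNear * std::pow(zFar / zNear, float(slice) / float(numSlices));
}
```

Slice 0 lands exactly on the near plane and slice `numSlices` exactly on the far plane, with each boundary a constant ratio further out than the last.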
In addition to slicing the frustum along the depth, we also need to divide on the xy axis. What subdivision scheme to use is up to you. If your near and far planes are very far apart, you'll want more depth slices.
A good place to start is 16x9x24 (x, y, z-depth) which is what DOOM 2016 uses. I personally use 12x12x24 to show the division scheme can be anything you choose.
The simplest way to represent the shape of the clusters is an AABB (Axis Aligned Bounding Box). Unfortunately, a side effect is that the AABBs must overlap to cover the frustum shape, as the image below shows.
The points used to create the AABB cause overlapping boundaries
This still gives good results performance wise. You could choose a more accurate shape and improve shading time as lights are better assigned to their clusters. But what you're ultimately doing is trading faster shading for slower culling.
In fact, I'll make a bold claim: This algorithm does not benefit from more accurate cluster shapes or distributions of clusters. The reason being, there is not a lot of room for optimization without complicating and slowing down the cluster creation or culling step.
We use a compute shader to build the cluster grid. This is all fully functioning code, taken straight from my OpenGL playground project.
I encourage you to stop here, study this code, and adapt it into your game or engine! See if you can spot the exponential formula from earlier.
We divide the screen into tiles and convert the points to view space sitting on the camera near plane. This essentially leaves us with a divided near plane. Then for each min and max point of a tile on the near plane, we draw a line from the origin through that point and intersect it with the cluster's near and far planes. The intersection points together form the bound of the AABB.
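The core geometric step can be sketched in isolation. Below is a hedged CPU version of the line-plane intersection, with a minimal vector type standing in for GLSL's `vec3` and an illustrative function name:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Intersect the line from the eye (the view-space origin) through `point`
// with the plane z = zPlane. Since the line passes through the origin,
// scaling the point so its z lands on the plane gives the intersection.
Vec3 lineIntersectionWithZPlane(Vec3 point, float zPlane)
{
    float t = zPlane / point.z;
    return { point.x * t, point.y * t, zPlane };
}
```

For each tile corner on the near plane, calling this with the cluster's near and far depths yields the points whose component-wise min and max form the AABB.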
> [!NOTE]
> `screenDimensions` is more accurately thought of as the dimensions of the glViewport under which lighting will be done.
And a few notes on the C++ side:
- `glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT)` ensures the writes to the SSBO (Shader Storage Buffer Object) by the compute shader are visible to the next shader.
- `alignas(16)` is used to correctly match the C++ struct memory layout with how the SSBO expects it. According to the OpenGL Spec, the `std430` memory layout requires the base alignment of an array of structures to be the base alignment of the largest member in the structure.
```cpp
struct alignas(16) Cluster
{
    glm::vec4 minPoint;             // 16 bytes
    glm::vec4 maxPoint;             // 16 bytes
    unsigned int count;             // 4 bytes
    unsigned int lightIndices[100]; // 400 bytes
};
```
The largest member in this struct is a `glm::vec4` of 16 bytes. Since we are storing an array of `Cluster`, the struct should have its memory aligned to 16 bytes. If you don't know what memory alignment is, don't worry: we basically need to add padding bytes to make the total struct size a multiple of 16 bytes. We can manually add some dummy variables or let the compiler handle it with `alignas`.

If you are not storing an array of structures in `std430`, and as long as you stay away from `vec3`, you probably don't need to worry about alignment.
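The padding can even be verified at compile time. Here is a sketch using a stand-in `Vec4` in place of `glm::vec4` (same 16-byte size and alignment):

```cpp
#include <cstddef>

struct Vec4 { float x, y, z, w; };  // stand-in for glm::vec4 (16 bytes)

struct alignas(16) Cluster
{
    Vec4 minPoint;                  // 16 bytes
    Vec4 maxPoint;                  // 16 bytes
    unsigned int count;             // 4 bytes
    unsigned int lightIndices[100]; // 400 bytes
};                                  // 436 bytes of members, padded to 448

static_assert(alignof(Cluster) == 16, "std430 array stride expects 16 here");
static_assert(sizeof(Cluster) % 16 == 0, "size must be a multiple of 16");
```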
Our goal now is to cull the lights by assigning them to clusters based on the light influence. The most common type of light used in games is the point light. A point light has a position and radius which define a sphere of influence.
We brute force test every light against every cluster. If there is an intersection between the sphere and AABB, the light is visible to the cluster, and it is appended to the cluster's local list.
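The intersection test itself is the standard squared-distance check between a sphere and an AABB. A CPU sketch with illustrative names:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// True if the sphere (center c, given radius) touches the box [minP, maxP].
// Accumulates the squared distance from the center to the closest point
// on the box, one axis at a time.
bool sphereIntersectsAABB(Vec3 c, float radius, Vec3 minP, Vec3 maxP)
{
    float d2 = 0.0f;
    const float ctr[3] = { c.x, c.y, c.z };
    const float lo[3]  = { minP.x, minP.y, minP.z };
    const float hi[3]  = { maxP.x, maxP.y, maxP.z };
    for (int i = 0; i < 3; ++i)
    {
        float clamped = std::max(lo[i], std::min(ctr[i], hi[i]));
        float d = ctr[i] - clamped;
        d2 += d * d;
    }
    return d2 <= radius * radius;
}
```

The GLSL version in the culling shader follows the same logic, just with `clamp` and `dot` on `vec3`s.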
Let's look at the cluster struct.
```glsl
struct Cluster
{
    vec4 minPoint; // min point of AABB in view space
    vec4 maxPoint; // max point of AABB in view space
    uint count;
    uint lightIndices[100]; // elements point directly to global light list
};
```
The min and max define the AABB of this cluster like before.
`lightIndices` contains the lights visible to this cluster. We hardcode a max of 100 lights visible to a cluster at any time. You'll see this number used a few times elsewhere; if you want to increase it, make sure to change it everywhere.

`count` keeps track of how many lights are visible. It tells us how much to read from the `lightIndices` array.
We'll use another compute shader to cull the lights. Compute shaders are just so awesome because they are general purpose and great for parallel tasks: in our case, testing the intersection of thousands of lights against thousands of clusters.
Let's have each compute shader thread process a single cluster.
Now let's update the C++ code, mainly to create the lights. How exactly this is done differs for everyone, but the following suits the basic purpose.
The compute shader has 128 "threads" per workgroup. We dispatch 27 workgroups for a total of 3456 threads. This is to fit the design of each thread processing a single cluster. Remember we have 12x12x24 = 3456 clusters. If you change anything, make sure to change your dispatch to match the total thread count with the number of clusters.
Also, since each thread processes its own cluster and writes to its own part of the SSBO memory, we don't need to use any shared memory or atomic operations! This keeps the compute shader as parallel as possible.
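The dispatch arithmetic can be sanity-checked on the CPU. A sketch (the function name is illustrative):

```cpp
// Number of workgroups needed so total threads >= total clusters,
// given local_size_x = 128 in the culling shader.
unsigned workgroupCount(unsigned numClusters, unsigned localSize)
{
    return (numClusters + localSize - 1) / localSize; // ceiling division
}

// 12 x 12 x 24 clusters with 128 threads per group -> 27 groups, 3456 threads.
// glDispatchCompute(workgroupCount(12 * 12 * 24, 128), 1, 1);
```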
Now that we have built our cluster grid and assigned lights to clusters, we can finally consume this data.
We calculate the cluster a fragment is in, retrieve the light list for that cluster, and do cool lighting. This is basically reversing the calculations in the cluster compute shader to solve for the xyz indices of a cluster.
Your lighting shader should look something like this:
```glsl
#version 430 core

// PointLight and Cluster struct definitions
// ...

// bind to light and cluster ssbo
// same as cull compute shader

uniform float zNear;
uniform float zFar;
uniform uvec3 gridSize;
uniform uvec2 screenDimensions;

out vec4 FragColor;

void main()
{
    // view space position of a fragment. Replace with your implementation
    vec3 FragPos = texture(gPosition, TexCoords).rgb;

    // locating which cluster this fragment is part of
    uint zTile = uint((log(abs(FragPos.z) / zNear) * gridSize.z) / log(zFar / zNear));
    vec2 tileSize = screenDimensions / gridSize.xy;
    uvec3 tile = uvec3(gl_FragCoord.xy / tileSize, zTile);
    uint tileIndex =
        tile.x + (tile.y * gridSize.x) + (tile.z * gridSize.x * gridSize.y);

    uint lightCount = clusters[tileIndex].count;
    for (int i = 0; i < lightCount; ++i)
    {
        uint lightIndex = clusters[tileIndex].lightIndices[i];
        PointLight light = pointLight[lightIndex];
        // do cool lighting
    }
}
```
Here `FragPos` is the view space position of the fragment. The absolute value of `FragPos.z` gives us the positive Z depth of the fragment from the camera. Remember, that's exactly the left-hand side of the exponential equation from earlier. Solving that equation for the slice gives the z index of the cluster.

```glsl
uint zTile = uint((log(abs(FragPos.z) / zNear) * gridSize.z) / log(zFar / zNear));
```

Finding the xy index of the cluster is very simple. We have the screen coordinates of the fragment from `gl_FragCoord.xy`; we just need to divide by the tile size. Again, this is the reverse of what the cluster compute shader does.
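Putting the lookup together, here is a CPU mirror of the index math (a sketch; the function and parameter names are illustrative):

```cpp
#include <cmath>

// viewZ is the (negative) view-space z of the fragment; fragX/fragY are
// window coordinates; gridX/Y/Z is the cluster grid resolution.
unsigned clusterIndex(float viewZ, float fragX, float fragY,
                      float zNear, float zFar,
                      unsigned gridX, unsigned gridY, unsigned gridZ,
                      float screenW, float screenH)
{
    unsigned zTile = unsigned((std::log(std::fabs(viewZ) / zNear) * gridZ)
                              / std::log(zFar / zNear));
    unsigned xTile = unsigned(fragX / (screenW / gridX));
    unsigned yTile = unsigned(fragY / (screenH / gridY));
    return xTile + yTile * gridX + zTile * gridX * gridY;
}
```

A fragment on the near plane in the bottom-left tile maps to index 0; one near the far plane in the top-right tile maps to the last cluster.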
A common problem is flickering artifacts. This could be either:

1. Your light is affecting fragments outside its defined radius. This causes uneven lighting. Try adding a range check to your attenuation.
2. There are too many lights visible to a single cluster. Remember we hardcoded a max of 100 lights per cluster at any time. If this limit is hit, further intersecting lights will be ignored, and their assignment will become unpredictable. This can happen at far-out clusters, since the exponential division causes those clusters to be very large.

   Solution: Increase the light limit. The only cost is more GPU memory. You can also add a check in your lighting shader to output a warning color.
```glsl
uint lightCount = clusters[tileIndex].count;
if (lightCount > 95)
{
    // getting close to limit. Output red color and dip
    FragColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
    return;
}
```
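For the first cause, the range check can be folded into the attenuation so a light's contribution reaches exactly zero at its radius. A sketch in C++ for clarity; the windowing term is one common choice and an assumption here, not the article's exact code:

```cpp
// Inverse-square-style attenuation, windowed to hit zero at `radius`.
float attenuate(float dist, float radius)
{
    if (dist >= radius) return 0.0f;          // hard range check
    float falloff = 1.0f / (dist * dist + 1.0f);
    float s = dist / radius;
    float window = 1.0f - s * s * s * s;      // fades smoothly toward the edge
    return falloff * window * window;
}
```

Without the range check (or windowing), a fragment can pick up light from a source whose sphere just misses its cluster, and the contribution pops in and out as the camera moves.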
The following benchmarks were measured using `glFinish()` and regular C++ clocks on my Linux machine with an Intel CometLake-H GT2. I found the integrated GPU results were more in line with what I expected, and they also show how competitive the algorithm is on low-end hardware. The scene uses clustered shading with deferred rendering, without any optimizations like frustum culling.
| Lights (radius) | Building Clusters | Light Assignment | Shading |
|---|---|---|---|
| 512 lights (13.0f) | 0.28 ms | 0.95 ms | 5.23 ms |
| 1,024 lights (7.0f) | 0.27 ms | 1.50 ms | 3.71 ms |
| 2,048 lights (3.0f) | 0.42 ms | 2.61 ms | 2.84 ms |
| 4,096 lights (2.0f) | 0.29 ms | 5.15 ms | 3.28 ms |
The benchmarks show that constructing the cluster grid takes constant time, while shading performance is largely affected by light radius. However, a bottleneck appears in assigning lights to clusters. This makes sense, since we are brute-force testing every light against every cluster. We need some way to reduce the number of lights being tested.
One way is to build a BVH (Bounding Volume Hierarchy) over the lights and traverse it in the culling step. This can produce good results, but IMO it overcomplicates things. Clustered shading is already an optimization technique, and I have doubts about spiraling into a rabbit hole of optimizing the optimizers.
The easiest solution here is to frustum cull the lights and update the light SSBO every frame. Thus, we only test lights that are in the view frustum. Frustum culling is fast and already standard in many games.
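A sketch of the per-light test (extracting the six planes from the view-projection matrix, e.g. via the Gribb/Hartmann method, is assumed to happen elsewhere; names are illustrative):

```cpp
// Plane (a,b,c,d) with the normal pointing into the frustum:
// a*x + b*y + c*z + d >= 0 for points inside.
struct Plane { float a, b, c, d; };

// Keep a light unless its bounding sphere lies fully outside some plane.
bool sphereInFrustum(const Plane planes[6],
                     float x, float y, float z, float r)
{
    for (int i = 0; i < 6; ++i)
    {
        const Plane& p = planes[i];
        if (p.a * x + p.b * y + p.c * z + p.d < -r)
            return false; // completely behind this plane
    }
    return true;
}
```

Lights that fail the test are simply not uploaded (or not indexed), so the culling shader never sees them.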
Questions, typos, or corrections? Feel free to open an issue or pull request!