This is an interesting approach on paper – I hadn't thought about it until now, but I've seen a demo doing this with rasterization a few years ago.
A problem with this approach is that it doesn't work with dynamic objects, which is one of the main arguments for using raytraced GI in general. Having static and dynamic objects look the same is a big part of the appeal, so that dynamic objects never appear to be "floating" due to mismatched lighting. The current probe-based approach suffers from broken rendering.
Also, the fact that you need UV2 may still be a problem for level designers, as many of them rely on CSG nodes, which don't generate UV2. Procedural level generation also makes this difficult to do (though not impossible). SDFGI or even VoxelGI would still fare better as a one-click solution here.
That said, there are plans to implement raytracing extensions in the GPU lightmapper (as it'd be significantly faster), but it would remain optional. The compute-based path would remain available for GPUs that don't support raytracing.
> whereas lightmaps can go down to extremely low resolutions with the only artifact being pixelation
Bicubic sampling can be used to hide pixelation with a small additional cost, but it's not reimplemented in 4.x yet. It also tends to make noise in lightmaps a bit less noticeable since it results in a softer appearance.
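For reference, bicubic filtering only needs the 4x4 texel neighborhood around each sample. A minimal sketch in Python of Catmull-Rom bicubic sampling on a single-channel map (the function names and clamped-border behavior here are my own illustration, not Godot's 3.x implementation):

```python
import numpy as np

def cubic_weights(t):
    # Catmull-Rom spline weights for the four nearest texels; they sum to 1.
    t2, t3 = t * t, t * t * t
    return np.array([
        -0.5 * t3 + t2 - 0.5 * t,
         1.5 * t3 - 2.5 * t2 + 1.0,
        -1.5 * t3 + 2.0 * t2 + 0.5 * t,
         0.5 * t3 - 0.5 * t2,
    ])

def sample_bicubic(texture, u, v):
    # texture: 2D numpy array; (u, v) in texel coordinates.
    h, w = texture.shape
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    wx = cubic_weights(u - x0)
    wy = cubic_weights(v - y0)
    result = 0.0
    for j in range(4):
        for i in range(4):
            x = min(max(x0 - 1 + i, 0), w - 1)  # clamp at the borders
            y = min(max(y0 - 1 + j, 0), h - 1)
            result += texture[y, x] * wx[i] * wy[j]
    return result
```

Compared to bilinear's 4 taps this takes 16, which is where the small additional cost comes from.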
> A problem with this approach is that it doesn't work with dynamic objects, which is one of the main arguments for using raytraced GI in general. Having static and dynamic objects look the same is a big part of the appeal, so that dynamic objects never appear to be "floating" due to mismatched lighting. The current probe-based approach suffers from broken rendering.
Technically speaking, it would still work - lightmaps being baked in realtime opens up the opportunity for dynamic objects to have lightmaps applied as well. Of course, whether the results would be acceptable remains to be seen...
> Also, the fact that you need UV2 may still be a problem for level designers, as many of them rely on CSG nodes, which don't generate UV2. Procedural level generation also makes this difficult to do (though not impossible). SDFGI or even VoxelGI would still fare better as a one-click solution here.
Agreed, the need for lightmap UVs is the biggest drawback of this method. My hope is that the community would be able to work together to overcome it through add-ons, as UVs can be accessed from GDScript which allows for easy experimentation with automatic UV packing without having to involve the core engine developers. Better UV add-ons for Godot would also benefit procedurally generated levels by allowing them to use traditional textures, and wouldn't be limited to lightmaps.
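To illustrate how little code a first experiment needs, a script-level packer could start from something as simple as shelf packing. This Python sketch is hypothetical (a real lightmap packer would also need padding between islands to avoid bleeding, and assumes no island is wider than the atlas):

```python
def shelf_pack(island_sizes, atlas_width):
    # island_sizes: list of (width, height) rectangles, e.g. lightmap UV islands.
    # Returns an (x, y) offset for each island plus the total atlas height used.
    # Sorting by height first keeps each shelf reasonably tight.
    order = sorted(range(len(island_sizes)), key=lambda i: -island_sizes[i][1])
    positions = [None] * len(island_sizes)
    shelf_x, shelf_y, shelf_h = 0, 0, 0
    for i in order:
        w, h = island_sizes[i]
        if shelf_x + w > atlas_width:   # island doesn't fit: open a new shelf
            shelf_y += shelf_h
            shelf_x, shelf_h = 0, 0
        positions[i] = (shelf_x, shelf_y)
        shelf_x += w
        shelf_h = max(shelf_h, h)
    return positions, shelf_y + shelf_h
```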
> demo doing this with rasterization
The result is a lightmapping solution where the lights are stationary, but most attributes are dynamic and can vary, such as color, intensity, etc.
Sounds awfully like Quake and Half-Life with their blinking lights.
> Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams

> Broadly speaking, this would be implemented as a double-buffered realtime lightmap baker that runs in parallel with the main rendering pipeline.
After further investigation, the most important component of this proposal (the double-buffered lightmap baking, which allows for infinite bounces) is already how the GPU lightmapper is implemented.
I suppose that means this proposal can be simplified to just being about integrating the lightmapping module into core, and having it run along the main rendering pipeline.
> That said, there are plans to implement raytracing extensions in the GPU lightmapper (as it'd be significantly faster), but it would remain optional. The compute-based path would remain available for GPUs that don't support raytracing.
Raytracing extensions could potentially even be separated out into another proposal, although I don't know if the current GPU lightmapper's acceleration structures are capable of updating in realtime (required for dynamic objects).
We were thinking more along the lines of something like GI 1.0, which has the advantage of scaling better in dynamic scenes. The trouble with dynamic lightmaps is that the memory budget explodes very quickly, so it needs to be combined with a way to stream textures very fast and always requires a huge pool of memory. Lighting techniques like GI 1.0 and Lumen can provide the same quality of lighting by taking advantage of existing data and only caching the minimal set of data needed.
I think the most important part of your proposal is that it highlights the difference in quality between raytraced and non-raytraced GI.
@clayjohn Note that this proposal is intended to be an upgrade to static lightmaps, but with properties that make it competitive with other GI solutions.
> The trouble with dynamic lightmaps is that the memory budget explodes very quickly, so it needs to be combined with a way to stream textures very fast and always requires a huge pool of memory.
As stated in the proposal, the resolution of dynamic lightmaps can be programmatically changed at runtime, so they would always have a lower memory footprint for the same fidelity compared to static lightmaps.[^technically]

[^technically]: Technically, due to the double buffering requirement, they can reach up to double the memory footprint of static lightmaps in very small scenes, but even then this can be compensated for with even the simplest algorithms that adjust resolution based on camera proximity.
Additionally, GI-only lightmaps have very low resolution requirements compared to standard lightmaps, and can be scaled down to have even lower compute and memory requirements than GI 1.0[^lowfreq] with acceptable results, due to their resilience to all forms of light leaking (see proposal and below).

[^lowfreq]: GI 1.0 necessitates wasting probes on large, geometrically simple surfaces in order to be able to handle scenes with complex geometry.
> Lighting techniques like GI 1.0 and Lumen can provide the same quality of lighting by taking advantage of existing data and only caching the minimal set of data needed.
As stated in the paper itself (section 6.1, Limitations), GI 1.0's approach to this results in it having difficulty receiving bounce lighting from objects outside the screen. This is because it effectively operates mostly in screen space, unlike lightmaps, which fully operate in world space.
On the other hand, dynamic lightmaps are capable of matching GI 1.0 in this manner by reducing the resolution of lightmaps attached to frustum/occlusion culled objects.[^simple] This way, they're able to keep the minimal amount of data needed to calculate the light contribution of offscreen surfaces, unlike GI 1.0 which completely tosses this data away.

[^simple]: I would personally stick to algorithms based purely on camera proximity for simplicity's sake.
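To make the resolution-scaling idea concrete, here is a hedged Python sketch of a purely proximity-based policy; the halving distance and the 8-bytes-per-texel format (e.g. RGBA16F) are assumptions for illustration:

```python
def lightmap_resolution(base_resolution, camera_distance,
                        halving_distance=10.0, min_resolution=4):
    # Hypothetical policy: halve the lightmap resolution for every
    # `halving_distance` units between the camera and the object, so distant
    # or culled objects keep only a very coarse (but still present) GI map.
    mip = int(camera_distance / halving_distance)
    return max(min_resolution, base_resolution >> mip)

# A 256x256 GI-only lightmap at 8 bytes/texel shrinks from 512 KiB up close
# to 8 KiB (32x32 texels) at 35 units away.
for distance in (0.0, 15.0, 35.0):
    res = lightmap_resolution(256, distance)
    print(distance, res, res * res * 8, "bytes")
```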
> As stated in the proposal, the resolution of dynamic lightmaps can be programmatically changed at runtime, so they would always have a lower memory footprint for the same fidelity compared to static lightmaps.

> Additionally, GI-only lightmaps have very low resolution requirements compared to standard lightmaps, and can be scaled down to have even lower compute and memory requirements than GI 1.0 with acceptable results due to their resilience to all forms of light leaking (see proposal and below).
I am very skeptical of this claim. It would be interesting to see if you know of an implementation of "dynamic lightmaps" that is this memory efficient. I don't know how you would balance light leaking and the large texel size that would be necessary to keep the memory footprint down. You can't have both: if you increase the texel size, you will get light leaking.
> As stated in the paper itself in 6.1 Limitations, GI 1.0's approach to this results in it having difficulty receiving bounce lighting from objects outside the screen. This is due to it effectively still operating in screen space, unlike lightmaps which operate in world space.
That section refers to reflectors outside of the screen, not lighting outside of the screen. I think lighting is still done in world space. It is definitely not a screen-space technique.
It would be really cool to have something like this even if it is for diffuse GI only. The closest thing I have seen is some experiments by one of Unity's engineers https://twitter.com/raroni86/status/1535160052369215490 but I'm sure ideas like this have been experimented with in many engines.
> > As stated in the proposal, the resolution of dynamic lightmaps can be programmatically changed at runtime, so they would always have a lower memory footprint for the same fidelity compared to static lightmaps. Additionally, GI-only lightmaps have very low resolution requirements compared to standard lightmaps, and can be scaled down to have even lower compute and memory requirements than GI 1.0 with acceptable results due to their resilience to all forms of light leaking (see proposal and below).

> I am very skeptical of this claim. It would be interesting to see if you know of an implementation of "dynamic lightmaps" that is this memory efficient. I don't know how you would balance light leaking and the large texel size that would be necessary to keep the memory footprint down. You can't have both: if you increase the texel size, you will get light leaking.
Sorry for being unclear, I was referring specifically to how lightmaps are still somewhat usable (but obviously with nowhere near the quality of GI 1.0) even in the face of extremely large texel sizes, as they're generally more resilient to a larger variety of scenarios that would cause other forms of global illumination to leak.[^cellsize]

[^cellsize]: For example, they're completely immune to light shining through walls even with excessively large texel sizes, unlike voxel-based methods such as VoxelGI and SDFGI, which would encounter this when using equally large cell sizes.
I think the ability to scale down very far with okay-ish results is very important, as that would mean greater applicability to low-end devices. I don't believe GI 1.0 can scale down as far before light leaking causes too many problems.
> > As stated in the paper itself in 6.1 Limitations, GI 1.0's approach to this results in it having difficulty receiving bounce lighting from objects outside the screen. This is due to it effectively still operating in screen space, unlike lightmaps which operate in world space.

> That section refers to reflectors outside of the screen, not lighting outside of the screen. I think lighting is still done in world space. It is definitely not a screen-space technique.
The paper directly mentions "This can be detrimental to the visual fidelity of interior scenes in particular, where bounced lighting tends to dominate", so by "contributing reflector" they are referring to all forms of bounce lighting.
From my understanding, the "World Cache" exists to bin the screen-space probes into world-space cells, so that probes in the same cell can have their lighting data averaged and accumulated more easily. This would mean a cell ceases to receive lighting updates once it's outside the frustum or hidden behind occluders. If the world cache does persist offscreen lighting data (rather than just discarding it), I'm skeptical that this would be more effective than dynamic lightmaps.
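To illustrate what I mean by binning, here is a rough Python sketch of averaging screen-space probes into world-space cells; the cell size and scalar radiance representation are my assumptions, not GI 1.0's actual data layout:

```python
from collections import defaultdict

CELL_SIZE = 1.0  # assumed world-space cell extent

def bin_probes(probes):
    # probes: list of ((x, y, z), radiance) pairs gathered in screen space.
    # Quantize each probe position to a world-space cell and average the
    # radiance of all probes that land in the same cell.
    cells = defaultdict(lambda: [0.0, 0])
    for (x, y, z), radiance in probes:
        key = (int(x // CELL_SIZE), int(y // CELL_SIZE), int(z // CELL_SIZE))
        cells[key][0] += radiance
        cells[key][1] += 1
    return {key: total / count for key, (total, count) in cells.items()}
```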
> The closest thing I have seen is some experiments by one of Unity's engineers
That looks great! He even has a demo for dynamic resolution adjustment.
The techniques he used are very sophisticated (especially the dynamic UV resizing and repacking), but I think Godot can benefit even with a very basic implementation.
@raroni Is this something you would be interested in playing around with if you have spare time? Godot has plenty of high quality example scenes that you can use to validate any GI experiments you may have thought of trying out :slightly_smiling_face:
See below for three such scenes, courtesy of @WickedInsignia
godotengine/godot#63374 godotengine/godot#74965 godotengine/godot#75440
@myaaaaaaaaa I'm glad you liked my prototype. While it isn't perfect, I definitely think something like that would be interesting to have in a game engine. Unfortunately, I do not currently have any extra time to pour into a Godot implementation/integration.
### Describe the problem or limitation you are having in your project
Currently, the standard way of getting realistic shadows and global illumination is by baking static lightmaps during development; see the examples below from a tutorial by Andrew Price.
This comes with a number of problems, namely the extra cost of storage, wasted artist time and associated productivity loss[^1], and inability to respond to dynamic lighting situations such as time-of-day systems.
### Describe the feature / enhancement and how it helps to overcome the problem or limitation
I propose the addition of dynamic lightmaps that bake in realtime using Vulkan raytracing. This would not only allow for fully dynamic lighting, but also result in lower memory usage yet higher quality compared to static lightmaps, as developers would now have the opportunity to programmatically change the resolution of lightmaps at runtime. In static (or mostly-static) lighting scenarios, dynamic lightmaps can surpass even Lumen in quality, and open up the opportunity for high-quality lighting in procedurally generated levels.
Additionally, by using the dynamic lightmaps from the previous frame as a light source while baking the lightmaps in the current frame, bounce lighting can be accumulated over time similar to how Lumen's GI works. This effectively results in infinite bounces, with the only drawback being that lighting updates are no longer instant, a drawback shared by Lumen's GI.[^2]
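As a minimal sketch of the accumulation step (assuming one scalar radiance value per texel), each frame's single-bounce bake can be blended over the previous frame's result, so bounce lighting builds up across frames:

```python
def accumulate(prev_lightmap, new_bake, blend=0.1):
    # prev_lightmap already holds the bounces accumulated from earlier frames;
    # new_bake was traced against it, so it carries one additional bounce.
    # Blending converges toward the infinite-bounce result while also
    # smoothing out per-frame raytracing noise.
    return [(1.0 - blend) * old + blend * new
            for old, new in zip(prev_lightmap, new_bake)]
```

The blend factor trades update latency against noise, which is exactly the delayed-lighting drawback mentioned above.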
It would be straightforward to provide an option of disabling direct lighting (shadows), turning this into purely a global illumination solution. This would allow for drastically lower lightmap resolutions due to the low-frequency nature of GI, and allow it to be combined with other direct lighting techniques that respond more quickly to lighting changes, such as shadow maps or a hypothetical raytraced shadow pass. (This is also how Lumen overcomes the delayed lighting limitation of its GI component.) Compared to SDFGI (godotengine/godot#39827), GI-only dynamic lightmaps would work regardless of distance, never leak light, and perform better[^3], but have the downside of requiring raytracing hardware[^4] and lightmap UVs.
I believe dynamic lightmaps are the most straightforward way to integrate raytracing into Godot, as well as being the most broadly applicable and immediately visible use case of raytracing that benefits the largest number of people. It can achieve Lumen-like quality while being able to scale down to the smallest integrated (raytracing-capable) GPUs[^5]. It can even indirectly benefit players who don't own raytracing-capable GPUs, as game developers who do have such GPUs would become more productive due to the instant feedback of dynamic lightmaps, and less time spent waiting for lightmaps to bake.
Finally, it lays the groundwork for other raytraced render passes (reflections, shadows, etc.), which, if they are ever implemented, would also benefit from being able to use dynamic lightmaps as a radiance cache, allowing rays to terminate with far fewer bounces than they normally would, resulting in massive speedups.
### Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
Broadly speaking, this would be implemented as a double-buffered realtime lightmap baker that runs in parallel with the main rendering pipeline.
Global settings applied to the whole scene/project:
Settings per dynamic lightmap:
Basic breakdown of a frame:
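As a toy end-to-end illustration of such a frame (all constants and the gather function are stand-ins, not a real baker), the double buffering looks like this in Python:

```python
import random

def trace_rays(prev_lightmap, texel_index, rays):
    # Stand-in for a raytraced gather: a real baker would trace rays from this
    # texel into the scene; here we just average a few random texels from the
    # previous frame's map and add emission, which is enough to show bounce
    # lighting accumulating across frames.
    emitted = 1.0 if texel_index == 0 else 0.0   # texel 0 acts as a light
    gathered = sum(random.choice(prev_lightmap) for _ in range(rays)) / rays
    return emitted + 0.5 * gathered              # 0.5 = assumed albedo

def bake_frame(front, back, rays_per_texel=8):
    # Read-only from the front buffer (last frame's lightmap), write to the
    # back buffer, then swap, so the renderer always samples a complete map.
    for i in range(len(back)):
        back[i] = trace_rays(front, i, rays_per_texel)
    return back, front                           # swapped for the next frame

front, back = [0.0] * 16, [0.0] * 16
for frame in range(32):
    front, back = bake_frame(front, back)        # one more bounce per frame
print(front)
```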
### If this enhancement will not be used often, can it be worked around with a few lines of script?
High quality lighting can still be achieved by baking standard lightmaps. Their drawbacks are stated above.
### Is there a reason why this should be core and not an add-on in the asset library?
Dynamic lightmaps can be made into an add-on, but raytracing APIs would need to be added to RenderingDevice first. Ideally, a zero-copy method of transferring data to the main rendering pipeline should also be added, but if this is not possible, any copying bottlenecks can be alleviated by only copying dynamic lightmaps every few frames for choppier (but not slower) lighting updates.
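The amortized-copy fallback is straightforward; a minimal sketch, assuming plain buffers and a hypothetical frame counter:

```python
COPY_INTERVAL = 4  # illustrative: push lightmaps to the renderer every 4th frame

def maybe_copy_to_renderer(frame_index, baked_lightmap, render_lightmap):
    # The bake itself still runs every frame; only the (potentially expensive)
    # copy into the main rendering pipeline is amortized, trading smoothness
    # of lighting updates for bandwidth rather than frame time.
    if frame_index % COPY_INTERVAL == 0:
        render_lightmap[:] = baked_lightmap
```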
If implemented as an add-on, I recommend simultaneously implementing raytraced render passes (reflections, shadows, etc.) in a different add-on, so that any issues with the RenderingDevice raytracing APIs can be ironed out more quickly. This would also allow for experimentation with alternative Lumen-like systems, in cases where creating lightmap UVs would be impossible or prohibitively difficult.
[^1]: Lightmap baking is infamous for being a time sink among level designers, analogous to rendering in VFX or compiling C++ in programming.

[^2]: In the video, you can see the GI takes a few frames to update whenever a light moves. Note that direct shadows update instantly, as they are rendered separately from GI.

[^3]: SDFGI has a limit to how far down it can scale before light leaking and limited distance make the lighting values completely incorrect, whereas lightmaps can go down to extremely low resolutions with the only artifact being pixelation.

[^4]: This may not be true, as Godot does have a lightmapper based on compute shaders; see godotengine/godot#38386.

[^5]: Dynamic lightmaps configured with fewer rays per frame would take longer to converge to being noise-free after lighting changes.