mrjustaguy opened 2 years ago
Not sure if this is entirely relevant, but: Larsson's Method - Exact Bounding Spheres by Iterative Octant Scan
It provides the tightest possible bounding sphere in the shortest time of any method I've been able to find.
If accuracy isn't critical, and anywhere from 5-23% of excess radius is acceptable, then Ritter's algorithm can be used instead, as it is substantially faster while also being just 2 fixed passes: Ritter's Method - Non-Minimal Bounding Spheres
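For reference, here's a minimal GDScript sketch of Ritter's two-pass construction (the function name and return layout are mine, not from the paper): pick an arbitrary point, find the farthest point from it, find the farthest point from that one, use those two as the initial diameter, then grow the sphere over a second pass.

```gdscript
# Minimal sketch of Ritter's two-pass bounding sphere.
# `points` is assumed to be a non-empty Array of Vector3.
func ritter_bounding_sphere(points: Array) -> Dictionary:
	# Pass 1: from an arbitrary point x, find the farthest point y,
	# then the farthest point z from y; (y, z) is the starting diameter.
	var x: Vector3 = points[0]
	var y: Vector3 = x
	for p in points:
		if p.distance_squared_to(x) > y.distance_squared_to(x):
			y = p
	var z: Vector3 = y
	for p in points:
		if p.distance_squared_to(y) > z.distance_squared_to(y):
			z = p
	var center := (y + z) * 0.5
	var radius := y.distance_to(z) * 0.5
	# Pass 2: grow the sphere just enough to cover any point still outside it.
	for p in points:
		var d := center.distance_to(p)
		if d > radius:
			var new_radius := (radius + d) * 0.5
			center += (p - center) * ((new_radius - radius) / d)
			radius = new_radius
	return {"center": center, "radius": radius}
```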
Also, @reduz should probably be mentioned, as he is the one working on the GI feature.
Iterative Octant Scan should be very performant according to the paper, especially considering that the bounding spheres would extremely rarely need to be updated, and only for deformable meshes. Even there, you can realistically keep the bounding spheres if each bone is treated as a separate mesh in the voxelization step: rotating a bone then doesn't need any recalculation of spheres, so only scaling (and probably translation) would need some form of recalculation. For scaling, that would be just scaling the radius; translation would possibly need revoxelization of the translated bone and any bones attached to it (translated bones need more investigation).
I mean, 80 ms for 14M triangles is nothing when you're doing 99.99% of this during import anyhow, and smaller, more accurate spheres would improve quality and/or performance, depending on how GI is calculated from them.
There don't seem to be fundamental flaws with SDFGI. What are the benefits of using a different system over optimizing SDFGI as planned after 4.0?
If we implemented de-gridified voxels/spheres, wouldn't the first version perform poorly too?
It'd probably perform poorly in the first version too, but it should still be more consistent in terms of performance, even when on the move (unlike SDFGI at the moment).
Also, the fundamental flaw that I see with SDFGI is the cascades. They're hard to work with, as each cascade will leak light at a different level, so building a level around them is difficult: you have to work hard to find where things are leaking and how to make the leaks less obvious. This isn't an issue for open environments, but it's a big issue for larger buildings and interiors, which will often be a part of such environments.
Maybe there's some way around that issue: an option to do some prebaking for specific objects that need a more consistent GI, so it could be worked around.
It's possible to use VoxelGI and SDFGI in the same scene (although it has bugs right now). Since VoxelGI performance mainly depends on screen coverage, it will not use much GPU time when you stand far away from a building covered by VoxelGI.
Describe the project you are working on
A massive world with GI.
Describe the problem or limitation you are having in your project
There are currently no really good global GI methods; all of the current GI methods have major drawbacks:

1) VoxelGI - requires baking, isn't global, and has so-so performance; it can have better quality than SDFGI in some limited scenarios.
2) SDFGI - has an inherent issue with cascades that makes GI start leaking at larger distances, plus poor performance when changing cascades due to having to re-voxelize; dynamic objects also cannot contribute to GI, because re-voxelizing them every frame would be too expensive. It's not a bad GI by current standards, and it even inspired this very broadly applicable solution.
3) BakedGI - requires baking, isn't global, dynamic objects don't contribute to GI, and it takes plenty of hard disk space.
Describe the feature / enhancement and how it helps to overcome the problem or limitation
GI similar to the other methods, but instead of using voxels as its base, it turns triangle meshes into groups of spheres (by this I mean free-floating voxels, not bound by a grid). Why spheres? Turning a mesh into a collection of spheres allows the mesh to be rotated, translated (and to some extent scaled) without having to go back to the drawing board the way one has to with voxels. This means objects would very rarely need to re-enter the "voxelization" step. Furthermore, because these spheres aren't bound to a grid system, meshes can be rotated without changing the GI's view of them: the spheres are part of the mesh, instead of the mesh being splatted into a grid. Another reason to go with spheres is that when "voxelizing" a mesh, the spheres of neighboring "voxels" will overlap, as each sphere has to contain its entire voxel box. This is useful, as it helps better define the volume as you transform it.
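To make that concrete, here's a minimal Godot 3-style GDScript sketch of the payoff; the `SphereSet` layout and `transform_sphere_set()` are hypothetical names I'm using for illustration, not an existing API:

```gdscript
# Hypothetical container: free-floating centers plus one shared radius per mesh (or bone).
class SphereSet:
	var centers: Array = []  # Array of Vector3, one per "free-floating voxel"
	var radius: float = 0.0  # shared by every sphere of this mesh/bone

# Rigid transforms only move the centers, and a uniform scale only scales the
# shared radius. Neither case has to go back to the voxelization step.
func transform_sphere_set(s, xform: Transform, uniform_scale: float = 1.0):
	var out = SphereSet.new()
	for c in s.centers:
		out.centers.append(xform.xform(c))  # rotation + translation, no revoxelization
	out.radius = s.radius * uniform_scale
	return out
```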
Describe how your proposal will work, with code, pseudo-code, mock-ups, and/or diagrams
Ray-Sphere Intersection Base -> https://answers.unity.com/questions/62644/distance-between-a-ray-and-a-point.html
On import, run a voxelization pass on each mesh that will use this GI. Turn those voxels into spheres, and check which triangles they've got inside of themselves. These spheres have only a position and a shared radius (shared only among other cells of the same mesh). Each bone of a dynamic mesh should be treated as if it were a separate static mesh. Dynamic objects will only need to be revoxelized at runtime if their bones are scaled or translated: rotations never need revoxelization, and in the case of a bone translation, only the translated bone and the ones directly attached to it need to be revoxelized. As most animated dynamic meshes just have their bones rotating around, and voxels are assigned to bones, most scenarios will be fine without any revoxelization.
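A rough sketch of that import step, reusing the hypothetical `SphereSet` from above; `triangle_intersects_box()` is an assumed helper (e.g. a separating-axis triangle-box test), not an existing Godot function:

```gdscript
# Rough import-time sketch: walk a grid over the mesh's AABB and emit one sphere per
# cell that contains geometry. The radius must enclose the whole cell, so neighboring
# spheres overlap by construction (half the cell's diagonal).
func voxelize_to_spheres(triangles: Array, bounds: AABB, cell_size: float):
	var s = SphereSet.new()
	s.radius = cell_size * sqrt(3.0) * 0.5  # sphere circumscribing the cell's box
	var cells := (bounds.size / cell_size).ceil()
	for ix in range(int(cells.x)):
		for iy in range(int(cells.y)):
			for iz in range(int(cells.z)):
				var cell_min := bounds.position + Vector3(ix, iy, iz) * cell_size
				var cell := AABB(cell_min, Vector3(cell_size, cell_size, cell_size))
				for tri in triangles:  # each tri: [Vector3, Vector3, Vector3]
					if triangle_intersects_box(tri, cell):  # assumed helper (e.g. SAT test)
						s.centers.append(cell.position + cell.size * 0.5)
						break
	return s
```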
In the voxelization step, if all triangles are only barely inside a given cell, that cell can be removed to improve both quality and performance (i.e. when the maximum distance between any two triangles is small, and the distance from the cell center to the closest triangle is large).
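One way to read that test as code, approximating triangles by their centroids; the two thresholds are made-up tuning values:

```gdscript
# Sketch of the suggested pruning test: a cell is removable when its triangles are
# clustered tightly together (small spread) yet all sit far from the cell's center,
# i.e. the geometry only clips a corner of the cell.
func cell_is_removable(cell_center: Vector3, tri_centroids: Array, max_spread: float, min_center_dist: float) -> bool:
	var nearest := INF
	var spread := 0.0
	for i in range(tri_centroids.size()):
		nearest = min(nearest, tri_centroids[i].distance_to(cell_center))
		for j in range(i + 1, tri_centroids.size()):
			spread = max(spread, tri_centroids[i].distance_to(tri_centroids[j]))
	return spread < max_spread and nearest > min_center_dist
```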
Now to shooting rays. This can probably be done in a similar fashion to how the current GI methods work. If not, here's what I'd do:

1) Do a sphere render pass to see which spheres are affecting which pixels.
2) All spheres visible in the render pass start shooting rays in the direction of the average normal of the pixels they cover, with normals taken from the normal buffer.
3) Check AABBs for intersection between the shot rays and objects.
4) Do ray-sphere intersections between the shot rays and the intersected objects' spheres.
5) Bounce a given number of times before doing a final ray cast at a light.
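Steps 2-5 for a single ray could look roughly like this on the CPU side; every helper here (`cast_against_sphere_sets()`, `sample_sky()`, `scatter_direction()`, `sample_direct_light()`) is an assumed name, and the real version would presumably live in a compute shader:

```gdscript
# Rough sketch of steps 2-5 for one ray: bounce a fixed number of times through the
# sphere sets, then finish with a cast toward a light. Every helper here is assumed.
func trace_one_ray(origin: Vector3, avg_normal: Vector3, max_bounces: int) -> Color:
	var dir := avg_normal  # step 2: start along the averaged normal from the buffer
	var throughput := Color(1, 1, 1)
	for bounce in range(max_bounces):
		# Steps 3-4: AABB cull first, then ray-sphere tests against surviving objects.
		var hit = cast_against_sphere_sets(origin, dir)
		if hit == null:
			return throughput * sample_sky(dir)  # assumed environment fallback
		throughput *= hit.albedo
		origin = hit.position
		dir = scatter_direction(hit.normal)  # assumed scatter, e.g. cosine-weighted
	# Step 5: after the last bounce, cast toward a light for the final contribution.
	return throughput * sample_direct_light(origin)
```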
The ray-sphere intersections should be done with GPU compute, but the CPU could also handle them, as there wouldn't be a need for too many rays; it would maybe be in the millions of rays per frame. (A Godot 3.4 GDScript implementation of ray-sphere intersections got 100k intersections done in about a frame on a single thread of an i3-10105F at 3.7 GHz.) Not to mention, bodies could actually cache their ray results against any voxels and lights that didn't move between frames (both the caster/bouncer and the resulting hit).
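For reference, the kind of single-threaded GDScript test mentioned above boils down to a point-to-ray distance check, along the lines of the linked Unity answer; a minimal sketch:

```gdscript
# Minimal ray-sphere overlap test via point-to-ray distance: project the sphere
# center onto the ray and compare the closest point's distance against the radius.
# `dir` is assumed to be normalized.
func ray_intersects_sphere(origin: Vector3, dir: Vector3, center: Vector3, radius: float) -> bool:
	var to_center := center - origin
	var t := to_center.dot(dir)  # distance along the ray to the closest point
	if t < 0.0:
		# Sphere center is behind the ray origin; hit only if the origin is inside it.
		return to_center.length_squared() <= radius * radius
	var closest := origin + dir * t
	return closest.distance_squared_to(center) <= radius * radius
```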
If this enhancement will not be used often, can it be worked around with a few lines of script?
No.
Is there a reason why this should be core and not an add-on in the asset library?
It improves GI, and rendering is core at the moment.