darksylinc opened this issue 3 years ago
Quack quack
(voxelized version of the ogre2_demo example, which is the first step to get VCT lighting right)
Ok, it's definitely working; you can see the reflections (without VCT, all you can see is the skybox; not even the floor shows up in the reflections):
And the GI contribution (see the green tint on the yellow duck):
Three things I'm noticing:
`{ 1, 1, 1 }` is broken. This is definitely an Ogre bug
(*) This reminded me that VCT was designed mainly for "sun light" GI contribution, but it supports all light types. To support HDR it would have to use `RGBA16_FLOAT` targets for the voxel lighting, which is 2x as expensive (in VRAM cost) as the default `RGBA8_UNORM` target.

We don't even support switching to `RGBA16_FLOAT` because that's a lot of memory and there was no need (it's easy to support, just a few more lines of code). But if you care about simulation accuracy and you have a monster GPU with 16-24GB of VRAM, I guess you have the luxury of not caring.
hmm that sounds a little too memory intensive. I would hold off on supporting HDR for now until there is a need for it.
The 12 lights were added to the ogre2 demo for testing a while back. Now I think about it, I wonder if we should just have a simple Cornell box environment to demo this feature :)
> hmm that sounds a little too memory intensive. I would hold off on supporting HDR for now until there is a need for it.
OK, without an explanation that sounds a little overblown. It's all about user settings and what they consider good enough.
We keep 4 voxels around (though we can reduce it to 1 if lights are never updated):
So that's 12 bytes per voxel.
If we go to HDR, we'd need:
So that's between 16 & 24 bytes per voxel.
For a (medium-sized?) scene the user may want to use 1024x1024x32. Maybe less would be enough, maybe more. It depends on what the simulation expects.

So: 1024x1024x32 x (16|24) bytes = between 512MB & 768MB of VRAM, plus mipmaps (and more if we turn on anisotropic; I can't remember how that was calculated). Mipmaps add a 1.15x overhead, so you actually need 588-883MB.
If the user thinks this is not enough and needs 4096x4096x64, then the cost grows dramatically to between 16GB and 24GB (on a 24GB GPU: congrats, you've run out of memory; it can't be done).
But a compromise of 2048x2048x64 costs between 4GB and 6GB (+1.15x for mipmaps).
So how much memory you'll need depends on how much the user thinks their scene requires. If they're already thankful to have GI at all (i.e. vs having nothing) and consider 128x128x32 enough, then of course no monster GPU is needed.
Note that it is perfectly valid to use one setting (e.g. 128x128x32) for real time preview on your laptop, and then go overboard with a different setting when you need accurate results in the simulation on a powerful workstation.
I see, thanks for the explanation. So sounds like we should add APIs (and SDF params) to allow users to specify voxel size and HDR. As for the default values, we probably should not go overboard so that it works on less powerful machines (128x128x32?) and have HDR off.
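As a rough illustration of what such an SDF param could look like (entirely hypothetical; the actual tag names and structure would be decided during implementation):

```xml
<!-- Hypothetical sketch; not an existing SDFormat element -->
<global_illumination type="vct">
  <resolution>128 128 32</resolution>  <!-- conservative default -->
  <hdr>false</hdr>                     <!-- HDR off by default -->
</global_illumination>
```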
Note this ticket is for tracking my work; I'm the one implementing it.
The work can currently be found in the matias-global-illumination branch.
Desired behavior
Obtain realtime Global Illumination when using the Ogre2 engine
Ogre2 provides various GI methods, out of which VCT (Voxel Cone Tracing) is the most reliable and accurate one for simulations.
The class hierarchy is the following:
The reason for having `GlobalIlluminationBase` is that there may be multiple solutions implemented in the future, including raytracing.

What about render engines that have natural GI (e.g. OptiX)?
It is unclear. Technically speaking, `GlobalIlluminationBase` is an object where users can specify GI parameters. Users can create more than one if they wish to use different parameters, but only one can be active at the same time.

Right now `GlobalIlluminationBase` only contains simple properties such as `BounceCount`. Render engines with natural GI, like ray and path tracers, could probably move their bounce count settings to this class.

Since it is likely that there will be a raytracing implementation in the future taking advantage of `VK_KHR_ray_tracing_pipeline`, it would be smart to centralize raytracing-specific options in `GlobalIlluminationBase` (or in derived implementations if they are too specific); that includes raytracer/pathtracer engines like OptiX. This would provide users a familiar interface that can handle multiple engines, and avoid the situation where certain settings live in a completely different place when changing engines.
Remaining tasks