Closed trs79 closed 6 years ago
Hey there,
I already answered your question in the comments of my blog, but I'll also post it here just in case:
Unfortunately there is no code in BakingLab for handling dynamic meshes: it only shows how to bake 2D lightmaps for suitable static geometry. Each texel of the lightmap bakes the incoming lighting for a hemisphere surrounding the surface normal of the mesh, which means the information is only useful when applied to that particular surface.

For dynamic meshes, you typically want to bake a set of spherical probes (containing lighting from all directions, not just a hemisphere) throughout the entire area that your meshes might move through. A simple way to do this is to place all of the probes within cells of a 3D grid that surrounds your scene, which makes it simple to figure out which probe(s) to sample from, and also makes it simple to interpolate between neighboring probes. It's also possible to use less regular representations that are sparser in areas with less lighting/geometry complexity, or that rely on hand-placed probes. This can save you memory and/or baking time, but can make probe lookups and interpolation more costly.

Either way, baking the probe itself is very similar to the code in the sample for baking a hemisphere, with the major difference being that you need to shoot rays in all directions and adjust your Monte Carlo weights accordingly. You also have to choose a basis that can represent lighting on a sphere, for instance spherical harmonics or a set of SGs oriented about the unit sphere. Part 5 of the article talks a bit about working with these representations, and the tradeoffs involved.
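The uniform-grid approach above can be sketched roughly like this. This is just an illustration (the function and parameter names are hypothetical, not from BakingLab): map a world-space position to the 8 probes at the corners of its containing cell, and compute the trilinear weights you'd use to blend them.

```python
import math

def probe_lookup(pos, grid_origin, cell_size, grid_dims):
    """Return the 8 surrounding probes of a uniform 3D probe grid for a
    world-space position, as (index, trilinear_weight) pairs."""
    # Continuous grid-space coordinates of the query point
    g = [(p - o) / cell_size for p, o in zip(pos, grid_origin)]
    # Lower corner of the containing cell, clamped so the +1 neighbor exists
    base = [min(max(math.floor(c), 0), d - 2) for c, d in zip(g, grid_dims)]
    # Fractional position inside the cell -> trilinear interpolation factors
    t = [min(max(c - b, 0.0), 1.0) for c, b in zip(g, base)]
    probes = []
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                index = (base[0] + dx, base[1] + dy, base[2] + dz)
                weight = ((t[0] if dx else 1.0 - t[0]) *
                          (t[1] if dy else 1.0 - t[1]) *
                          (t[2] if dz else 1.0 - t[2]))
                probes.append((index, weight))
    return probes
```

The weights always sum to 1, which is what makes it cheap to blend probe data as a mesh moves through the grid.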
Thank you for the response and information! Please forgive my ignorance, but could you explain how the monte carlo weights would need to be changed when shooting the rays in all directions?
Sure! So, let's take the case of integrating your lighting onto spherical harmonics. For a lightmap texel, you would want to integrate over the hemisphere. For Monte Carlo integration, you could use sample directions that are uniformly distributed about the hemisphere. With this distribution, the probability of each sample direction is equal to 1 / SurfaceAreaOfHemisphere, which is 1 / (2 * Pi). This means that the final Monte Carlo weighting factor is (2 * Pi) / NumSamples. For baking a probe, you would instead want to uniformly shoot rays about the full sphere. With that sampling scheme, your probability is 1 / SurfaceAreaOfSphere, so the final sampling weight is (4 * Pi) / NumSamples.
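A minimal sketch of the two estimators described above (function names are made up for illustration): uniform directions over the full sphere weighted by (4 * Pi) / NumSamples, versus uniform directions over the upper hemisphere weighted by (2 * Pi) / NumSamples.

```python
import math
import random

def mc_integrate_sphere(f, num_samples, seed=0):
    """Uniform sampling over the full sphere: pdf = 1/(4*pi),
    so each sample gets a weight of (4*pi)/num_samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Uniform direction on the unit sphere: z uniform in [-1, 1]
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        total += f((r * math.cos(phi), r * math.sin(phi), z))
    return total * (4.0 * math.pi) / num_samples

def mc_integrate_hemisphere(f, num_samples, seed=0):
    """Uniform sampling over the upper hemisphere: pdf = 1/(2*pi),
    so each sample gets a weight of (2*pi)/num_samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        # Same as above, but z restricted to [0, 1]
        z = rng.uniform(0.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        total += f((r * math.cos(phi), r * math.sin(phi), z))
    return total * (2.0 * math.pi) / num_samples
```

A quick sanity check: integrating the constant function 1 returns the surface area of the domain, 4π for the sphere and 2π for the hemisphere, exactly as the weighting factors predict.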
Interesting, that makes sense. Thanks for the links to the source code, that makes it less overwhelming when jumping in and trying to figure things out. One more question if you don't mind: I'm assuming that the shader to light a dynamic mesh would be essentially the same as the one shading the static geometry? The main difference being that the dynamic mesh shader would sample a texture containing spherical Gaussians representing an entire sphere instead of a hemisphere? Thanks!
You're welcome! And yes: the shader is largely the same for computing reflected lighting from lightmaps or probes, at least once you've actually sampled/interpolated the probe data. For SG's and SH the code for evaluating the reflected lighting is exactly the same, since those representations don't really "care" whether you're storing lighting on the hemisphere or full sphere. In our engine at work we generally bake 5 SG's oriented about the hemisphere for our lightmaps, and 9 SG's oriented about the sphere for our probe grids. This gives roughly equal quality between the static and dynamic meshes.
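To make the "the representation doesn't care" point concrete, here is a small sketch of evaluating a set of SG lobes in a given direction, using the standard SG form amplitude * exp(sharpness * (dot(axis, dir) - 1)). The function names are hypothetical; the same evaluation code works whether the lobe axes cover a hemisphere (lightmap case) or the full sphere (probe case).

```python
import math

def eval_sg(direction, axis, sharpness, amplitude):
    """Evaluate one spherical Gaussian lobe in a unit direction:
    amplitude * exp(sharpness * (dot(direction, axis) - 1))."""
    cos_angle = sum(a * b for a, b in zip(direction, axis))
    return amplitude * math.exp(sharpness * (cos_angle - 1.0))

def eval_sg_set(direction, lobes):
    """Sum a set of (axis, sharpness, amplitude) lobes. Identical code
    whether the lobes are oriented about a hemisphere or the sphere."""
    return sum(eval_sg(direction, ax, sh, amp) for ax, sh, amp in lobes)
```

Each lobe peaks at its amplitude when the evaluation direction lines up with its axis, and falls off smoothly with the angle between them.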
Feel free to ask any other questions if you run into trouble, I'd be happy to help.
Thanks again! I'll close this issue for now but post any further questions here should they arise.
Hi,
I've really enjoyed reading your blog posts about spherical Gaussians and learning more about radiosity lightmaps. Is there any sample code in BakingLab that would illustrate how to light dynamic meshes? Thanks!