asny / three-d

2D/3D renderer - makes it simple to draw stuff across platforms (including web)
MIT License

Cascaded shadow map support #461

Open swiftcoder opened 5 months ago

swiftcoder commented 5 months ago

I'm adding cascaded shadow maps to my own project, and I'd be open to contributing an implementation back to three-d, but I'm not spotting a great way to plug new shadow map backends into the existing lighting system.

Since cascaded shadow maps require changes to both shadow map generation and sampling, we'd need a way to provide replacements for both Light.shader_source() and Light.generate_shadow_map(). The most straightforward option is to add a ShadowMapper trait (name needs workshopping), but that would require Light to become Light<S> where S: ShadowMapper, which has knock-on effects up and down the API. Alternatively, Light could contain a shadow_mapper: Box<dyn ShadowMapper>. Or we could make the caller explicitly pass an Option<&dyn ShadowMapper> into the two functions that require it...
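For concreteness, the trait option and the two ownership variants might look roughly like this. All names and types here are hypothetical stand-ins for illustration, not the real three-d API:

```rust
// Minimal stand-in; the real three-d Camera is much richer.
pub struct Camera;

/// Hypothetical ShadowMapper trait covering the two functions that
/// need replacing: shader generation and shadow map rendering.
pub trait ShadowMapper {
    /// GLSL snippet that samples this shadow map in the lighting shader.
    fn shader_source(&self, light_id: u32) -> String;
    /// Render scene depth from the light's point of view.
    fn generate_shadow_map(&mut self, camera: &Camera);
}

/// A plain single-map implementation, for illustration only.
pub struct SimpleShadowMap;

impl ShadowMapper for SimpleShadowMap {
    fn shader_source(&self, light_id: u32) -> String {
        // Postfix with the light ID to keep names unique across lights.
        format!("float shadow{}(vec3 p) {{ return 1.0; }}", light_id)
    }
    fn generate_shadow_map(&mut self, _camera: &Camera) {}
}

/// Variant 1: generic parameter. Monomorphized, but Light<S> ripples
/// through every API that stores or passes lights.
pub struct LightGeneric<S: ShadowMapper> {
    pub shadow_mapper: S,
}

/// Variant 2: trait object. Keeps Light non-generic at the cost of
/// dynamic dispatch and a heap allocation.
pub struct LightDyn {
    pub shadow_mapper: Box<dyn ShadowMapper>,
}
```

The trait-object variant is the least invasive for downstream users, since existing code that stores lights in homogeneous collections keeps working unchanged.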

Do you have a preferred approach here? Is adding fancier shadow algorithms a good fit with the goals of three-d?

asny commented 2 months ago

Sorry for the late reply 😬 Hope you're still up for the task, it sounds like a really nice addition! It's definitely aligned with the goals of three-d.

I think being able to swap out the shadow algorithm of each light is a bit overkill. If someone really wants to implement their own shadow algorithm and doesn't want to contribute to three-d, it's possible to implement a new Light type.

I think the best approach to add cascaded shadow map support in three-d is changing the existing DirectionalLight to support both normal and cascaded shadow maps internally. The only addition to the API, as far as I can tell, is a function generate_cascaded_shadow_map that people can choose to call instead of generate_shadow_map. generate_cascaded_shadow_map should take the camera as well as any additional necessary parameters. Internally, the lighting calculations would differ depending on whether generate_shadow_map or generate_cascaded_shadow_map was called last. Does that make sense?
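Sketching that proposal's shape: the signatures and the internal state enum below are guesses for discussion, with stand-in types rather than the real three-d API:

```rust
// Stand-ins; the real three-d types differ.
pub struct Camera;
pub trait Geometry {}

/// Hypothetical internal state tracking which generate_* call ran last,
/// so shader_source() can emit the matching sampling code.
enum ShadowState {
    None,
    Single,
    Cascaded { split_count: usize },
}

pub struct DirectionalLight {
    shadow: ShadowState,
}

impl DirectionalLight {
    pub fn new() -> Self {
        Self { shadow: ShadowState::None }
    }

    /// Existing single-map path (parameters elided/invented here).
    pub fn generate_shadow_map(&mut self, _texture_size: u32, _geometries: &[&dyn Geometry]) {
        self.shadow = ShadowState::Single;
    }

    /// Proposed cascaded path: additionally needs the viewing camera so
    /// the view frustum can be split into cascades.
    pub fn generate_cascaded_shadow_map(
        &mut self,
        _camera: &Camera,
        split_count: usize,
        _texture_size: u32,
        _geometries: &[&dyn Geometry],
    ) {
        self.shadow = ShadowState::Cascaded { split_count };
    }

    /// Lighting calculations branch on this internally.
    pub fn is_cascaded(&self) -> bool {
        matches!(self.shadow, ShadowState::Cascaded { .. })
    }
}
```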

BonsaiDen commented 1 month ago

The way I did Cascaded Shadow Maps (using Variance Shadow Maps) was to:

  1. Implement the Light trait for a new struct
  2. Provide a custom shader that calculates the light via the existing calculate_light() and multiplies it with the shadow sample from the cascade. This is a bit "ugly", since there is no global way to inject additional fragment source that is shared across lights, so every function needs to be postfixed with the light ID to avoid name collisions; the rest is relatively straightforward, though
  3. Have a method on the new DirectionCSMLight to compute the cascades
  4. Here we run into a few more troubles:
    • We need to compute the individual cascade frustums and view projections. That's not too hard, but we cannot create a Camera directly from a view matrix and the custom projection that is needed, so we have to inject an additional uniform for the cascadeViewProjection
    • Of course, this will not be picked up by the Geometries that we render into the cascades, so we stick those into a wrapper impl of the trait and do a lot of dirty string-replace magic to get things working:
struct CascadeDepthGeometry<'a, T: Geometry> {
    inner: &'a T,
    cascade_matrix: Mat4
}

impl<'a, T: Geometry> Geometry for CascadeDepthGeometry<'a, T> {
    fn id(&self, required_attributes: FragmentAttributes) -> u16 {
        self.inner.id(required_attributes) | 1 << 14
    }

    fn aabb(&self) -> three_d::AxisAlignedBoundingBox {
        self.inner.aabb()
    }

    fn draw(
        &self,
        camera: &Camera,
        program: &Program,
        render_states: RenderStates,
        attributes: FragmentAttributes,
    ) {
        program.use_uniform("cascadeMatrix", self.cascade_matrix);
        self.inner.draw(camera, program, render_states, attributes);
    }

    fn vertex_shader_source(&self, required_attributes: FragmentAttributes) -> String {
        // Emulate GL_DEPTH_CLAMP
        let source = self.inner.vertex_shader_source(required_attributes);
        let mut patched = String::with_capacity(source.len());
        patched.push_str("uniform mat4 cascadeMatrix;\n");
        patched.push_str("out float cascadeDepth;\n");
        for l in source.lines() {
            if l.contains("uniform") {
                patched.push_str(l);
            } else {
                patched.push_str(&l.replace("viewProjection", "cascadeMatrix"));
            }
            if l.contains("gl_Position = ") || l.contains("gl_Position=") {
                patched.push('\n');
                patched.push_str("cascadeDepth = gl_Position.z / gl_Position.w;\n");
                patched.push_str("cascadeDepth = (gl_DepthRange.diff * cascadeDepth + gl_DepthRange.near + gl_DepthRange.far) * 0.5;\n");
                patched.push_str("gl_Position.z = 0.0;\n");
                // FIXME Need to make sure viewProjection is still used, otherwise Program::use_uniform will panic!
                patched.push_str("cascadeDepth *= viewProjection[3][3];\n");
            } else {
                patched.push('\n');
            }
        }
        log::info!("Vertex shader #{} patched for CSM", self.id(FragmentAttributes::NONE));
        patched
    }
}
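
As for computing the cascades in step 3, the split distances are commonly derived with the practical split scheme (a blend of uniform and logarithmic splits). This is a generic sketch of that scheme, not code from the implementation above:

```rust
/// Practical split scheme: blend of uniform and logarithmic splits
/// between the near and far planes. `lambda` in [0, 1] controls the
/// blend (0 = uniform, 1 = logarithmic). Returns the far distance of
/// each cascade.
fn cascade_splits(near: f32, far: f32, count: usize, lambda: f32) -> Vec<f32> {
    (1..=count)
        .map(|i| {
            let p = i as f32 / count as f32;
            let log = near * (far / near).powf(p);
            let uniform = near + (far - near) * p;
            lambda * log + (1.0 - lambda) * uniform
        })
        .collect()
}
```

Each cascade's orthographic projection is then fitted around the sub-frustum between consecutive split distances, which yields the cascadeViewProjection uniform mentioned above.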

In the end we needed to do this anyway though, as we have to emulate GL_DEPTH_CLAMP (not available in WebGL) to avoid artifacts during rendering (such as large objects getting cut off by the frustum near plane):

in float cascadeDepth;

layout (location = 0) out vec2 outColor;

void main() {
    // Emulate GL_DEPTH_CLAMP
    float depth = clamp(cascadeDepth, 0.0, 1.0);
    gl_FragDepth = depth;

    // bias second moment based on viewing angle
    float dx = dFdx(depth);
    float dy = dFdy(depth);
    vec2 moments = vec2(depth, depth * depth);
    moments.y += 0.25 * (dx * dx + dy * dy);

    // Optimization for 2 moments proposed in
    // http://momentsingraphics.de/Media/I3D2015/MomentShadowMapping.pdf
    moments.y = 4.0 * (moments.x - moments.y);
    outColor = moments;
}
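
On the sampling side, variance shadow maps compare the receiver depth against the stored moments via the Chebyshev upper bound. In scalar form (generic VSM math for the unoptimised two-moment layout, i.e. before the moments.y transform applied in the shader above):

```rust
/// Chebyshev upper bound used by variance shadow mapping: an upper
/// bound on the probability that the receiver at `depth` is lit,
/// given filtered moments (E[x], E[x^2]) read from the shadow map.
/// `min_variance` clamps the variance to reduce numeric artifacts.
fn vsm_visibility(moments: (f32, f32), depth: f32, min_variance: f32) -> f32 {
    let (m1, m2) = moments;
    if depth <= m1 {
        // Receiver is in front of the average occluder: fully lit.
        return 1.0;
    }
    let variance = (m2 - m1 * m1).max(min_variance);
    let d = depth - m1;
    variance / (variance + d * d)
}
```

It is exactly this filtering of the moments (rather than of a binary depth test) that makes mip maps and hardware filtering pay off for VSM, which is relevant to the mip-level issue below.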

However, when writing custom shaders that end up replacing certain parts of their inner fragment / vertex shader, the panic behaviour of Program::use_uniform() often requires workarounds to ensure that the uniforms are still used in the shader source but don't affect anything.

One additional complication I've run into is that Variance Shadow Mapping and other techniques benefit greatly from mip maps; however, there is currently no way to limit the number of mip levels when creating textures, and updating all 10+ levels for multiple cascades per frame is rather slow (which is why my implementation only uses two levels).
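
For context on the "10+ levels": a full mip chain for a w×h texture has floor(log2(max(w, h))) + 1 levels, so any shadow map of 1024² or larger already exceeds ten:

```rust
/// Number of levels in a full mip chain for a texture of the given
/// size, i.e. floor(log2(max(width, height))) + 1.
fn full_mip_count(width: u32, height: u32) -> u32 {
    32 - width.max(height).leading_zeros()
}
```

A hypothetical `max_mip_levels` parameter at texture creation would let VSM use just the few coarse levels it benefits from.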

TL;DR: a list of small things that would be nice to have: