
🧙 MAGE v0
https://matt77hias.github.io/MAGE-v0-Doc

Artifacts in omni light shadow mapping #37

Closed (matt77hias closed this issue 7 years ago)

matt77hias commented 7 years ago

Spotlight

My spotlight's intensity is cut off at a distance of 3 and at an angle of $\pi/4$ radians (umbra angle). The corresponding light camera has a near plane at a distance of 0.1, a far plane at a distance of 3, an aspect ratio of 1 and a vertical/horizontal FOV of $\pi/2$. The spotlight is positioned somewhere above the tree and faces downward to the floor.

The shadow map of my spotlight has a resolution of 512x512. I use the following DXGI formats:

My shadow factor is calculated in HLSL as:

/**
 Calculates the shadow factor.

 @pre           @a shadow_maps must contain a shadow map at index @a index.
 @param[in]     pcf_sampler
                The PCF sampler comparison state.
 @param[in]     shadow_maps
                The array of shadow maps.
 @param[in]     index
                The index into the array of shadow maps.
 @param[in]     p_proj
                The hit position in light projection space coordinates.
 @return        The shadow factor.
 */
float ShadowFactor(SamplerComparisonState pcf_sampler,
    Texture2DArray< float > shadow_maps, uint index,
    float4 p_proj) {

    const float  inv_w  = 1.0f / p_proj.w;
    const float3 p_ndc  = p_proj.xyz * inv_w;
    const float3 loc    = float3(NDCtoUV(p_ndc.xy), index);

    return shadow_maps.SampleCmpLevelZero(pcf_sampler, loc, p_ndc.z);
}
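
NDCtoUV is not shown here; it maps NDC coordinates to UV coordinates. A C++ sketch of the conventional mapping (an assumption about its exact form, not the actual implementation):

#include <DirectXMath.h>

// Presumed behavior of NDCtoUV (assumption): map NDC x and y in [-1, 1] to
// texture coordinates u and v in [0, 1], flipping the v axis since NDC y
// points up whereas texture v points down.
inline DirectX::XMFLOAT2 NDCtoUV(const DirectX::XMFLOAT2& p_ndc) noexcept {
    return DirectX::XMFLOAT2( 0.5f * p_ndc.x + 0.5f,
                             -0.5f * p_ndc.y + 0.5f);
}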

Here, the hit position in light projection space coordinates is calculated as follows from the hit position in camera view space coordinates (shading space):

const float4 p_proj = mul(float4(p, 1.0f), light.cview_to_lprojection);

I obtain the following images after visualizing the shadow factor (not the light contribution of the spotlight; some linear fog is visible as well):

[two screenshots visualizing the spotlight's shadow factor]

This seems to result in the correct behavior (see the shadows of the leaves, pillars and curtains). The bright border on the first image starts at the position of the spotlight and is due to the near plane distance of 0.1. This border will completely vanish after multiplying the shadow factor with the spotlight's contribution.

Omni light

My omni light's intensity is cut off at a distance of 3. The corresponding light camera has a near plane at a distance of 0.1, a far plane at a distance of 3, an aspect ratio of 1 and a vertical/horizontal FOV of $\pi/2$. This means that one of the six light cameras (after applying some rotations), is completely identical to the spotlight camera above. I double checked this in the code:

Frame 1: world_to_lprojection (spotlight):

1.16532457,  0.000000000, 0.000000000, 0.000000000
0.000000000, -2.18211127e-07, -1.03448260, -0.999999881
0.000000000, 1.83048737, -1.23319950e-07, -1.19209290e-07
0.000000000, 4.36422255e-07, 1.96551692, 1.99999976

world_to_lprojection (fourth omni light camera):

1.16532457, 0.000000000, 0.000000000, 0.000000000
0.000000000, -2.18211127e-07, -1.03448260, -0.999999881
0.000000000, 1.83048737, -1.23319950e-07, -1.19209290e-07
0.000000000, 4.36422255e-07, 1.96551692, 1.99999976

world_to_lprojection represents the world-to-light-projection transformation matrix, composed as world → camera view → world → light view → light projection. I go through camera view space first to mimic the same multiplications between my depth passes and shading passes in order to reduce z-fighting (although this has not resulted in visual differences so far).
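
For illustration, this composition can be sketched in DirectXMath as follows (hypothetical parameter names; not the actual code):

#include <DirectXMath.h>
using namespace DirectX;

// Sketch of the composition described above (row-vector convention):
// world space -> camera view space -> world space -> light view space
// -> light projection space.
inline XMMATRIX XM_CALLCONV ComputeWorldToLightProjection(
    FXMMATRIX world_to_cview, CXMMATRIX cview_to_world,
    CXMMATRIX world_to_lview, CXMMATRIX lview_to_lprojection) noexcept {

    return world_to_cview * cview_to_world * world_to_lview * lview_to_lprojection;
}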

So apart from the fact that a spotlight corresponds to one DSV (and is part of one SRV to a Texture2DArray containing all spotlight shadow maps) and an omni light corresponds to six DSVs (and is part of one SRV to a TextureCubeArray containing all omni light shadow cube maps), the shadow map generation is essentially the same for both types of lights.
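
For illustration, such a shadow cube map array resource can be described roughly as follows (a sketch with hypothetical names and an assumed 16-bit format; not the actual creation code):

#include <d3d11.h>

// Hypothetical setup for nb_omni_lights omni lights: six slices per light,
// addressable per-slice through DSVs and as one TextureCubeArray through a
// single SRV (the names, count and 16-bit format are assumptions).
inline void DescribeOmniShadowCubeMaps(UINT nb_omni_lights,
                                       D3D11_TEXTURE2D_DESC& tex_desc,
                                       D3D11_SHADER_RESOURCE_VIEW_DESC& srv_desc) noexcept {
    tex_desc = {};
    tex_desc.Width            = 512u;
    tex_desc.Height           = 512u;
    tex_desc.MipLevels        = 1u;
    tex_desc.ArraySize        = 6u * nb_omni_lights;
    tex_desc.Format           = DXGI_FORMAT_R16_TYPELESS;    // assumed 16-bit depth
    tex_desc.SampleDesc.Count = 1u;
    tex_desc.Usage            = D3D11_USAGE_DEFAULT;
    tex_desc.BindFlags        = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
    tex_desc.MiscFlags        = D3D11_RESOURCE_MISC_TEXTURECUBE; // needed for cube(-array) SRVs

    srv_desc = {};
    srv_desc.Format                     = DXGI_FORMAT_R16_UNORM;
    srv_desc.ViewDimension              = D3D11_SRV_DIMENSION_TEXTURECUBEARRAY;
    srv_desc.TextureCubeArray.MipLevels = 1u;
    srv_desc.TextureCubeArray.NumCubes  = nb_omni_lights;
}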

The shadow map of my omni light has a resolution of 512x512. I use the following DXGI formats:

My shadow factor is calculated in HLSL as:

/**
 Calculates the shadow factor.

 @pre           @a shadow_maps must contain a shadow cube map at index @a index.
 @param[in]     pcf_sampler
                The PCF sampler comparison state.
 @param[in]     shadow_maps
                The array of shadow cube maps.
 @param[in]     index
                The index into the array of shadow cube maps.
 @param[in]     p_view
                The hit position in light view space coordinates.
 @param[in]     projection_values
                The projection values [view_projection22, view_projection32].
 @return        The shadow factor.
 */
float ShadowFactor(SamplerComparisonState pcf_sampler, 
    TextureCubeArray< float > shadow_maps, uint index,
    float3 p_view, float2 projection_values) {

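    // The component of p_view with the largest magnitude selects the cube face
    // that gets sampled; that same magnitude is the hit position's z-coordinate
    // in the view space of that face's light camera, and is therefore what must
    // be converted to NDC and compared against the stored shadow map depth.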
    const float p_view_z = Max(abs(p_view));
    const float p_ndc_z  = ViewZtoNDCZ(p_view_z, projection_values);
    const float4 loc     = float4(p_view, index);

    return shadow_maps.SampleCmpLevelZero(pcf_sampler, loc, p_ndc_z);
}

Here, the hit position in light view space coordinates is calculated as follows from the hit position in camera view space coordinates (shading space):

const float3 p_view = mul(float4(p, 1.0f), light.cview_to_lview).xyz;

The conversion of the z coordinate to NDC space is done as follows:

/**
 Converts the given (linear) view z-coordinate to the (non-linear) NDC 
 z-coordinate.

 @param[in]     p_view_z
                The (linear) view z-coordinate.
 @param[in]     projection_values
                The projection values [view_projection22, view_projection32].
 @return        The (non-linear) NDC z-coordinate.
 */
float ViewZtoNDCZ(float p_view_z, float2 projection_values) {
    return projection_values.x + projection_values.y / p_view_z;
}

with the following C++ counterpart:

/**
 Returns the projection values from the given projection matrix to construct 
 the NDC position z-coordinate from the view position z-coordinate.

 @param[in]     projection_matrix
                The projection matrix.
 @return        The projection values from the given projection matrix to 
                construct the NDC position z-coordinate from the view position 
                z-coordinate.
 */
inline const XMVECTOR XM_CALLCONV GetNDCZConstructionValues(
    FXMMATRIX projection_matrix) noexcept {

    //        [ _  0  0  0 ]
    // p_view [ 0  _  0  0 ] = [_, _, p_view.z * X + Y, p_view.z] = p_proj
    //        [ 0  0  X  1 ]
    //        [ 0  0  Y  0 ]
    //
    // p_proj / p_proj.w     = [_, _, X + Y/p_view.z, 1] = p_ndc
    //
    // Construction of p_ndc.z from p_view and projection values
    // p_ndc.z = X + Y/p_view.z

    const F32 x = XMVectorGetZ(projection_matrix.r[2]);
    const F32 y = XMVectorGetZ(projection_matrix.r[3]);

    return XMVectorSet(x, y, 0.0f, 0.0f);
}
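
As a sanity check (illustrative only, not part of the original code), plugging in the light cameras described above:

#include <DirectXMath.h>
using namespace DirectX;

// Light camera of this scene: FOVy = pi/2, aspect ratio = 1, near = 0.1, far = 3.
const XMMATRIX lview_to_lprojection
    = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, 0.1f, 3.0f);
// X = far / (far - near)          =  3.0 / 2.9 ~  1.03448
// Y = -near * far / (far - near)  = -0.3 / 2.9 ~ -0.10345
const XMVECTOR projection_values = GetNDCZConstructionValues(lview_to_lprojection);
// A point on the far plane  (p_view_z = 3.0): p_ndc_z = X + Y / 3.0 ~ 1.0
// A point on the near plane (p_view_z = 0.1): p_ndc_z = X + Y / 0.1 ~ 0.0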

I obtain the following images after visualizing the shadow factor (not the light contribution of the omni light; some linear fog is visible as well):

[two screenshots visualizing the omni light's shadow factor]

This is clearly wrong. All six shadow maps appear stretched. Without the stretching, the curtain's shadow would still be in the close vicinity of the curtain itself, and the leaves' shadow would be much smaller (equal to the leaves' shadow for the spotlight). Furthermore, the circular shadow at the end of the pillar's shadow is associated with the buckets in front of the curtains.

Any ideas about what goes, or could go, wrong?

For clarity, the following two images show the omni light's shadow factor restricted to the cube map face corresponding to the spotlight (the face for which -p_view.y is the dominant component), obtained by adding:

if (p_view_z != -p_view.y) {
    return 0.0f;
}

[two screenshots of the omni light's shadow factor restricted to that cube map face]

The lit area seems somewhat larger than for the spotlight (though it is unclear by how much).

SampleCmpLevelZero

Another strange observation is that using a depth value of 1.1 for my spotlight's PCF filtering (SampleCmpLevelZero) always results in a value of 0, as expected:

[screenshot: spotlight shadow factor using a comparison value of 1.1]

whereas for my omni light this is not the case:

[screenshot: omni light shadow factor using a comparison value of 1.1]

Even if I use a value of 100000.0, I still notice lit areas?

Depth Biasing

I use DepthBias = 100 (to prevent shadow acne; note that I use 16-bit depth maps), SlopeScaledDepthBias = 0.0f and DepthBiasClamp = 0.0f for all my rasterizer states.
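
In terms of the D3D11 rasterizer description, this corresponds to something like the following sketch (the remaining field values shown here are assumptions, not necessarily the actual ones):

#include <d3d11.h>

// Sketch of the depth-bias settings above (fill/cull modes are assumed defaults).
inline D3D11_RASTERIZER_DESC CreateShadowRasterizerDesc() noexcept {
    D3D11_RASTERIZER_DESC desc = {};
    desc.FillMode             = D3D11_FILL_SOLID;
    desc.CullMode             = D3D11_CULL_BACK;
    desc.DepthClipEnable      = TRUE;
    // For a 16-bit UNORM depth buffer, a DepthBias of 100 shifts depths by 100 * 2^-16.
    desc.DepthBias            = 100;
    desc.SlopeScaledDepthBias = 0.0f;
    desc.DepthBiasClamp       = 0.0f;
    return desc;
}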

PCF filtering

I use the following sampler comparison state for PCF filtering (my shadow maps have no mipmaps) for both spotlights and omni lights.

D3D11_SAMPLER_DESC desc = {};
desc.Filter         = D3D11_FILTER_COMPARISON_MIN_MAG_LINEAR_MIP_POINT;
desc.AddressU       = D3D11_TEXTURE_ADDRESS_BORDER;
desc.AddressV       = D3D11_TEXTURE_ADDRESS_BORDER;
desc.AddressW       = D3D11_TEXTURE_ADDRESS_BORDER;
desc.MaxAnisotropy  = (device->GetFeatureLevel() > D3D_FEATURE_LEVEL_9_1) 
                                ? D3D11_MAX_MAXANISOTROPY : 2;
desc.MaxLOD         = D3D11_FLOAT32_MAX;
desc.ComparisonFunc = D3D11_COMPARISON_LESS_EQUAL;
matt77hias commented 7 years ago

I finally found the cause of the problem. There appears to be a problem with the shadow maps of both the omni light and the spotlight. While debugging, I noticed by accident that the 00 and 11 entries of the light-view-to-light-projection (lview_to_lprojection) matrix were not equal. Due to the aspect ratio of 1, both entries must be equal. Furthermore, due to the FOV of $\pi/2$, these entries must be equal to 1 (assuming no floating-point precision issues).

The constructor of my PerspectiveCamera class expects its arguments in the order (width, height, FOVy, near, far) or (aspect ratio, FOVy, near, far), and correctly handles the creation of the view-to-projection transformation matrix by redirecting to XMMatrixPerspectiveFovLH, which expects the order (FOVy, aspect ratio, near, far). Since I did not want to create a corresponding PerspectiveCamera for each light per frame, I bypassed that class by directly providing a member method in my OmniLight and SpotLight classes that redirects to XMMatrixPerspectiveFovLH using the wrong order of arguments. This explains why the above images are wrong.
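
Concretely (a sketch, not the actual code; the wrong call below reproduces the non-equal 00 and 11 entries seen in the matrix dumps above):

#include <DirectXMath.h>
using namespace DirectX;

// Correct order: XMMatrixPerspectiveFovLH(FOVy, aspect ratio, near, far).
// With FOVy = pi/2 and an aspect ratio of 1, the 00 and 11 entries both equal 1.
const XMMATRIX correct
    = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, 0.1f, 3.0f);

// Wrong order (FOVy and aspect ratio presumably swapped): the 11 entry becomes
// cot(0.5) ~ 1.83049 and the 00 entry cot(0.5) / (pi/2) ~ 1.16532, matching the
// world_to_lprojection dumps above.
const XMMATRIX wrong
    = XMMatrixPerspectiveFovLH(1.0f, XM_PIDIV2, 0.1f, 3.0f);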

The generated shadow map of the spotlight is still equal to one of the six shadow maps of the omni light, since both use the same (but wrong) world_to_lprojection transformation matrix (which contains the wrong lview_to_lprojection transformation matrix). The HLSL code for the spotlight uses the cview_to_lprojection transformation matrix, which includes lview_to_lprojection, and thus performs the "right" mapping. The HLSL code for the omni light uses the cview_to_lview transformation matrix, which does not include lview_to_lprojection, and thus does not take the non-uniform scaling of the 00 and 11 entries into account, resulting in some non-linear stretching and thus the wrong mapping. This explains why the above images differ between the omni light and the spotlight.

After fixing the bug:

Omni Light (6 faces) shadow factor:

[two screenshots]

Omni Light (1 face) shadow factor:

[two screenshots]

Spotlight shadow factor:

[two screenshots]

SampleCmpLevelZero

When using this method on a signed-normalized or unsigned-normalized format (which is the case here), the comparison value is automatically clamped between 0.0 and 1.0. The two perspective cameras of the omni light looking along the positive and negative x-axis (along the middle hallway) have "rays" that are not occluded by the scene's geometry, due to the far plane at a distance of 3. Therefore, these pixels (displaying the shadow factor) will always be lit for comparison values larger than or equal to 1.0f (since all such values are clamped to 1.0f).