o3de / sig-content


Proposed RFC Feature: Realistic Water System #117

Open SuperABC opened 1 year ago

SuperABC commented 1 year ago

Summary:

A water system is an important part of a graphics engine. This proposal outlines several components in Atom that let users build realistic water areas and view their dynamic effects.

In general, a water system consists of water surface rendering and underwater post processing. In order to obtain highly realistic visuals, most rendering features should be physically based. In this proposal, we provide two implementations for water surface rendering and post processing: one uses hybrid ray tracing and the other uses screen space blending. Users can choose either of them when building their own water areas.

What is the relevance of this feature?

The water system is built on top of the Atom renderer. Without a water system, users can build only a rather simple water surface by designing a dynamic normal texture and tiling it on a horizontal mesh. However, water constructed this way looks somewhat fake, and it is troublesome to add effects like underwater post processing. Our water system instead provides a new material type for designing water surfaces and a set of components for underwater post processing.

The difference between the water system and most other Atom-based features is that water is completely dynamic. The moving waves and shining highlights greatly improve the visual realism of the virtual scene.

Feature design description:

The water system can be separated into the following smaller features.

Water wave designing

In practice, we found two types of water wave that behave like real water.

The first one is the Gerstner wave, a variant of the sine wave. By combining several Gerstner waves with different parameters, we can get a water surface with a high sense of reality; the more base Gerstner waves we use, the better the water surface looks. The position and normal calculation for a base Gerstner wave is shown below.

// Evaluates one Gerstner wave at a horizontal position and returns the displacement;
// the surface normal is written to the out parameter.
float3 GerstnerWave(float2 pos, 
    float amplitude, float sharpness, float lambda, float speed, float direction,
    out float3 normal) {

    float3 output = float3(0.0f, 0.0f, 0.0f);

    // Wave direction, angular frequency, and phase at the current time.
    float2 dir = normalize(float2(cos(direction), sin(direction)));
    float omega = (2 * PI) / lambda;
    float phase = m_time * speed;
    float t = phase + omega * dot(dir, pos);

    // Horizontal displacement sharpens the crests; vertical displacement is the wave height.
    output.xy = dir * sharpness / omega * cos(t);
    output.z = amplitude * sin(t);
    normal.xy = -dir * omega * amplitude * cos(t);
    normal.z = 1 - sharpness * sin(t);
    normal = normalize(normal);

    return output;
}

With this calculation, we have two approaches to combining the base waves. The first is to have the artist assign all the base wave parameters. When we want a simple water surface with little variation, it is easy to build one this way. However, if we want a more complex water surface, assigning every wave parameter becomes troublesome. We therefore provide a second, cascade-based way to build the water surface. For each cascade level, the water surface is built from 8 base Gerstner waves, and the only thing artists need to adjust is the wave scalar of each level. Specifically, the cascade side length varies from 1/32 meter to 64 meters, 12 levels in total. A sketch of how the cascades could be accumulated is shown below.
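
As an illustration only, the cascaded surface could be accumulated by summing the 8 base waves of every level, each scaled by the artist-controlled level scalar. The WaveParam struct, m_waveParams, and m_cascadeScalar below are hypothetical names, not the final interface.

// Hypothetical sketch: accumulate 12 cascade levels of 8 Gerstner waves each.
float3 CascadedGerstner(float2 pos, out float3 normal) {
    float3 displacement = float3(0.0f, 0.0f, 0.0f);
    float3 normalSum = float3(0.0f, 0.0f, 0.0f);

    for (int level = 0; level < 12; level++) {
        // Side length doubles per level: 1/32 m at level 0 up to 64 m at level 11.
        float sideLength = (1.0f / 32.0f) * exp2(float(level));
        for (int i = 0; i < 8; i++) {
            WaveParam p = m_waveParams[level * 8 + i];
            float3 n;
            displacement += m_cascadeScalar[level] *
                GerstnerWave(pos, p.amplitude, p.sharpness, sideLength * p.lambdaScale,
                             p.speed, p.direction, n);
            normalSum += n;
        }
    }
    // Averaging the per-wave normals is an approximation that works well in practice.
    normal = normalize(normalSum);
    return displacement;
}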

The other wave type is rotation iteration. This type of wave performs much better when rendering waves with small wavelengths. Its algorithm is shown below.

// Evaluates one exponential sine wave and its derivative along the given direction.
float2 WaveDx(float2 position, float2 direction, float speed, float frequency) {
    // Snap the frequency so the wave tiles with an integer number of periods.
    int count = max(direction.x, direction.y) * frequency / (2 * PI);
    frequency = count * (2 * PI) / max(direction.x, direction.y);
    // Wrap the phase into [0, 2*PI) to avoid precision loss over time.
    float phase = m_time * speed;
    phase -= (int)(phase / (2 * PI)) * 2 * PI;
    float x = dot(direction, position) * frequency + phase;
    float wave = exp(sin(x) - 1.0);
    float dx = wave * cos(x);
    return float2(wave, -dx);
}

// Accumulates several rotated wave octaves; 'phase' acts as the base frequency here,
// and both the frequency and the speed grow slightly each iteration.
float RotIter(float2 position, int iterations, float phase, float speed) {
    float iter = 0.0;
    float weight = 1.0, w = 0.0, ws = 0.0;
    for (int i = 0; i < iterations; i++) {
        // Pseudo-random sample direction derived from the running angle.
        float2 p = float2(sin(iter), cos(iter));
        float2 res = WaveDx(position, p, speed, phase);
        // Drag the sample position along the derivative to sharpen crests.
        position += p * res.y * weight * 0.25;
        w += res.x * weight;
        iter += 4.0;
        ws += weight;
        weight = weight * 0.75;
        phase *= 1.18;
        speed *= 1.07;
    }
    float result = w / ws - 0.5f;
    return result;
}

However, this approach cannot add large waves to the water surface. To compensate, we still allow users to add cascaded Gerstner waves on top of it, as sketched below.
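
A minimal sketch of such a combination, reusing the hypothetical CascadedGerstner helper above; the detail amplitude and iteration count are illustrative values, not tuned parameters.

// Hypothetical combination: large swell from cascaded Gerstner waves plus
// small-wavelength detail from rotation iteration.
float3 CombinedWave(float2 pos, out float3 normal) {
    float3 displacement = CascadedGerstner(pos, normal);
    float detail = RotIter(pos, 12, 6.0f, 2.0f);
    displacement.z += 0.1f * detail;
    return displacement;
}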

Color designing

Water color, including the water surface color and the underwater color, has a great influence on visual realism.

For the water surface color, we need a color gradient to express translucency. The view direction and the light direction both influence the final color of the water surface, so in this water system we separate them and calculate each component. The input parameter color is the base water color, trans is the color of light that goes into and out of the water and then enters the camera, and gaze is the scattered light integrated along the current view direction.

float3 WaterColor(float3 color, float3 trans, float3 gaze, float3 pos, float3 norm) {
    // Blend toward the view-dependent scattering color.
    float cosView = dot(viewDir, norm);
    float3 deltaGaze = gaze - color;
    color += deltaGaze * cosView;

    // Blend toward the transmitted color when looking roughly toward the light.
    float3 halfDir = normalize(lightDir + norm * 0.2f);
    float cosLight = dot(viewDir, -halfDir);
    float3 deltaTrans = trans - color;
    color += deltaTrans * cosLight;

    return color;
}

For the underwater color, we need to consider volume rendering. However, real-time performance requires us to simplify the traditional volume rendering method. In this water system, the underwater color is determined by the dot product of the view direction and the light direction.

float3 WaterBelowColor(float3 color, float3 view, float3 light, float3 scalar) {
    color *= underwaterLighting(color, scalar);

    // Remap view.light to [0, 1] and use it to darken the color away from the light.
    float vdl = 0.5 + 0.5 * dot(view, light);
    color *= smoothstep(0.0f, 1.0f, 0.5f + 0.5f * vdl);
    return color;
}

Thus we have color gradients both on the water surface and underwater.

Direct lighting

In O3DE, we have three types of light: directional lights, punctual lights, and area lights. All three types generate highlights on the water surface because of ray reflection.

Similar to other materials like StandardPBR, we calculate the specular reflection of each light using the corresponding interface. However, there are some differences between the water material and standard materials: light may also enter the water and generate refraction highlights, which need to be calculated as well.

// Camera above the water surface: shade with the outward-facing normal.
if(dot(viewDir, normal) < 0) {
    for(light : directionalLights)ApplyDirectionalLights(light);
    for(light : punctualLights)ApplyPunctualLights(light);
    for(light : areaLights)ApplyAreaLights(light);

    // Lights on the other side of the surface contribute refraction highlights.
    for(light : directionalLights)ApplyDirectionalLights(refract(light, normal));
    for(light : punctualLights)ApplyPunctualLights(refract(light, normal));
    for(light : areaLights)ApplyAreaLights(refract(light, normal));
}
// Camera below the water surface: flip the normal and repeat the same process.
else {
    normal = -normal;

    for(light : directionalLights)ApplyDirectionalLights(light);
    for(light : punctualLights)ApplyPunctualLights(light);
    for(light : areaLights)ApplyAreaLights(light);

    for(light : directionalLights)ApplyDirectionalLights(refract(light, normal));
    for(light : punctualLights)ApplyPunctualLights(refract(light, normal));
    for(light : areaLights)ApplyAreaLights(refract(light, normal));
}

With this calculation, we can see not only the reflections of lights on the same side as the camera, but also those on the other side.

Contact foam

There are three types of foam in the real world: contact foam, wave tip foam, and interaction foam. In this water system, we implemented contact foam based on a foam texture.

In practice, it is implemented with a three-layer noise texture. The R, G, and B channels of the texture store three layers with different noise densities. When rendering the foam, the three layers are blended based on the water depth at the current position. To get the water depth, we use the difference between the depth of the whole scene and the depth of the scene without water. This may lead to some artifacts when rendering objects with complex geometry, but it performs well in most situations. The usage of the foam texture is shown below.

result = 
    foamBlendR(foamTexture.r, depth) +
    foamBlendG(foamTexture.g, depth) + 
    foamBlendB(foamTexture.b, depth);
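
One possible shape for those per-channel blend functions, assuming a simple depth threshold per layer; the thresholds and the smoothing below are illustrative guesses, not values from the proposal.

// Hypothetical per-channel blend: each noise layer fades out as the water gets deeper.
float foamBlend(float noise, float depth, float threshold) {
    float fade = 1.0f - smoothstep(0.0f, threshold, depth);
    return noise * fade;
}

// Each channel uses a different depth threshold to match its noise density (illustrative values).
float foamBlendR(float noise, float depth) { return foamBlend(noise, depth, 0.5f); }
float foamBlendG(float noise, float depth) { return foamBlend(noise, depth, 1.0f); }
float foamBlendB(float noise, float depth) { return foamBlend(noise, depth, 2.0f); }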

Dynamic normal texture

Just as a simple water surface can be made by assigning textures to a horizontal mesh, our water system also allows artists to tile textures on the dynamic water waves in the same way.

In practice, because the water is a dynamic entity, its texture should not be static. To make a static texture move randomly, we apply a 4-way chaos offset to the input texture.

// Samples the map four times with offsets scrolling in opposite directions and averages them.
float ChaosMap(Texture2D map, SamplerState samp, float2 uv, float speed) {
    float value = 0.0f;

    value += map.Sample(samp, uv + float2(m_time * speed, 0)).r;
    value += map.Sample(samp, uv + float2(0, m_time * speed)).r;
    value += map.Sample(samp, uv + float2(-m_time * speed, 0)).r;
    value += map.Sample(samp, uv + float2(0, -m_time * speed)).r;

    return value / 4.0f;
}

Approximate subsurface scattering

Water is a kind of transparent material, and we need to take its subsurface scattering into account.

When we look toward the sunlight, the rendered water should be a little brighter. To approximate this effect, we use the formula below.

// Shift the half vector toward the distorted normal, then raise the view alignment to a power.
float3 H = normalize(lightDir + normal * distortion);
float I = pow(saturate(dot(viewDir, -H)), power) * scalar;
result = color * I;

However, it is not physically based. Physically, the light enters the water, is scattered, and then exits toward the camera or eye, so the effect varies with the direction of the sunlight. This feature is most evident when the amplitude of the water wave is large, because the normal then changes greatly.

Dynamic mesh cascading

Our water system provides two ways for users to organize the water surface mesh. The first is to use the Mesh component. The advantage of this method is obvious: it needs no extra components and costs no extra time. However, in most situations we view the water surface from different positions, and the disadvantage of a static mesh is that its density does not change as the camera moves. Usually we want a denser mesh nearby and a sparser mesh in the distance. To achieve this, dynamic mesh cascading is used.

Dynamic mesh cascading means that the water surface mesh may change at runtime to keep the mesh denser near the camera. The smallest cascade rectangle has a side length of 1/4 meter, while the largest corresponds to 16384 meters. In total, there are 16 levels of cascading.

Using mesh cascading costs a little CPU time to rearrange the mesh vertices. However, it saves a lot of rendering time compared with a mesh of uniform vertex density when the water surface covers a very large area. A sketch of how the cascade level can be chosen from the camera distance is given below.
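
As a rough illustration only, the cascade level for a patch could be derived from its distance to the camera; the selection rule below is an assumption, not the actual WaterLodManager logic, and only the 1/4 m cell size and the 16-level count come from the description above.

// Hypothetical level selection: each level doubles the cell size, starting at 1/4 m.
#include <algorithm>
#include <cmath>

int SelectCascadeLevel(float distanceToCamera)
{
    const float smallestSide = 0.25f;
    const int levelCount = 16;
    float level = std::log2(std::max(distanceToCamera, smallestSide) / smallestSide);
    return std::clamp(static_cast<int>(level), 0, levelCount - 1);
}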

Hybrid ray tracing

One way to do reflection and refraction is to use hybrid ray tracing.

Water reflection and refraction can both be regarded as completely specular, so 1 spp is enough and there is no performance problem or Monte Carlo noise problem. We trace a ray from the world position of the rendered pixel along the reflection or refraction direction. The reflection result is simply the tracing result, while the refraction result needs to be blended with the water color. After tracing and obtaining the reflection and refraction results, we use the Fresnel formula to blend them.

reflectColor = trace(scene, reflectDir);
refractColor = trace(scene, refractDir);
refractColor = waterBlend(refractColor, waterColor, depth);
result = fresnelBlend(reflectColor, refractColor, cosTheta);

Because only two tracing operations are needed per water surface pixel, performance is not affected much. One possible form of the Fresnel blend is sketched below.
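
For reference, a minimal sketch of the Fresnel blend using Schlick's approximation; the fresnelBlend name matches the pseudocode above, while the base reflectance value is an assumption for an air-water interface.

// Schlick approximation of the Fresnel term, then a simple blend of the two traced results.
float3 fresnelBlend(float3 reflectColor, float3 refractColor, float cosTheta) {
    const float F0 = 0.02f;
    float fresnel = F0 + (1.0f - F0) * pow(1.0f - saturate(cosTheta), 5.0f);
    return lerp(refractColor, reflectColor, fresnel);
}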

In addition to refraction and reflection, this method also needs to do underwater post processing, which is introduced later.

The disadvantage of hybrid ray tracing is that it can only run on devices that support ray tracing; mobile devices, for example, probably cannot run ray tracing at all.

Screen space blending

As an alternative to hybrid ray tracing, we provide another component that uses screen space color blending for post processing.

Blending means the final result is composed of the forward rendering result and the water color. Clearly it is impossible to do real refraction this way, so to make the water look more real we add a screen space disturbance based on the corresponding wave normal. The calculation of the disturbance offsets is shown below.

float3 Disturb(float3 norm, float index) {
    // Project the refracted position through the wave normal into screen space.
    float3 refractDir = normalize(refract(viewDir, norm, index));
    float3 shiftPos = worldPos + depth * refractDir;
    float4 shiftViewPos = mul(MVPMatrix, float4(shiftPos, 1.0f));
    shiftViewPos /= shiftViewPos.w;
    float2 shiftUv = (shiftViewPos.xy / 2 + 0.5f) * dimensions;

    // Do the same for a flat water surface as the reference.
    float3 refractDirStd = normalize(refract(viewDir, float3(0, 0, 1), index));
    float3 shiftPosStd = worldPos + depth * refractDirStd;
    float4 shiftViewPosStd = mul(MVPMatrix, float4(shiftPosStd, 1.0f));
    shiftViewPosStd /= shiftViewPosStd.w;
    float2 shiftUvStd = (shiftViewPosStd.xy / 2 + 0.5f) * dimensions;

    // Offset the sample position by the difference between the two projections.
    float2 sampleIndex = targetPixel + (shiftUv - shiftUvStd);
    return forwardColor[sampleIndex].rgb;
}

This algorithm calculates the screen space position difference between water treated as a horizontal plane and water with dynamic waves; the disturbance offset equals this difference. Although we do not render real refraction, we still get dynamic movement that makes the water bottom look real.

Underwater post processing

When the camera is under the water surface, we add post processing to the render result of the forward pass.

The first visual effect is water color blending based on view depth. Borrowing from volume rendering, we treat the water body as a uniformly scattering medium, so the amount of water color to blend grows exponentially with view depth. A sketch of this blend is shown below.
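
A minimal sketch of such a depth-based blend in the spirit of the Beer-Lambert law; the function name and the absorption coefficient are illustrative assumptions.

// The farther the view ray travels through water, the more the scene color is replaced by water color.
float3 DepthBlend(float3 sceneColor, float3 waterColor, float viewDepth) {
    const float absorption = 0.3f;  // illustrative absorption per meter
    float transmittance = exp(-absorption * viewDepth);
    return lerp(waterColor, sceneColor, transmittance);
}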

The second visual effect is the turbidity of the water: areas far away from the camera under the water cannot be seen clearly. To reproduce this, we apply a depth-based Gaussian blur to underwater pixels, for example as sketched below.
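
For illustration only, the blur radius could simply grow with the underwater view depth; the constants, the m_invOutputSize input, and the single-direction kernel below are assumptions (a real implementation would likely separate the blur into horizontal and vertical passes).

// Hypothetical depth-based blur: the Gaussian radius grows with the underwater view depth.
float3 UnderwaterBlur(Texture2D sceneColor, SamplerState samp, float2 uv, float viewDepth) {
    float radius = min(viewDepth * 0.5f, 8.0f);

    float3 sum = float3(0.0f, 0.0f, 0.0f);
    float weightSum = 0.0f;
    for (int i = -4; i <= 4; i++) {
        float weight = exp(-(i * i) / 8.0f);  // simple Gaussian falloff
        float2 offset = float2(i, 0) * radius * m_invOutputSize;
        sum += weight * sceneColor.Sample(samp, uv + offset).rgb;
        weightSum += weight;
    }
    return sum / weightSum;
}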

Surface bloom

When sunlight shines on the water surface, we can see highlights sparkling from time to time. In theory, this is a kind of bloom effect.

The highlight is caused by the specular reflection of lights. However, the effect generated by the existing Bloom component in O3DE is not appropriate here, because the sparkles should look like stars. In our water system, the sparkle effect uses the highlight as input and applies a directional Gaussian blur in six directions, so the highlight looks like a hexagram.

float3 bloom() {
    float3 color = float3(0.0f, 0.0f, 0.0f);

    // Blur the highlight along six directions to form a hexagram-shaped sparkle.
    for(int i = 0; i < 6; i++) {
        float2 direction = SampleDirection[i];

        float3 sum = float3(0.0f, 0.0f, 0.0f);
        float weight = 0.0f;
        for(int j = 0; j < 16; j++) {
            float3 sampleColor = m_inputColor[targetPixel + j * direction].rgb;
            sum += distanceWeight(j) * sampleColor;
            weight += distanceWeight(j);
        }
        color += sum / weight;
    }
    return color;
}

For better performance, the directional blur is separated into two steps. The first step reads the input texture and samples along the given direction every 4 or more pixels; the result may look like dashed lines. The second step then applies a directional blur with a radius of 4 or more pixels, turning the dashed lines into a continuous gradient. In this way we can get blur effects up to 16 pixels wide with only 8 sample operations, as sketched below.
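
A rough sketch of this two-step separation; the pass split and the m_sparseColor intermediate texture are assumptions, and only the 4-pixel stride and the 16-pixel reach follow the description above.

// Step 1: sparse gather, one sample every 4 pixels along the direction (4 taps cover 16 pixels).
float3 SparseBlur(float2 direction) {
    float3 sum = float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < 4; j++) {
        sum += m_inputColor[targetPixel + (j * 4) * direction].rgb;
    }
    return sum / 4.0f;
}

// Step 2: dense 4-pixel blur of the step-1 result, turning the dashed pattern into a smooth gradient.
float3 FillBlur(float2 direction) {
    float3 sum = float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < 4; j++) {
        sum += m_sparseColor[targetPixel + j * direction].rgb;
    }
    return sum / 4.0f;
}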

Technical design description:

The parameters used in the forward pass are integrated into a new material type called the water material type. These parameters can be separated into five groups: general settings, wave shape management, direct lighting properties, subsurface scattering properties, and foam properties.

The water forward pass reads these inputs and generates the diffuse, specular, and other textures just as the original forward pass does. The difference is that the water forward pass outputs two additional textures, which store the water highlights and the water gradient color. A sketch of such an output layout is shown below.
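
Purely as an illustration of that layout, the pixel shader output could look like the struct below; the struct name, target indices, and the list of existing targets are assumptions, not the actual pass definition.

// Hypothetical output of the water forward pass: the usual forward targets plus two extra render targets.
struct WaterForwardOutput {
    float4 m_diffuseColor   : SV_Target0;
    float4 m_specularColor  : SV_Target1;
    float4 m_normal         : SV_Target2;
    float4 m_waterHighlight : SV_Target3;  // extra: water highlights
    float4 m_waterGradient  : SV_Target4;  // extra: water gradient color
};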

Render pipeline design

The water render pipeline is placed in the opaque parent pass. The reason is that the water rendering performs its own blending and does not need the O3DE unified transparent blending logic.

The pass organization of water is shown in the following JSON-like structure.

"OpaqueParentPass": [
    "Origin forward passes",
    "GI passes",
    "Reflection passes",
    "AO passes",
    "Water wave pass",
    "Water forward pass",
    "Water effect parent pass" : [
        "Water caustic pass",
        "Water bloom pass",
        "Water foam pass",
        "Water post process pass",
    ]
]

Mesh cascading component design

Mesh cascading is designed as an independent Gem; its main class is shown below. Its job is to update the mesh according to the camera position every tick.

To implement this feature, we designed a class called WaterLodManager, which calculates the positions and normals of each LOD mesh.

class WaterLodManager {
public:
    WaterLodManager() {}
    ~WaterLodManager() {}

    // Fills the initial vertex and index buffers for all cascade levels.
    void GetInitBuffer(AZStd::vector<uint32_t>& indices,
                       AZStd::vector<float>& positions, AZStd::vector<float>& normals,
                       AZStd::vector<float>& tangents, AZStd::vector<float>& bitangents, AZStd::vector<float>& uvs);
    // Recenters the cascades around the camera and updates positions and UVs.
    void UpdateLod(AZ::Vector3 cam, AZ::Vector3 scale, AZ::Vector3 trans,
                   AZStd::vector<float>& positions, AZStd::vector<float>& uvs);
    // Rebuilds the index buffer for the cascade levels in [lowerbound, upperbound].
    void UpdateIndices(int lowerbound, int upperbound, AZStd::vector<uint32_t>& indices);

    AZStd::array<WaterLodNode, 16> lodNodes;
};
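
A minimal sketch of how the Gem might drive this class each tick; the component name, member fields, and the two helper functions are assumptions for illustration.

// Hypothetical per-tick update: recenter the cascades on the camera, then refresh the mesh buffers.
void WaterCascadeComponent::OnTick(float /*deltaTime*/, AZ::ScriptTimePoint /*time*/)
{
    AZ::Vector3 cameraPos = GetCurrentCameraPosition();  // assumed helper

    m_lodManager.UpdateLod(cameraPos, m_scale, m_translation, m_positions, m_uvs);
    m_lodManager.UpdateIndices(0, 15, m_indices);

    UploadToMesh(m_positions, m_uvs, m_indices);  // assumed helper that updates the render mesh
}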

Post processing component design

Water post processing is implemented in two ways: one uses hybrid ray tracing, and the other uses simple color blending. The parameters of the two approaches are exactly the same, and the color blending panel is shown below.

(Screenshot: property panel of the color blending component.)

What are the advantages of the feature?

What are the disadvantages of the feature?

How will this be implemented or integrated into the O3DE environment?

The implementation of water contains three parts: mesh organization, forward mesh rendering and post processing.

As mentioned above, there are two ways to organize the water surface mesh. The first uses the Mesh component and does not need any extra implementation. The other uses the cascaded mesh, and the corresponding code is encapsulated in the HWWater Gem.

For forward mesh rendering, this water system uses the combination of the new material type and its corresponding shader.

The last part, post processing for the water surface and the underwater area, is also implemented with two components. Compared with ScreenSpaceWater, the HybridRTWater component relies on the ray tracing feature processor, and the water properties are stored in the SceneSrg and the RaytracingGlobalSrg simultaneously.

Are there any alternatives to this feature?

The alternatives available to users are the underlying algorithm (hybrid ray tracing or screen space blending) and the way the water surface mesh is organized.

How will users learn this feature?

We will provide examples of materials and accompanying documentation. The example materials correspond to oceans, lakes, rivers and so on.

Are there any open questions?

A main open problem when rendering violent water waves is how to calculate the contribution of rays that enter and leave the water twice or more, since these contributions have a great influence on the appearance of transparency. In this water system, transparency is implemented using the wave position and normal, which only consider rays that enter the water once.

mbalfour-amzn commented 1 year ago

This looks great!

SuperABC commented 1 year ago

This looks great!

  • Suggestion: Right now, this says it can provide water volumes defined by a Mesh or I think an infinite plane. It would be awesome if there was a third option to define it using the Shape components.
  • Q: Is there a way to define the bounds or depth of "underwater"? This is necessary when creating ponds above caves, rooftop swimming pools, etc., where there is a dry playable area underneath the water.
  • Q: Will the tuning parameters be exposed to the BehaviorContext so that they can be changed dynamically at runtime?
  • Q: Are there any plans or designs for aligning this with physics? I assume we could create something roughly aligned by authoring separate force volumes, but for large waves it would be noticeable if floating objects didn't align with the rendered waves.
galibzon commented 1 year ago

Approved.

DoItForGrandpa commented 11 months ago

Hey @SuperABC, just wondering if this is still in the works, or if it has been abandoned since March? I'm curious because a project I am working on needs water, and I need to know whether this is a viable option or not. Thanks in advance!