o3de / sig-graphics-audio

Documents and communications for the O3DE Graphics-Audio Special Interest Group

Proposed RFC Feature: Material Canvas Shader generation #57

Open mekhontsev opened 2 years ago

mekhontsev commented 2 years ago

This document covers file and data organization, including the connections between generated documents and extensions to the existing approach. The solution is based on the material canvas introduced in the Material Canvas feature RFC and is intended as a starting point for further discussion (a pre-RFC in terms of the Feature template).

Helpful information:

Summary:

The shader generation logic creates all necessary infrastructure files in two cases:

All of the files are created from template files using substitutors and the node configuration's shader extensions. Subsequent asset processing is unchanged and works as-is.

Feature design description:

Material type based node

O3DE has several base material types, such as StandardPBR and BasePBR, and these material types can be treated as a kind of Output node in MC; in other words, a material-type-based node with input slots only. That kind of node uses the already implemented AZSL shaders and shader descriptor files, and represents all of the input properties as slots.

Technically, it can be implemented as a generated .materialtype file that copies "propertyLayout", "shaders", "functors", and "uvNameMap" from the base. This file is then used for further extensions, including the generated shader/AZSL file pairs and additional properties retrieved from the graph's list of unset slots.
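As an illustration, a rough and simplified sketch of such a generated file is shown below, assuming BasePBR as the base. The copied sections are elided with "...", the generated shader file name is made up, and the example property derived from an unset slot does not reproduce the exact .materialtype property schema:

{
    "propertyLayout": {
        ...,
        "properties": {
            ...,
            "general": [
                {
                    "name": "constantColor",
                    "type": "Color",
                    "defaultValue": [ 1.0, 1.0, 1.0 ],
                    "connection": { "type": "ShaderInput", "name": "m_constantColor" }
                }
            ]
        }
    },
    "shaders": [
        ...,
        { "file": "MyGraph_ForwardPass.shader" }
    ],
    "functors": [ ... ],
    "uvNameMap": { ... }
}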

In addition, the base material could be chosen on canvas creation (as we do in the material editor), with the ability to choose which properties to use. After that, the corresponding node is added to the canvas as an Output node.

materialcanvasnode.azasset extension

In the current solution, a node is described in a *.azasset file, and the structure looks approximately like this:

{
    ...
    "ClassData": {
        "category": "Constants",
        "title": "Constant Color",
        "propertySlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
        "outputSlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
        "inputSlots": [
            {
                "name": "...",
                "supportedDataTypes": [],
                ...
            }
        ],
    }
}

This file is extended as follows:

  1. Introducing the ClassData.inputSlots.templateSubstitutor field

Contains the substitutor's name, which is used during code generation as a substitution point in the node's shader template file (discussed later). Put simply: it is the slot's name. A combined sketch follows this list.

  2. Introducing the ClassData.azslTemplate field

The name of the node's shader template file containing AZSL code with substitutors (see the combined sketch after this list).

  3. Introducing the ClassData.baseMaterial field

Contains an alternative input slot description based on an existing *.materialtype (e.g., a PBR base) and enumerates the properties that propagate as slots. 'Propagating properties' in this context means properties that are viewable and accessible in the graph.

"baseMaterial": {
    "type": "BasePBR.materialtype",
    "propagatingSlots": [
        {
            "ref": "baseCololor.color",
            "displayName": "Diffuse color",
            "templateSubstitutor": "DIFFUSE_COLOR"
        }
    ]
}
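Putting extensions 1 and 2 together with the existing node structure, an extended node description could look roughly like the sketch below. The template file name, slot names, and substitutor names are purely illustrative; here the first input slot is tied to the %IN_TYPE1% placeholder used in the color combiner template later in this document:

{
    ...
    "ClassData": {
        "category": "Math",
        "title": "Combine Color",
        "azslTemplate": "CombineColor_Template.azsli",
        "inputSlots": [
            {
                "name": "inR",
                "supportedDataTypes": [ "float" ],
                "templateSubstitutor": "IN_TYPE1",
                ...
            },
            ...
        ],
        "outputSlots": [
            ...
        ]
    }
}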
Substitutors

The current approach assumes hardcoded substitutor names (such as %INCLUDES%, %SHADER_PARAMS%, %FUNCTION_DEFINITIONS%, and %SOURCE_CODE% in the templates below), unique names described dynamically in configuration (such as %NODE_FUNC% and the %IN_TYPE*% names), and probably more in the future.

AZSL template

The node's AZSL shader code contains substitutors. Two examples follow: the first is more verbose and describes a whole shader template; the second is a simple node template.

The first example is connected with the material-type-based node, also known as the output node. It is a whole shader skeleton:

#include <Atom/Features/PBR/DefaultObjectSrg.azsli>
#include <viewsrg.srgi>

%INCLUDES%

ShaderResourceGroup ShaderParams: SRG_PerMaterial
{
    %SHADER_PARAMS%
}

struct VertexInput
{
    float3 m_position   : POSITION;
    // ...
};

struct VertexShaderOutput
{
    // ...   
};

%FUNCTION_DEFINITIONS%

VertexShaderOutput MainVS(VertexInput IN)
{
    VertexShaderOutput OUT;
    // Some code ...
    return OUT;
}

ForwardPassOutput MainPS(VertexShaderOutput IN)
{
    %SOURCE_CODE%

    ForwardPassOutput OUT;

    OUT.m_diffuseColor      = %DIFFUSE_COLOR%;
    OUT.m_albedo            = float4(0.0, 0.0, 0.0, 1.0);

    return OUT;
}

The second is just a building block that contributes its code to the skeleton. Here is a color combiner node template:

#include <viewsrg.srgi>

float3 %foo%(float r, float g, float b)
{
    return float3(r, g, b);
}

void %NODE_FUNC%(%IN_TYPE1% r, %IN_TYPE2% g, %IN_TYPE3% b, %IN_TYPE4% a, out float4 result)
{
    result = float4(%foo%(r, g, b).xyz, a);
}
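For illustration only, assuming the generator derives a unique prefix per node instance and all four inputs resolve to float, the substituted result of this template could look as follows (the generated names are hypothetical). The node's #include line would presumably be collected into the skeleton's %INCLUDES% section rather than repeated per node:

float3 Node7_MakeColor(float r, float g, float b)
{
    return float3(r, g, b);
}

void Node7_CombineColor(float r, float g, float b, float a, out float4 result)
{
    result = float4(Node7_MakeColor(r, g, b).xyz, a);
}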
AZSL template variations

In this example, the result type depends on the type of the first argument:

void %NODE_FUNC%(%IN_TYPE1% lhs, %IN_TYPE1% rhs, out %IN_TYPE1% result)
{
    result = lhs + rhs;
}

This example demonstrates an array argument type:

void %NODE_FUNC%(%IN_TYPE1% arr[%IN_SIZE1%], out %IN_TYPE1% result)
{
    result = 0;
    for (int i = 0; i < %IN_SIZE1%; ++i)
    {
        result = result + arr[i];
    }
}

As the last examples show, a shader node function can support multiple types and has to be generated several times, once for each unique connected slot type. This pushes us toward a solution that requires parsing the template files, which we want to avoid. Alternatively, we can introduce special layout markers that are easy to find:

@GenericFunctionBegin

void %NODE_FUNC%(%IN_TYPE1% lhs, %IN_TYPE1% rhs, out %IN_TYPE1% result)
{
    result = lhs + rhs;
}

@GenericFunctionEnd
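For example, if instances of this node are connected to both float3 and float4 slots elsewhere in the graph, the marked region can simply be emitted once per unique type, without parsing the AZSL between the markers (generated names are hypothetical):

void Node3_Add_float3(float3 lhs, float3 rhs, out float3 result)
{
    result = lhs + rhs;
}

void Node3_Add_float4(float4 lhs, float4 rhs, out float4 result)
{
    result = lhs + rhs;
}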
Generation

The graph is topologically sorted with the output (material-type-based) node as the root. During the traversal, the code generates unique property and function names and performs substitutions based on the slots' connections. All unset slots become properties in the property layout.
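As a small worked example of what this traversal could emit into the skeleton's substitution points, assume a single color combiner node feeding the output node, with all four of its inputs left unset (every generated name below is hypothetical):

// %SHADER_PARAMS%: unset slots exposed as per-material properties
float m_node7_r;
float m_node7_g;
float m_node7_b;
float m_node7_a;

// %SOURCE_CODE%: calls emitted in topological order using generated names
float4 node7_result;
Node7_CombineColor(ShaderParams::m_node7_r, ShaderParams::m_node7_g,
                   ShaderParams::m_node7_b, ShaderParams::m_node7_a, node7_result);

// %DIFFUSE_COLOR% resolves to the result connected to the output node's slot
OUT.m_diffuseColor = node7_result;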

What are the advantages of the feature?

What are the disadvantages of the feature?

Are there any open questions?

gadams3 commented 2 years ago

This is great! It is very similar to what we have been thinking, what the original RFC is proposing, and what I'm working on now, in addition to the autosave and auto-generation features. I added some appendices to that document with notes about building templates and injecting all of the necessary data for AZSL, material type, material, and shader generation. Similar to what is described here, commented-out begin and end markers are placed in shader template source files. These will be the locations where the "code generator" inserts functions, classes, other definitions, includes, and other data, exactly as you have described here. There are already general-purpose settings inside the dynamic node and slot configuration structures where all of the data for code snippets, include files, and other dependencies can be specified. Please have a look and let's continue discussing.

gadams3 commented 2 years ago

To answer some of the open questions...

The .shader.template files will be part of the template, providing the minimum amount of configuration for each shader that will be produced by the material graph. These template files will be in the exact same format as, and serializable as, the shader source data structure. The generation process will fill in or substitute any missing data before the files are saved as complete shader files.
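A minimal sketch, assuming the usual .shader fields and reusing the entry point names from the skeleton earlier in this thread; the source file name is a placeholder that the generation step would fill in:

{
    "Source": "MyGraph_ForwardPass.azsl",
    "DrawList": "forward",
    "ProgramSettings": {
        "EntryPoints": [
            { "name": "MainVS", "type": "Vertex" },
            { "name": "MainPS", "type": "Fragment" }
        ]
    }
}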

The same will be done for material type template files. For the most part, the material graph, dynamic node configurations, and shader template files should provide enough information to assemble material type files without including one in the template. However, there are some extra settings per shader referenced in the material type file. Many of the existing material types don't set that data. If that data is not necessary then the material type file can be empty or excluded.

The material canvas RFC mentions reflecting as many classes as we can to the edit and behavior contexts. Setting up as many classes as possible with the edit context will give us virtually free property-editor-based tools for those types. I've already done this for dynamic node and slot configurations, shader source data, shader variant source data, and related types. This is the PR for that, with screenshots of the working reflected property editors for editing those types: https://github.com/o3de/o3de/pull/10204. This will be useful for creating an editor for building new templates and new node types. Even without material canvas, it also makes it possible to edit existing shader files.

Adding the behavior context while adding the edit or serialize context takes very little time and opens options for script-based automation, testing, tooling, code generation, transformation, traversal, validation, and extension without hardcoding anything. Many things will still be hardcoded, but it gives us options. Many of the buses that are already bound to the behavior context are the same ones that the material editor and landscape canvas use.

Exposed properties, assuming you're talking about the properties exposed through material types and in the material editor, should not need to be predefined. Aside from template creation, one of the goals should be that users do not have to manually edit material type or other files. We will be able to determine material type variables from either the nodes in the graph or something closer to the script canvas variable manager. Those will be used to populate the generated material type file. Since you mentioned existing material types like standard PBR and extended PBR, note that even though a template may be based on similar or the same shader code, the authored graph may only expose a couple of custom options and properties.

We were originally considering implementing an export button as well as having the asset processor pick up and auto-process material graphs. The export button will not be necessary for a couple of reasons. I have already implemented an autosave feature that can be enabled in the material editor, material canvas, shader management console, and any other tools built on that code base. The asset processor will pick up whatever files are saved, as it normally does. If autosave is enabled and preview times are responsive enough, we will get automatic previews in the main viewport. Along the same lines, we can also automatically generate temporary data to be shown in the main viewport and in per-node previews. Again, a lot of this is going to depend on turnaround times through the asset processor, because shader compilation times are currently not the fastest.

For the last question about connecting the node to shader snippets and other data, the node configurations and slot configurations already include data for adding arbitrary settings. Have a look at the updated appendices on the material canvas RFC.

moudgils commented 1 year ago

Since this RFC is accepted, please open a PR and move this RFC to this folder - https://github.com/o3de/sig-graphics-audio/tree/main/rfcs - where we will track all new RFCs for bookkeeping purposes. Thanks.