kh-huawei opened 1 year ago
How would this cope with exposure of textures and handling them via the definition of a surface? Was this taken into account?
Like the RayTracingMaterialSrg does now: the Material-Info struct holds the indices of the Texture Views relevant for the material instance, and the textures are then accessed via the bindless Srg.
There are actually two indirections going on currently: the MaterialInfo holds one start index into m_materialTextureIndices, and that buffer holds the index of the bindless texture view (either into the bindless m_materialTextures array in the same SRG, or directly into the Bindless_Srg). This seems to be an optimization so that the MaterialInfo struct only needs one texture index, and/or so that the texture views of one material instance are sequential in memory, but I'm not sure how useful that actually is.
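As a minimal sketch of that double indirection (the member names m_materialTextureIndices and m_materialTextures come from the existing SRG; everything else here, including m_materialInfo, m_textureStartIndex, materialIndex and textureSlot, is an illustrative assumption):

```hlsl
// Sketch of the current two-step lookup, not the verbatim engine code.
MaterialInfo info = RayTracingMaterialSrg::m_materialInfo[materialIndex];

// 1st indirection: MaterialInfo stores one start index into m_materialTextureIndices.
uint indexOfIndex = info.m_textureStartIndex + textureSlot; // e.g. slot 0 = base color

// 2nd indirection: that buffer stores the index of the actual texture view.
uint textureIndex = RayTracingMaterialSrg::m_materialTextureIndices[indexOfIndex];
Texture2D baseColorMap = RayTracingMaterialSrg::m_materialTextures[textureIndex];
```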
Sounds like a good set of changes overall, and will make ray tracing in O3DE competitive with other engines.
1) Can you elaborate on "Refactor the existing material shader code and move the LightAssignment loop and the shadow sampling out of the EvaluateLighting function"? Specifically, can you give more details on what that refactoring would look like, perhaps with some sample code?
2) Could you also explain a bit more what the "Breaking changes with existing (old) materials" would be, and any potential solutions (e.g. running old materials through an update script)?
3) What would be the replacement light-culling strategy, given the current one is view-dependent?
1.) Refactor the existing material shader code and move the LightAssignment loop and the shadow sampling out of the EvaluateLighting function:
The current structure of EvaluateLighting_BasePBR and LightingData_BasePBR looks like this:

```
EvaluateLighting_BasePBR()
    lightingData = InitializeLightingData_StandardPBR()
    CalculateLighting_BasePBR(lightingData)
        ApplyDecals(lightingData.tileIterator, surface);
        ApplyDirectLighting(surface, lightingData, screenPosition);
            ApplyPointLights(surface, lightingData);
                For each light in view-space tile: (LightAssignment)
                    Check Lighting Channel
                    ApplyPointLight(lightIndex, surface, lightingData)
                        calculate falloff
                        sample Point light shadow cubemap
                        calculate diffuse / translucent / specular lighting
        ApplyIblForward(surface, lightingData);
        lightingData.FinalizeLighting(surface);
```
This doesn't change the structure much, but makes the responsibilities a bit clearer: the pipeline owns EvaluateLighting_ForwardPlus, and the material type still provides LightingData_BasePBR, but also exposes an InitializeLightingData function and a material-defined ApplyPointLight_BasePBR:

```
lightingData = InitializeLightingData()
EvaluateLighting_ForwardPlus(lightingData)
    ApplyDecals(lightingData.tileIterator, surface);
    ApplyDirectLighting(surface, lightingData, screenPosition);
        ApplyPointLights(surface, lightingData);
            For each light in view-space tile: (LightAssignment)
                Check Lighting Channel
                ApplyPointLight(lightIndex, surface, lightingData)
                    calculate falloff
                    sample Point light shadow cubemap
                    Call material-defined ApplyPointLight() <- new!
    ApplyIblForward(surface, lightingData);
    lightingData.FinalizeLighting(surface);
```
This would allow me to then implement a CalculateLighting_RayTracing with a different light-assignment and shadow method, while still supporting the BasePBR material essentially unchanged.
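A rough sketch of how such a CalculateLighting_RayTracing could look, reusing the material-defined pieces (the light-assignment and shadow helpers here are placeholders for whatever the ray-tracing pipeline provides, not existing engine code):

```hlsl
// Hypothetical pipeline-owned lighting loop for ray tracing: light assignment and
// shadowing are swapped out, but the material-defined callbacks stay the same.
void CalculateLighting_RayTracing(inout Surface surface, inout LightingData lightingData)
{
    // World-space light assignment instead of view-space tiles (placeholder helper).
    LightList lights = GetLightsFromWorldSpaceGrid(surface.position);
    for (uint i = 0; i < lights.m_count; ++i)
    {
        uint lightIndex = lights.m_indices[i];
        // Shadowing via a visibility ray instead of shadow maps (placeholder helper).
        if (IsLightVisible(surface.position, lightIndex))
        {
            // Material-defined lighting response, identical to the Forward+ path.
            ApplyPointLight_BasePBR(lightIndex, surface, lightingData);
        }
    }
    lightingData.FinalizeLighting(surface);
}
```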
2.) Breaking changes: this would mostly affect the EvaluateSurface function, and look roughly like this:

```hlsl
Surface EvaluateSurface_BasePBR(...)
{
    Surface surface;
    ...
    float2 baseColorUv = uvs[MaterialSrg::m_baseColorMapUvIndex];
    real3 sampledColor = GetBaseColorInput(MaterialSrg::m_baseColorMap, MaterialSrg::m_sampler, baseColorUv, real3(MaterialSrg::m_baseColor.rgb), o_baseColor_useTexture, uvDxDy, customDerivatives);
    real3 baseColor = BlendBaseColor(sampledColor, real3(MaterialSrg::m_baseColor.rgb), real(MaterialSrg::m_baseColorFactor), o_baseColorTextureBlendMode, o_baseColor_useTexture);
    baseColor = BlendVertexColor(baseColor, real3(vertexColor.rgb));
    ...
}
```

would become:

```hlsl
Surface EvaluateSurface_BasePBR(MaterialParameters params, ...)
{
    Surface surface;
    ...
    float2 baseColorUv = uvs[params.m_baseColorMapUvIndex];
    real3 sampledColor = GetBaseColorInput(params.m_baseColorMap, SceneMaterialSrg::m_defaultSampler, baseColorUv, real3(params.m_baseColor.rgb), o_baseColor_useTexture, uvDxDy, customDerivatives);
    real3 baseColor = BlendBaseColor(sampledColor, real3(params.m_baseColor.rgb), real(params.m_baseColorFactor), o_baseColorTextureBlendMode, o_baseColor_useTexture);
    baseColor = BlendVertexColor(baseColor, real3(vertexColor.rgb));
    ...
}
```
This does mean every (user-created) material needs to be changed, since the MaterialSrg won't exist anymore.
We could keep the MaterialSrgs around to stay somewhat backwards-compatible, but that would mean we need to bind both the SceneMaterialSrg and the MaterialSrg; any material that does this won't work properly with raytracing, and we'd have to recognize that somehow when creating the hit shader table.
3.) In our raytracing pipeline we are currently using a naïve fixed-size world-space grid, with a TODO to make it less naïve, but so far neither the performance nor the memory impact has been enough of an issue to raise the priority of that TODO. I'd go with something similar here, unless someone has a better suggestion.
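For reference, the lookup into such a naïve fixed-size world-space grid could look roughly like this (the cell size, grid dimensions and all names are illustrative, not the actual implementation):

```hlsl
// Illustrative sketch of a fixed-size world-space light-grid lookup.
static const float GridCellSize = 4.0;                  // world units per cell (made-up value)
static const uint3 GridDimensions = uint3(64, 16, 64);  // fixed grid extent (made-up value)

uint GetLightGridCellIndex(float3 worldPosition, float3 gridOrigin)
{
    // Clamp to the grid so hits outside the grid still fetch some lights.
    uint3 cell = (uint3)clamp((worldPosition - gridOrigin) / GridCellSize,
                              float3(0, 0, 0), (float3)(GridDimensions - 1));
    // Flatten the 3D cell coordinate into a linear buffer index.
    return cell.x + cell.y * GridDimensions.x + cell.z * GridDimensions.x * GridDimensions.y;
}
```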
I am supportive of these changes. To be honest, I like the bindless approach, although given that this change will modify the render pipelines needed for mobile hardware, we will need the non-bindless approach to stick around. Could we have a setup whereby the RT shaders can assume the bindless approach (RT-compliant hardware is also bindless-compliant), but the non-RT shaders assume non-bindless at the start, and we add bindless support later on? For example:
```hlsl
ShaderResourceGroup SceneMaterialSrg
{
    // non-bindless params: explicitly bound, one entry per material
    StructuredBuffer<MaterialParameters> m_MaterialParameters[];

#if USE_BINDLESS_SRG
    // bindless params (renamed here so it doesn't collide with the array above)
    StructuredBuffer<MaterialParameters> m_MaterialParametersBindless;
#endif
}
```
So the SceneMaterialSrg would contain two different layouts, separated via the macro USE_BINDLESS_SRG (as shown above). The runtime C++ code also uses the same macro and handles binding for both layouts, but the shaders that use the SceneMaterialSrg can make an assumption and pick only one implementation, to help lower the maintenance burden (see the sketch below). The disadvantage is that on PC you are paying the cost (perf + mem) of both layouts.
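A sketch of how shaders could commit to one layout through a single accessor function (the helper, the m_materialParameterIndices member and the Bindless_Srg accessor name are all assumptions):

```hlsl
// Hypothetical accessor: material shaders call this instead of touching the SRG
// members directly, so only this function needs to know which layout is active.
StructuredBuffer<MaterialParameters> GetMaterialParameters(uint materialIndex)
{
#if USE_BINDLESS_SRG
    // Bindless path: resolve the buffer through its bindless read index
    // (accessor name on the bindless SRG is assumed).
    uint readIndex = SceneMaterialSrg::m_materialParameterIndices[materialIndex];
    return Bindless_Srg::GetStructuredBuffer(readIndex);
#else
    // Non-bindless path: index into the explicitly bound array.
    return SceneMaterialSrg::m_MaterialParameters[materialIndex];
#endif
}
```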
Eventually I would like to see the engine transition over to bindless support globally, but that is not possible in one go; we would have to go through a transition where we support two implementations and eventually drop one over time. Once we drop the mobile min spec to, let's say, iPhone 11 and equivalent, we could in theory switch to bindless globally, but that may not happen for another few years at the very least.
Summary:
During raytracing the hit shaders need access to all materials in the scene at once, which clashes with the current setup of the Forward+ pipeline, where a shader only needs access to a single Material Instance at any given time. The current solution for raytracing is somewhat unfinished, in that it assumes there is only one material type, collects all Material Instances in a single RayTracingMaterialSrg, and uses one single hit shader that re-implements almost, but not quite, the BasePBR material type.
What is the relevance of this feature?
The new Material Pipeline and the Material Canvas allow users to easily create their own material types, but that has no influence on the material used during raytracing. This feature would remedy that and enable user-created material types during raytracing.
Generally, any meaningful raytracing pipeline needs access to the geometry and materials of the entire scene in one shader / renderpass. Also, the current material shaders place the LightAssignment and the shadow sampling inside the EvaluateLighting function, which implicitly makes them the responsibility of the material type. In other words, using the BasePBR material type (or any material type derived from it) currently requires a view-space tile-based light assignment and e.g. cascaded shadow maps for some lights, which doesn't work all that well for a raytracing pipeline that can intersect offscreen objects.
Incidentally, all of this would also make implementing a deferred pipeline easier.
Feature design description:
Each Material Instance creates and fills a MaterialParameters ByteAddressBuffer that contains the material parameters, texture-use flags and bindless texture read indices needed during rendering of the material. The MeshFeatureProcessor creates and maintains a single SceneMaterialSrg that holds one buffer with the bindless read indices of the MaterialParameters buffers, with one entry for each separate Material Instance (i.e. each sub-mesh), and the texture samplers required by all materials. Alternatively, if the support for bindless SRGs is not good enough, this SRG can also hold an array of all required material textures (deduplicated) and an array of the MaterialParameters buffers, with the read index then referring to the entry in these buffers instead of the Bindless SRG. Also, the first value in each MaterialParameters ByteAddressBuffer should be the Material Type Id, so the shader can perform a sanity check and avoid parsing parameters from the wrong material instance. The Material Type Id should be calculated from the material type and the shader options of the material, since the shader options can't be in the SceneMaterialSrg that is shared between shaders.
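A minimal sketch of that sanity check (the buffer layout and the way the expected id reaches the shader are assumptions):

```hlsl
// Hypothetical layout: dword 0 of every MaterialParameters buffer is the Material
// Type Id, derived from the material type plus its shader options.
bool IsMatchingMaterialType(ByteAddressBuffer materialParameters, uint expectedMaterialTypeId)
{
    // Sanity check: refuse to parse parameters written by a different material type.
    return materialParameters.Load(0) == expectedMaterialTypeId;
}
```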
Technical design description and implementation steps:
Rough draft of the required steps:
- Extend the RPI::Material class to create and fill one MaterialParameters ByteAddressBuffer for each MaterialInstance, holding the parameter values and the bindless read indices of the material textures.
- Create and maintain the SceneMaterialSrg (e.g. in the MeshFeatureProcessor), or add the necessary entries to the SceneSrg directly.
- Add the read index of the MaterialParameters buffer to the MeshInfo struct of the RayTracingSceneSrg.
- Change the material shader code to read from the MaterialParameters instead of the MaterialSrg directly.
- Merge the duplicated structs (e.g. RayTracingSceneSrg::DirectionalLight and SceneSrg::DirectionalLight) of the RayTracingSceneSrg and the SceneSrg into a single azsli file.
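Taken together, a hit shader could then resolve its material parameters roughly like this (the MeshInfo field name and the bindless accessor are assumptions):

```hlsl
// Hypothetical lookup chain in a hit shader:
// InstanceIndex() -> MeshInfo -> MaterialParameters buffer via its bindless read index.
ByteAddressBuffer GetMaterialParametersForHit()
{
    MeshInfo meshInfo = RayTracingSceneSrg::m_meshInfo[InstanceIndex()];
    return Bindless_Srg::GetByteAddressBuffer(meshInfo.m_materialParameterIndex); // accessor name assumed
}
```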
What are the advantages of the feature?
Actually support user-provided materials during raytracing.
What are the disadvantages of the feature?
Breaking changes with existing (old) materials
Are there any alternatives to this feature?
There are a few options for how and when the material parameters could be stored and updated. The current solution is itself an alternative, in that it pretends there is only one PBR material type in the scene, which does cover many use cases.
How will users learn this feature?
Are there any open questions?
- The SceneMaterialSrg limits the materials to the sampler configurations we explicitly provide there, but I don't have a better idea there.
- Should we move the duplicated buffers from the ViewSrg / RayTracingSceneSrg to the SceneSrg? As far as I can tell, the contents of these buffers are the same.