Closed: csyonghe closed this issue 1 year ago.
One thing to note: you should be able to create a single translation unit request and then add a bunch of `.slang` files to it (both the ones with the entry points, and also whatever files define the types you need), and then the lookup should work. It is not generally needed to have more than one translation unit if you are just compiling `.slang` files (multiple translation units in a compile request are really there to support applications that put their VS and PS in different `.hlsl` files).
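For concreteness, here is a rough sketch of what that single-translation-unit setup might look like with the Slang C API of the time (function names from `slang.h`; the specific target, stage constants, and file names are illustrative assumptions, not taken from this thread):

```cpp
#include <slang.h>

// Sketch: one translation unit holding several .slang files,
// so types defined in any of them are visible for lookup.
SlangSession* session = spCreateSession(NULL);
SlangCompileRequest* request = spCreateCompileRequest(session);
spAddCodeGenTarget(request, SLANG_HLSL);

// A single translation unit for all the .slang inputs.
int tu = spAddTranslationUnit(request, SLANG_SOURCE_LANGUAGE_SLANG, "main");
spAddTranslationUnitSourceFile(request, tu, "ForwardPass.slang"); // entry points
spAddTranslationUnitSourceFile(request, tu, "MyMaterial.slang");  // hypothetical file defining a material type

// Entry points are registered against that one translation unit.
spAddEntryPoint(request, tu, "ForwardPass_vs", SLANG_STAGE_VERTEX);
spAddEntryPoint(request, tu, "ForwardPass_ps", SLANG_STAGE_FRAGMENT);

if (SLANG_FAILED(spCompile(request)))
{
    // Inspect diagnostics, e.g. via spGetDiagnosticOutput(request).
}

spDestroyCompileRequest(request);
spDestroySession(session);
```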
On the bigger issue, the situation right now is that the Slang C API is quite limited when it comes to this stuff because it is stuck working with strings all the time.
If you look in `compiler.h`, you'll find that the behind-the-scenes types that are used to implement the public API have been revamped to try to provide a more powerful model of compilation, which we now need to design a public API around.
The big-picture idea is that you can create a `Linkage`, which tracks a set of loaded code modules. Any front-end compile request is done in the context of a `Linkage`, and anything that gets `import`ed gets loaded into and cached on the linkage.
A front-end compile request produces one `Module` per translation unit, and it is also possible to just load a `Module` by its name on a `Linkage` (which effectively does what an `import` would do). The `Module` owns both the AST and the IR code, and can be used to look up types and entry points by their names.
A `Program` is a container that gets created from a `Linkage` and contains a list of entry points and a list of "referenced modules." When we go to generate code, we always generate it from a `Program`, so its list of referenced modules defines what shader parameters are going to show up in the output code (we don't just emit everything in the linkage...).
Both `Program`s and `EntryPoint`s can be specialized by providing concrete arguments for their generic/existential type parameters. There has been some refactoring in `check.cpp` to separate it out so that the string-based APIs resolve the argument strings into expressions, and then invoke a lower-level routine that expects the arguments to be passed in as AST objects (e.g., ones looked up via reflection).
When a `Program` or `EntryPoint` gets specialized, we carefully update its list of referenced modules to include the modules that define any types used as arguments; this ensures that if you use a type from `Foo.slang` to specialize your program, the output code will be as if `Foo.slang` had been `import`ed.
So, all that new machinery is in place at the lower level, but we haven't built up any API around it just yet (I'm trying to decide whether this is the point where we should jump to doing a COM-lite API rather than keeping on going with the current C API).
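To make the shape of that lower-level model concrete, here is a purely hypothetical sketch of what a COM-lite surface over these concepts might look like. None of these interface or method names exist in Slang's public API; they are invented here and simply mirror the `Linkage`/`Module`/`Program` relationships described above:

```cpp
// Hypothetical interface sketch only -- not an actual Slang API.
struct TypeReflection;
struct EntryPointReflection;
struct IProgram;
typedef int SlangResult;

struct IModule
{
    // A Module owns the AST and IR, and supports lookup by name.
    virtual TypeReflection*       findType(const char* name) = 0;
    virtual EntryPointReflection* findEntryPoint(const char* name) = 0;
};

struct ILinkage
{
    // Loading a module by name does what an `import` would do,
    // and the result is cached on the linkage.
    virtual IModule*  loadModule(const char* moduleName) = 0;
    virtual IProgram* createProgram() = 0;
};

struct IProgram
{
    // The list of referenced modules (updated on specialization)
    // determines which shader parameters appear in generated code.
    virtual void      addEntryPoint(EntryPointReflection* entryPoint) = 0;
    virtual IProgram* specialize(TypeReflection* const* args, int argCount) = 0;
    virtual SlangResult generateCode(/* target, output blob, ... */) = 0;
};
```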
One thing to note is that the current strategy here for `Linkage`s and reflection is that we'd retain the AST for modules that get compiled for the lifetime of the `Linkage`, and the pointers a user gets through the reflection API would continue to be pointers to the actual AST objects. This works nicely because then, when it comes time to specialize something, we have the actual AST objects we need to drive specialization, but it also creates a risk of memory bloat because we are keeping the full AST alive for longer.
Eventually the goal would be that a `Linkage` maintains the `Module`s only in their serialized form (serialized AST + serialized IR), and it vends out lightweight proxy objects that wrap the serialized information for reflection purposes. Actions like loading a new module or generating back-end code would then deserialize what they need for the particular operation, but drop those objects after the operation completes. Actually implementing that memory-management policy will take a while, though, since we haven't even started in on AST serialization.
I see. Thanks for the clarification. Exposing the new API will make everything much cleaner to work with.
I am currently implementing a cross-platform graphics library that sits above D3D and Vulkan and does all the shader component stuff using Slang. One of the things that I am exposing is a `ShaderLinkage`, which is exactly the `Linkage` in Slang. It stores the shader sources and reflection info, and can be used for specialized code generation.
If I remember correctly, the reason we need multiple translation units is to make sure that the global parameters for different HLSL entry points don't get mixed together?
> If I remember correctly, the reason we need multiple translation units is to make sure that the global parameters for different HLSL entry points don't get mixed together?
It is more or less that, yeah. The problematic scenario is when a user has a shared shader parameter (often in a header):
```hlsl
// shared.h
cbuffer SharedCB { float4 u; }
float utilityFunc(float x) { return x; }
```
and then they include that in both their vertex and fragment shaders:
```hlsl
// vertex.hlsl
cbuffer VertexCB { ... }
#include "shared.h"
float4 main(...) : SV_Position { ... }
```
```hlsl
// fragment.hlsl
#include "shared.h"
cbuffer FragmentCB { ... }
float4 main(...) : SV_Target { ... }
```
The average user expects to get code where `SharedCB` gets a consistent register when emitting both the vertex and fragment kernels, because it is "obviously" the same declaration. Making that work requires that we compile both the vertex and fragment shaders together in the same compile request (and `Program`), so that we can "see" the global parameters of both at once. Because of scoping issues, we cannot compile both `vertex.hlsl` and `fragment.hlsl` as the same translation unit, or else the `utilityFunc` defined in `shared.h` would have a redefinition error (and there would be two different `main` functions...).
Realistically, we are close to the point where we can drop all that fiddly behavior, which is just there for HLSL compatibility in a fairly niche corner case (most HLSL input will have explicit registers, and will be compiled one entry point at a time; only Falcor ever really relied on the current behavior).
If we only had to worry about Slang input, I would strongly argue that a compile request should have only a single translation unit where you can dump in any number of `.slang` files and they will all share a single global scope (so that symbols in one file can see those in another implicitly), which produces a single module. In that case you can just put the shared constant buffer declaration in any one of your files and it would Just Work.
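Under that model, the earlier HLSL example could collapse into plain `.slang` files compiled into one translation unit. A hypothetical sketch (file names and bodies invented for illustration; the point is that no `#include` is needed because the files share one global scope):

```hlsl
// shared.slang -- any one file can hold the shared declaration
cbuffer SharedCB { float4 u; }
float utilityFunc(float x) { return x; }

// vertex.slang -- sees SharedCB and utilityFunc implicitly
cbuffer VertexCB { float4x4 mvp; }
float4 vertexMain(float3 p : POSITION) : SV_Position
{
    return mul(mvp, float4(p * utilityFunc(u.x), 1.0));
}
```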
> Exposing the new API will make everything much cleaner to work with. I am currently implementing a cross-platform graphics library that sits above D3D and Vulkan and does all the shader component stuff using Slang. One of the things that I am exposing is a `ShaderLinkage`, which is exactly the `Linkage` in Slang. It stores the shader sources and reflection info, and can be used for specialized code generation.
Yeah, this sounds like the missing abstraction that we are failing to surface through the Slang API. I've been under water with other stuff, so I haven't had a chance to revisit this part of the API, but I may need to do so soon as part of some work I'm doing in Falcor.
If you decide you want to work on improving the Slang API and creating a PR, please let me know so that we don't duplicate each other's efforts. It may turn out to be easier for you to just keep doing your own `ShaderLinkage` abstraction to insulate yourself from any churn, though.
I may not be able to work on this right now. But if I get time and it is still not done by then, I'll let you know before I start working on it.
Understood. Just having your feedback as a user/customer of the new API would be a big help, so I'll be sure to ping you when we get around to it.
We might have talked about this before, but I have forgotten what our story here is. Consider this scenario: entry-point shaders `ForwardPass_vs` and `ForwardPass_ps` are defined in `ForwardPass.slang`, which defines a `type_param TMaterial : IMaterial`. The engine then defines different material types in different `.slang` files. When specializing the entry-point shaders, the engine calls the `spSetGlobalGenericArgs` function to pass in the type name of the concrete material type. Unless something has changed recently, the issue I was having is that the entry-point shader may not `import` the module that defines the given material type. This means that type checking will fail because the specified type cannot be found. We might be able to just patch up `check` to ensure that it also looks into all other translation units in the compilation request, regardless of whether the module is `import`ed or not.