(I am under the impression that the purpose of these modifiers is to specify how the matrices should be loaded from the constant and vertex buffers. Maybe I am mistaken and their purpose is something else.)
I don't think there is a reason to support a column-major memory layout for HLSL, because the DirectX API doesn't provide a way to upload a matrix in column-major layout from the CPU to the GPU.
We may need to print a warning for this.
Uploading matrices in column-major layout is definitely supported. After all, the upload API just takes an array of bytes. For instance:
#include <comdef.h>      // _com_util::CheckError
#include <d3d11.h>
#include <wrl/client.h>  // Microsoft::WRL::ComPtr

using Microsoft::WRL::ComPtr;

// Four columns of a 4x4 matrix stored contiguously,
// i.e. column-major order.
class ColumnMajorMatrix
{
public:
    float col1[4];
    float col2[4];
    float col3[4];
    float col4[4];
};

ColumnMajorMatrix myMatrix{...};

D3D11_BUFFER_DESC desc{
    .ByteWidth{64},
    .Usage{D3D11_USAGE_IMMUTABLE},
    .BindFlags{D3D11_BIND_CONSTANT_BUFFER},
    .CPUAccessFlags{0},
    .MiscFlags{0},
    .StructureByteStride{},
};

D3D11_SUBRESOURCE_DATA initialData{
    .pSysMem{&myMatrix},
    .SysMemPitch{},
    .SysMemSlicePitch{},
};

// device: an existing ID3D11Device*.
ComPtr<ID3D11Buffer> buffer;
auto result = device->CreateBuffer(&desc, &initialData, &buffer);
_com_util::CheckError(result);
Now, reading this matrix from the buffer in a shader works as expected if the shader is compiled with a default matrix layout of column-major (set via sessionDesc.defaultMatrixLayoutMode = SLANG_MATRIX_LAYOUT_COLUMN_MAJOR; or the equivalent slangc command-line option).
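A minimal sketch of setting that default through Slang's C++ API (target setup abbreviated, error checking omitted; on the command line the corresponding flag is -matrix-layout-column-major):

#include <slang.h>
#include <slang-com-ptr.h>

Slang::ComPtr<slang::IGlobalSession> globalSession;
slang::createGlobalSession(globalSession.writeRef());

// Compile to HLSL, defaulting all matrices to column-major layout.
slang::TargetDesc targetDesc{};
targetDesc.format = SLANG_HLSL;

slang::SessionDesc sessionDesc{};
sessionDesc.targets = &targetDesc;
sessionDesc.targetCount = 1;
sessionDesc.defaultMatrixLayoutMode = SLANG_MATRIX_LAYOUT_COLUMN_MAJOR;

Slang::ComPtr<slang::ISession> session;
globalSession->createSession(sessionDesc, session.writeRef());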
I see. I think we should fix it with the HLSL keywords row_major and column_major described in the following link: https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-variable-syntax
Consider the following shaders:
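A minimal pair along these lines (the file names and cbuffer contents are assumptions, not the original listings) reproduces the problem:

// row_major.slang
cbuffer Constants
{
    row_major float4x4 transform;
}

[shader("vertex")]
float4 vertexMain(float4 position : POSITION) : SV_Position
{
    return mul(transform, position);
}

// column_major.slang: identical, except the matrix is declared as
//     column_major float4x4 transform;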
and their diff:
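Assuming the pair above, the diff reduces to the single modifier line:

-    row_major float4x4 transform;
+    column_major float4x4 transform;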
Now let's compile them to HLSL:
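With slangc, the compilation could look like this (the entry point is picked up from the [shader("vertex")] attribute):

slangc row_major.slang -target hlsl -o row_major.hlsl
slangc column_major.slang -target hlsl -o column_major.hlsl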
and compare the results:
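For example, with a plain diff of the generated files:

diff row_major.hlsl column_major.hlsl   # prints nothing; the outputs are identical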
The same HLSL is generated for both; the matrix layout modifiers are ignored. As a result, whichever shader disagrees with the session's default layout produces wrong output.
Using Slang v2024.9.2.