I was testing a few things locally, trying to enable/disable the use of "FFX_HLSL_6_2", and noticed an inconsistency in the code that can lead to shader build errors. The codebase (shaders) generally uses the `#if FFX_HLSL_6_2` value-check approach, implying that FFX_HLSL_6_2 must always be defined as either 0 or 1. However, in "ffx_core_hlsl.h", the code instead checks whether the symbol is defined at all.
This was observed when using FSR2 via the HLSL flow (FFX_HLSL).
```hlsl
#if defined(FFX_HLSL_6_2) && !defined(FFX_NO_16_BIT_CAST)
#define FFX_TO_FLOAT16(x) asfloat16(x)
#define FFX_TO_FLOAT16X2(x) asfloat16(x)
#define FFX_TO_FLOAT16X3(x) asfloat16(x)
#define FFX_TO_FLOAT16X4(x) asfloat16(x)
#else
#define FFX_TO_FLOAT16(a) FFX_MIN16_F(f16tof32(FfxUInt32(a)))
#define FFX_TO_FLOAT16X2(a) FFX_MIN16_F2(FFX_TO_FLOAT16((a).x), FFX_TO_FLOAT16((a).y))
#define FFX_TO_FLOAT16X3(a) FFX_MIN16_F3(FFX_TO_FLOAT16((a).x), FFX_TO_FLOAT16((a).y), FFX_TO_FLOAT16((a).z))
#define FFX_TO_FLOAT16X4(a) FFX_MIN16_F4(FFX_TO_FLOAT16((a).x), FFX_TO_FLOAT16((a).y), FFX_TO_FLOAT16((a).z), FFX_TO_FLOAT16((a).w))
#endif // #if defined(FFX_HLSL_6_2) && !defined(FFX_NO_16_BIT_CAST)
```