KhronosGroup / glslang

Khronos-reference front end for GLSL/ESSL, partial front end for HLSL, and a SPIR-V generator.

Generalizing option selection for gtests #279

Closed · johnkslang closed 8 years ago

johnkslang commented 8 years ago
// Enum for compilation semantics.
enum class Semantics {
    OpenGL,
    Vulkan,
};

// Enum for compilation target.
enum class Target {
    AST,
    Spirv,
};

@antiagainst I looked at adding HLSL tests here, and I think we need a different design. Generally, all these things can be orthogonal, much like this subset of mask flags:

enum TOptions {
    EOptionIntermediate       = 0x0001,
    EOptionRelaxedErrors      = 0x0008,
    EOptionGiveWarnings       = 0x0010,
    EOptionLinkProgram        = 0x0020,
    EOptionMultiThreaded      = 0x0040,
    EOptionDumpConfig         = 0x0080,
    EOptionDumpReflection     = 0x0100,
    EOptionSuppressWarnings   = 0x0200,
    EOptionDumpVersions       = 0x0400,
    EOptionSpv                = 0x0800,
    EOptionHumanReadableSpv   = 0x1000,
    EOptionVulkanRules        = 0x2000,
    EOptionReadHlsl          = 0x10000,
};

For example, with Semantics and Target, I might want both the AST and SPIR-V to be visible, or HLSL input semantics combined with either OpenGL or Vulkan output semantics. So, while those last two output semantics are mutually exclusive, most of the rest are generally orthogonal.

Should I start adding more and more enum class { } types that cause the argument lists to expand everywhere? Or should I switch to a mask design like TOptions, so that everything is wrapped up in a single parameter to pass around? In the end, it would take almost as many enum classes as there are bits in TOptions, but be more cumbersome.
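For concreteness, a minimal sketch of what the single-mask alternative could look like at the test level, assuming a hypothetical helper named compileFileTest and reusing the EOption* bits listed above (this is an illustration, not existing glslang test code):

// Hypothetical: one options mask carries every orthogonal choice, so the
// test helper keeps a single signature as new modes are added.
void compileFileTest(const char* testName, int options)
{
    const bool readHlsl    = (options & EOptionReadHlsl)     != 0;
    const bool vulkanRules = (options & EOptionVulkanRules)  != 0;
    const bool dumpAst     = (options & EOptionIntermediate) != 0;  // intermediate == AST dump
    const bool emitSpirv   = (options & EOptionSpv)          != 0;
    // ... drive the compile according to these flags and compare against the golden file ...
}

// For example, HLSL input with Vulkan rules, emitting both AST and SPIR-V:
// compileFileTest("hlsl.precedence2.frag",
//                 EOptionReadHlsl | EOptionVulkanRules |
//                 EOptionIntermediate | EOptionSpv);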

Interested in your take on generalizing this design, and certainly open to anything you can do here, as it doesn't quite feel right to keep expanding the number of arguments to pass around everywhere.

Thanks.

antiagainst commented 8 years ago

Although we've implemented the bits to run file-based test cases in the googletest framework, I'd prefer to see actual unit tests for HLSL since that's the conventional way of using googletest. I'll push in that direction and try to write some unit tests for the HLSL part.

But, yeah, sometimes we do need integration-style tests. So back to the question: I'd actually think more enum classes may be better. It's more orthogonal (each enum class represents a different dimension) and clearer from the signatures of functions/methods. And it seems to me that we actually only need three such enum classes (Source, Semantics, Target) and that won't change for quite some time? So I've drafted PR #284 with that in mind. Feel free to share your thoughts. :)
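As a rough sketch of what the enum-class style looks like at a call site (the function and enumerator names here are illustrative; they are not necessarily what PR #284 actually uses):

#include <string>

// Three orthogonal dimensions, each its own strongly typed enum.
enum class Source    { GLSL, HLSL };
enum class Semantics { OpenGL, Vulkan };
enum class Target    { AST, Spirv };

// Each dimension is an explicit, type-checked parameter...
void loadFileCompileAndCheck(const std::string& testDir,
                             const std::string& testName,
                             Source source, Semantics semantics, Target target);

// ...which reads clearly at the call site, but every new dimension
// widens the signature of every test helper:
// loadFileCompileAndCheck(testDir, "hlsl.precedence2.frag",
//                         Source::HLSL, Semantics::Vulkan, Target::Spirv);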

johnkslang commented 8 years ago

Although we've implemented the bits to run file-based test cases in the googletest framework, I'd prefer to see actual unit tests for HLSL since that's the conventional way of using googletest.

Gtest runs the file-based tests much faster, so it's nice to take advantage of that speed when a file-based test is the best thing to do (as might be the case when verifying the correct tree for mutually recursive functions).

I'm in favor of unit tests when that's the right thing to do as well, and would like to see them in action. My conception is that they are better tailored to something small (a unit) with well-defined ins and outs, which is a bit counter to the current design of the HLSL front end so far: a global database (tree, etc.) being incrementally grown by mutually recursive functions, where their ins and outs beyond their global effect are not the main point.

Note that with the speed increase, file-based tests can be much smaller and more focused on a single thing. (I suspect the unstable SSA id deltas in SPIR-V are what caused the biggest reaction that the tests weren't optimal, but historically, the tests were either stable tree-based ones or were for getting a list of semantic errors out.)

And it seems to me that we actually only need three such enum classes (Source, Semantics, Target)

That probably locks you into needing the script-based runtests even longer into the future, as all the things to test, like linking, multi-threading, functional memory-leak testing, etc., are all orthogonal modes. The current gtest setup worked by locking down all the orthogonalities to just a few choices, making it much more limited than the full set of orthogonal command-line options.

I'm okay with that though.

Theoretically, emulating a mask with a combinatorial explosion of values, no matter how much better on any other measure, misses the point of the very reason something was a mask instead of a combinatorial explosion in the first place. Note that if all some tests need is the semantic errors, you are missing a NeitherAstNorSpirv mode. On the semantics front, testing for RelaxedErrors and Warnings should be orthogonal to all the other semantics.

For today, adding an HLSL semantic and a BothAstAndSpirv gets by, so that's fine. Just note it's not a scalable design.
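To make the scalability point concrete, here is a hedged comparison using the hypothetical names from this discussion: folding two independent outputs into one enum class already takes four enumerators, and each additional orthogonal mode doubles the count, whereas a mask needs only one bit per mode.

// Enum-class style: every combination of orthogonal choices needs its
// own enumerator, so N independent modes require 2^N values.
enum class TestTarget { NeitherAstNorSpirv, AstOnly, SpirvOnly, BothAstAndSpirv };

// Mask style: one bit per mode; any combination is a bitwise OR.
enum TestTargetBits : unsigned int {
    ETestTargetAst   = 0x1,
    ETestTargetSpirv = 0x2,
};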

johnkslang commented 8 years ago

Maybe as a good example for discussion, see e5f29393da1c624e22b7bde9b51d5f3678d73859 and pick the file-based test hlsl.precedence2.frag. It shows the tree difference for opposing precedence chains in expressions. I find it very straightforward to have that one line of HLSL code turn into the correct tree. How would that look instead as a unit test?

antiagainst commented 8 years ago

My conception is that they are better tailored to something small (a unit) with well-defined ins and outs, which is a bit counter to the current design of the HLSL front end so far: a global database (tree, etc.) being incrementally grown by mutually recursive functions, where their ins and outs beyond their global effect are not the main point.

Agreed. Unit tests are not a silver bullet for everything; they should be used where appropriate. After checking the HLSL component for a while (sorry, I didn't do that before making my previous comment), I find you are absolutely right. The mutation of a central data structure and its intertwining with TPpContext make unit tests difficult to write. Small, feature-based file tests are a nice tradeoff.

That probably locks you into needing the script-based runtests even longer into the future, as all the things to test, like linking, multi-threading, functional memory-leak testing, etc., are all orthogonal modes.

These are the cases I didn't quite foresee earlier; I only thought that the error-reporting level (RelaxedErrors, Warnings) might be another dimension. Now it makes sense to me that the current design doesn't accommodate these cases well. My previous concern with a monolithic bitmask option was that it bundles options from multiple dimensions together (source language, output target, error-reporting control, etc.), which is not quite "clean" and explicit. But yeah, it's more scalable. And since this is test code, the ability to test things should be the primary goal.

For today, adding an HLSL semantic and a BothAstAndSpirv gets by, so that's fine. Just note it's not a scalable design.

Yeah, I agree. It's a temporary solution satisfying current needs. We should switch to the more scalable bitmask approach when the need arises.

I find it very straightforward to have that one line of HLSL code turn into the correct tree. How would that look instead as a unit test?

We actually discussed this issue a little earlier (with @dneto0 & @Qining). A possible way would be to design a tree-based matcher: the matcher specifies what is expected in the generated AST, in a form like node.NameIs(...).LeftChild(...).RightChild(...), and we can match that against the result. But it requires careful design, and compared to other tasks, it may not be the most fruitful or urgent one to work on.
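A hedged sketch of what such a matcher might look like; every class and method name below is invented for illustration and does not exist in glslang today:

#include <memory>
#include <string>
#include <utility>

// Hypothetical fluent matcher over the AST: each call narrows the
// expectation, and a Matches() walk would compare it to the real tree.
class NodeMatcher {
public:
    NodeMatcher& NameIs(std::string name) { name_ = std::move(name); return *this; }
    NodeMatcher& LeftChild(const NodeMatcher& child) {
        left_ = std::make_shared<NodeMatcher>(child); return *this;
    }
    NodeMatcher& RightChild(const NodeMatcher& child) {
        right_ = std::make_shared<NodeMatcher>(child); return *this;
    }
    // bool Matches(const TIntermNode* root) const;  // walk and compare

private:
    std::string name_;
    std::shared_ptr<NodeMatcher> left_, right_;
};

// A precedence unit test could then assert the tree shape directly, e.g.:
// EXPECT_TRUE(NodeMatcher().NameIs("add")
//                 .LeftChild(NodeMatcher().NameIs("multiply"))
//                 .RightChild(NodeMatcher().NameIs("c"))
//                 .Matches(root));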

johnkslang commented 8 years ago

The current testing situation is working well now; closing.